  • Our migration was a mess, and it took a long time. I don’t know how much our contracted company was at fault; they certainly didn’t do a good job. We have Jira extended for everything from time management to billing and staff pay and whatnot.

    I have some CSS hacks to make the cloud version usable, but the DOM is a mess. Only the test id attributes are reasonably stable and descriptive; class and id names are otherwise random.
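
    For illustration, a minimal userscript-style sketch of what such a hack can look like. The test id below is made up, not one of Atlassian’s actual ones; the point is keying styles off the stable data-testid attributes instead of the generated class names.

        // Hypothetical userscript sketch: inject CSS keyed off data-testid,
        // since the generated class names change between deployments.
        // "issue.views.field.description" is an invented test id.
        const style = document.createElement("style");
        style.textContent = `
          [data-testid="issue.views.field.description"] {
            max-width: 100%; /* reclaim the available width */
          }
        `;
        document.head.appendChild(style);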

    Occasionally, something changes. Despite Atlassian having a dedicated maintenance window, and despite all the marketing about predictability and all that positive stuff, things change without warning and without announcement. And you’re left wondering: is my memory getting that bad? Is this new?

    My latest highlight: they converted migrated images in Jira ticket descriptions into some square image control, one you can’t even use for new images. Pasting or dropping an image into the description produces something different. When an image is attached as an attachment, like it was in the past, you can only embed it in the description as a fixed attachment, either as an inline control or as an inline fixed preview control.

    If you have an old description with rectangular screenshots (entirely plausible when you have a widescreen monitor, or simply because we have horizontal space and use it for content), the square control adds a ton of whitespace. Make the image big enough to be readable, and the only thing on your entire screen is the image and dead space, half of the height being dead space.

    There are many annoying and horrendous things.

    Worst of all, we contracted some third party for a custom menu and whatnot, delivered as a browser extension for Jira and Confluence. I have all three of its feature sets disabled because it makes everything even slower, or outright broken.

    It works for the most part, but man, there are so many irritations and annoyances.


  • evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content

    robots.txt - the well-known technology for blocking bad-intention bots /s

    What’s automated about the licensing layer? They didn’t seem clear about it; admittedly, at some point I started skimming the article. Is it that the AI can “automatically” parse it?

    # NOTICE: all crawlers and bots are strictly prohibited from using this 
    # content for AI training without complying with the terms of the RSL 
    # Collective AI royalty license. Any use of this content for AI training 
    # without a license is a violation of our intellectual property rights.
    
    License: https://rslcollective.org/royalty.xml
    

    Yeah, this is as useless as I thought it would be. Nothing here actively blocks anything.

    I love that the XML then points to a text/html website. So nothing for machine parsing; maybe it’s meant for AI parsing.
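
    Not that machine parsing would be hard. A rough sketch of what a crawler could do with the directive, assuming the License line format from the snippet above (the function name and parsing are mine, purely illustrative); note that nothing in it enforces anything, honoring it is entirely the crawler’s choice:

        // Rough sketch: fetch a site's robots.txt and pull out any
        // "License:" directives. Nothing here blocks a non-compliant bot.
        async function licenseUrls(site: string): Promise<string[]> {
          const res = await fetch(new URL("/robots.txt", site));
          const text = await res.text();
          return text
            .split("\n")
            .map((line) => line.match(/^\s*License:\s*(\S+)/i))
            .flatMap((m) => (m ? [m[1]] : []));
        }

        // e.g. licenseUrls("https://example.com").then(console.log);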

    I don’t remember which AI company it was, but they argued they’re not crawlers but agents acting on the user’s behalf for their specific request/action, ignoring robots.txt. Who knows how they will react. But their incentives and their history both point to ignoring robots.txt.

    Why is this comment so negative? Oh well.


    That assumes the salt was also compromised/extracted. Unfortunately, they don’t say, which one could read as it not having been compromised. But they’re not transparently explicit about it.
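
    For context on why I’m skeptical: in a typical setup the salt is stored right next to the hash, because it’s needed to verify the password at login. A generic sketch of that pattern (their actual scheme is undisclosed, so this is purely illustrative):

        import { randomBytes, scryptSync } from "node:crypto";

        // Typical pattern: a random per-user salt, stored alongside the
        // hash; if the hash column leaked, the salt likely leaked with it.
        function hashPassword(password: string) {
          const salt = randomBytes(16);
          const hash = scryptSync(password, salt, 64);
          return { salt: salt.toString("hex"), hash: hash.toString("hex") };
        }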

    I was surprised they didn’t recommend changing passwords elsewhere, too. I would also prefer them to be transparent about how they were vulnerable/attacked.


  • I’m not sure if it was in that article or in another comment section, but someone said

    a small group of people will fight to control the narrative so they can spin it any which way they want.

    Your source for your broad categorization and claims seems incredibly weak. “Someone said, somewhere, I’m not sure where I read it, though.”

    Wikipedia tracks anonymous contributions, too. You could check the Article and Article Discussion pages’ histories before making these claims, and before concluding from one comment that Wikipedia has the same systematic issues as Reddit or other closed-group-moderated platforms.

    As far as I see it, Wikipedia has a different depth and transparency on guidelines, requirements, open discussion, and actions. It has a lot of additional safeguards compared to something like Reddit. Admins are elected, not “first-come”.

    What I find much more plausible than “they didn’t want to accept an anonymous contribution” is that the anonymous contributor may not have adequately sourced their claims and contributions. Even if they did, I find it much more likely that the edit was removed, then discussed on the page’s discussion page, and then added back.

    Of course, instead of theorizing about what happened in that case, I could have checked Wikipedia too. But I also want to make a point about my general, systemic expectation of how Wikipedia works, an expectation other platforms do not meet.


  • What makes you say so?

    They saw potential in Rust for safety and technical guarantees, and started the Servo project. Eventually, they integrated some things into Gecko, and then concluded the Servo project.

    What makes you think they don’t want Gecko anymore? What makes you say they started Servo when it’s a partially integrated and, more importantly, concluded project?


  • The stake will be paid for through $5.7 billion in grants previously awarded to Intel under the 2022 U.S. CHIPS and Science Act, plus $3.2 billion awarded to the company as part of a program called Secure Enclave. It’s a formerly classified initiative that Congress appropriated funds for in 2024 after lobbying by Intel, Politico reported in 2024.

    Including the $2.2 billion in CHIPS grants Intel has received so far, the total investment is $11.1 billion, for a 9.9% stake. Intel is valued at about $108 billion on the stock market.


  • I found the intro hook intriguing, but the reporting starts with a lot of media clips and other run-ups, which eventually made me leave.

    It’s great that they put so much effort into genuine, on-site reporting, but the already long video report feels even more bloated and padded this way.

    I have to wonder if the DMCA takedown was due to the news clips. While they may be fair use for contextualized reporting, I didn’t find them particularly valuable, and the DMCA issues could have been avoided by not using them, or by using fewer of them.


  • for example, “have seen revenues jump from zero to $20 million in a year,” he said. “It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

    Sounds like they were able to sell their AI services. That doesn’t really measure AI success, only product-market success.

    Celebrating a revenue jump from zero, presumably because they did not exist before, is… quite surprising. It’s not like they became more efficient thanks to AI.