• BJW@lemmus.org · edited 2 days ago

      Reminder that this is a disingenuous portrayal of events.

      The reason Anthropic can't supply the US military, or any part of the US government, is that they objected to Claude being used to choose military targets and refused to support how the fascists were using it. They are suing so that the non-military branches of the government can use the technology again, after the fascists retaliated for their refusal to be in bed with fascists.

      • 3abas@lemmy.world · 19 hours ago

        If you’re going to fact-check someone in defense of a corporation, at least check the facts yourself. https://www.anthropic.com/news/where-stand-department-war

        Anthropic absolutely is in bed with fascists. Their objection isn’t about using Claude to identify targets; it is explicitly about letting it engage targets. They are totally fine with their AI identifying a school full of children as a terrorist command base, as long as a human Nazi pushes the “fire” button. They’re well aware the human Nazis aren’t checking the AI’s work, and that the purpose of the AI is to identify targets that lead to heavy casualties so the human Nazis don’t have to manually scan a map and cross-reference it with intel. The point is speed, and they get to say the AI did it when they blow up a school.

        Anthropic is proud to be part of the genocide in Gaza, and wants to be part of future wars and genocides. “Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.” https://www.anthropic.com/news/statement-comments-secretary-war

        And their objection is that their AI isn’t reliable enough not to engage American fighters by accident. They want fully autonomous weapons: “Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.” https://www.anthropic.com/news/statement-department-of-war

        Feel free to believe it’s all about civilians, but they didn’t make a fuss or pull out of using AI for war when it repeatedly identified children as targets; they only object to allowing Claude to also engage them.

        The fascists aren’t upset that Anthropic’s AI won’t let them identify children as targets; they’re upset that it won’t also execute them.

        You’re disingenuously portraying them as refusing to choose targets, which is exactly what they wanted from this whole drama.

        They wanted confusion in the air and people to defend them, because they have their manufactured reputation to protect. They’re not a moral AI company, they just want people to think (and repeat) that they are.