posted by hubie on Monday March 02, @10:41AM   Printer-friendly

Trump Bans Anthropic AI From Federal Agencies After Firm Refuses to Unlock Capabilities

Anthropic cites risks of autonomous military applications, mass domestic surveillance:

President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27, posting the directive to Truth Social at 3:47 PM ET — more than an hour before the Pentagon's own 5:01 PM ET deadline for Anthropic to comply with its demands.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS,” Mr. Trump fumed on Truth Social, adding that he is directing every U.S. federal agency to “IMMEDIATELY CEASE all use of Anthropic's technology.”

[...] After months of private talks collapsed into a public standoff this week, Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" — a label typically reserved for companies from adversarial nations such as Huawei.

[...] Claude was the only AI model approved for use in classified military systems, and defense software firm Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly. OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical “red lines,” complicating OpenAI's candidacy as a direct replacement.

Also see:
    • Trump Slams Anthropic as 'Woke,' Orders Feds to Stop Using Claude AI
    • Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons

Meanwhile:

OpenAI to work with Pentagon after Anthropic dropped by Trump over company's ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance:

OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company's main competitors.

Sam Altman, OpenAI's CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI's agreement with the government included assurances that it would not be used to those ends.

[...] If OpenAI's deal does prohibit its systems from being used for unethical ends, it would appear the company has succeeded in receiving assurances where Anthropic could not. Altman announced the deal with the government shortly after the president said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology.

[...] It remains to be seen how OpenAI staff respond to the government deal. In its battle with the Trump administration, Anthropic has drawn support from its most fierce rivals. Nearly 500 OpenAI and Google employees signed on to an open letter saying "we will not be divided".

"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter reads. "They're trying to divide each company with fear that the other will give in."


Original Submission #1 | Original Submission #2

  • (Score: 5, Touché) by PiMuNu on Monday March 02, @11:58AM (9 children)

    by PiMuNu (3823) on Monday March 02, @11:58AM (#1435396)

    > THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT

    Translation:
    Anthropic has not donated enough to Trump campaign funds

    • (Score: 3, Funny) by Anonymous Coward on Monday March 02, @01:20PM

      by Anonymous Coward on Monday March 02, @01:20PM (#1435399)

      Re-translation: Little TACO throws a tantrum.

    • (Score: 3, Interesting) by OrugTor on Monday March 02, @02:50PM (6 children)

      by OrugTor (5147) Subscriber Badge on Monday March 02, @02:50PM (#1435408)

      It makes me wonder - how come the Religious Right has not come up with a bible-based AI? It would be a shoo-in for government contracts.

      • (Score: 3, Interesting) by Freeman on Monday March 02, @03:02PM (3 children)

        by Freeman (732) on Monday March 02, @03:02PM (#1435410) Journal

Why would you even want that? Assuming you're saying that the "Religious Right" is conservative, why would you think a conservative religious group would want to go all in on an "AI" that has proven to be inconsistent and potentially harmful? You may think that people termed the "Religious Right" are stupid for various reasons. That doesn't make them actually stupid. Recently I watched "The Most Reluctant Convert: The Untold Story of C.S. Lewis". It's quite an interesting overview of C.S. Lewis's early life and apparently "most reluctant" conversion. I'd never really delved into his life or read a biography about him, so it was a rather interesting watch. It's currently streaming on Amazon.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 5, Insightful) by mcgrew on Monday March 02, @04:03PM (1 child)

          by mcgrew (701) <publish@mcgrewbooks.com> on Monday March 02, @04:03PM (#1435421) Homepage Journal

The "religious right" is an American thing. They most certainly do NOT follow Jesus! They're against food assistance; Jesus gave food away. They're against taxes; Jesus said "render unto Caesar that which is Caesar's". They're against universal health care; Jesus healed the sick, lame, and blind for free. And he was executed for being "woke" ("woke" means "sacrilegious" to the right) by the religious conservatives.

          "Religious right" is a lie. The Christian Bible says that Satan is the father of lies. Ahem, Trump's lies...

          --
          Why do the mainstream media act as if Donald Trump isn't a pathological liar with dozens of felony fraud convictions?
        • (Score: 2) by Freeman on Tuesday March 03, @06:36PM

          by Freeman (732) on Tuesday March 03, @06:36PM (#1435595) Journal

I stumbled across the following while researching something else. Indonesian AI Bible something or other: https://chatgpt.com/g/g-QjHkF2IEk-alkitab-gpt-ai-bible [chatgpt.com] Apparently somebody's already been there, done that, and has the t-shirt. Not that I have any idea what it's supposed to be for, since the site I found it on is Indonesian.

          --
          Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Interesting) by krishnoid on Monday March 02, @06:51PM (1 child)

        by krishnoid (1156) on Monday March 02, @06:51PM (#1435457)

        A "tooth for a tooth, eye for an eye"-based AI, or a "turn the other cheek"-based one? Or a "don't wear clothes of two different fabrics"-based one? Or maybe a digital twin [fountainmagazine.com] of Jesus?

        • (Score: 2) by Freeman on Monday March 02, @07:10PM

          by Freeman (732) on Monday March 02, @07:10PM (#1435459) Journal

It could be interesting to see what an "AI" trained on the bible would spout. Maybe it would just start suggesting people call down fire upon the evildoers? Perhaps suggest cutting themselves, as Elijah taunted the prophets of Baal to do? I'm pretty sure it would end up more of an agent of chaos than a legitimately useful, biblically sound tool. It might start suggesting that fig leaves are the best covering.

          --
          Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 4, Insightful) by Spamalope on Monday March 02, @03:46PM

      by Spamalope (5233) on Monday March 02, @03:46PM (#1435418) Homepage

      Gov't users working on internal plans won't/cannot agree to 3rd party corpo oversight.

      Corpo virtue signals with corpo speak
      Politician virtue signals with political blather

      Shall I put on my 'I'm shocked' face, or wait for news @ 9?

Vaguely more seriously: Gov't factions, most likely in the State Dept., are split on support for AI companies due to internal alliances. One got enough control to flip the contractor. We only know about it 'cause AI is getting clicks, so this gets coverage.

  • (Score: 5, Interesting) by c0lo on Monday March 02, @02:02PM

    by c0lo (156) on Monday March 02, @02:02PM (#1435402) Journal

    Hegseth struggles with extra $500B/y [truthout.org]

    Pentagon officials are reportedly struggling to devise a plan to spend the extra $500 billion that US President Donald Trump wants to give the bloated, fraud-ridden agency in the next fiscal year, vindicating criticism of the funding proposal as immensely wasteful.

    The Washington Post reported over the weekend that “White House aides and defense officials have run into logistical challenges surrounding where to put the money, because the amount is so large.” The extra $500 billion, endorsed by the top Republican on the House Armed Services Committee, would push annual US military spending to a staggering $1.5 trillion after the Trump administration and congressional Republicans enacted unprecedented cuts to federal nutrition assistance and Medicaid last summer.

    --
    https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 2) by Username on Monday March 02, @03:01PM (6 children)

    by Username (4557) on Monday March 02, @03:01PM (#1435409)

    >technology would not be used for mass surveillance –

I think any large country needs an intel agency. Not really sure how you can spy on other countries without it being mass surveillance.

    > for autonomous weapons systems that can kill people without human input

The target being human is an input. Seems like a silly condition. So it can only be used to kill people, not machines?

    • (Score: 3, Interesting) by JoeMerchant on Monday March 02, @03:27PM (2 children)

      by JoeMerchant (3937) on Monday March 02, @03:27PM (#1435414)

      > autonomous weapons systems that can kill people without human input

      Gun turrets have had radar tracking since there was radar tracking to have... this is just another form of targeting assistance. Heat seeker and later generations of fire-and-forget tracking missiles are "autonomous weapons systems" which kill people in their targets without human input after they are launched. How long can the interval be between launching and killing before it's a problem? A bullet, once fired, is an autonomous weapon that later kills without further human input.

When you're flying a drone on the other side of the world from Nellis and you "pull the trigger", you want the drone to have some onboard targeting capability to stay on the target you previously designated.

      --
      🌻🌻🌻🌻 [google.com]
      • (Score: 2) by aafcac on Tuesday March 03, @10:53PM (1 child)

        by aafcac (17646) on Tuesday March 03, @10:53PM (#1435619)

        I think most people would be somewhat OK with that, provided they don't think they'll be on the other side. At least it helps cut down on the stray shots that wind up hitting unintended targets. But, there is an increasingly small gap between that and outright autonomous weapons.

        • (Score: 2) by JoeMerchant on Wednesday March 04, @01:21AM

          by JoeMerchant (3937) on Wednesday March 04, @01:21AM (#1435628)

          >outright autonomous weapons.

I know that RoboCop's robot nemesis is wildly unpopular, and that's the outright autonomous weapon we don't want. But really, you control those by not letting them out; once you let them out on the street, you know they'll be screwing up and killing the wrong people sooner or later.

          Like ICE in Minneapolis.

          There's a lot to be said for automatic fire control systems reducing friendly fire incidents, but that will never be perfect either. And most of the world just doesn't "grok" friendly fire.

          --
          🌻🌻🌻🌻 [google.com]
    • (Score: 2) by mrpg on Monday March 02, @04:06PM

      by mrpg (5708) <mrpgNO@SPAMsoylentnews.org> on Monday March 02, @04:06PM (#1435423) Homepage

The idea is to kill some people, not just anyone.

    • (Score: 3, Informative) by Anonymous Coward on Monday March 02, @06:28PM (1 child)

      by Anonymous Coward on Monday March 02, @06:28PM (#1435454)

      "how you can spy on other countries and it not being mass surveillance."

      Because in the Before Times there were laws forbidding NSA from being used for domestic spying.

      And other countries did it for us to get around that little problem.

      • (Score: 4, Informative) by canopic jug on Tuesday March 03, @05:23AM

        by canopic jug (3949) on Tuesday March 03, @05:23AM (#1435515) Journal

        Because in the Before Times there were laws forbidding NSA from being used for domestic spying.

        Those were strict laws and were actually enforced ... for a while.

        The relevant whistleblowers there are Bill Binney and Thomas Drake. Content covering them, their actions, and especially their motives, has been more or less erased from YouTube. You can find decent written summaries about why they are censored and persecuted at the few higher quality sites which do cover government whistleblowers.

        --
        Money is not free speech. Elections should not be auctions.
  • (Score: 5, Insightful) by mcgrew on Monday March 02, @04:06PM (1 child)

    by mcgrew (701) <publish@mcgrewbooks.com> on Monday March 02, @04:06PM (#1435422) Homepage Journal

    Trump wants Skynet. Anthropic won't build it for them, so they're pissed.

    --
    Why do the mainstream media act as if Donald Trump isn't a pathological liar with dozens of felony fraud convictions?
    • (Score: 1, Touché) by Anonymous Coward on Tuesday March 03, @12:59AM

      by Anonymous Coward on Tuesday March 03, @12:59AM (#1435496)

      This is presumably with guardrails:
      https://www.theregister.com/2026/02/25/ai_models_nuclear/ [theregister.com]

      Google's Gemini 3 Flash, Anthropic's Claude Sonnet 4, and OpenAI's GPT-5.2 repeatedly escalated to nuclear use in a series of crisis simulations. That may seem like the most shocking conclusion of King's College London Professor Kenneth Payne's recent work, but it's not. Far more striking is why the models talked themselves into destroying the world, which was what Payne set up his study to learn.

      "I wanted to see what my AI leaders thought about their enemy ... so I designed a simulation to explore exactly that," Payne wrote in a recent blog post describing his project and its outcome.
