Anthropic draws fire from White House with AI warnings

Anthropic has been a rare voice within the artificial intelligence (AI) industry that warns about the negative aspects of the technology it develops and supports regulation – a stance that has recently angered the Trump administration and its allies in Silicon Valley.

While the AI company has sought to outline areas of alignment with the administration, White House officials who favor a more pragmatic approach to AI have dismissed the company’s calls for caution.

“If you have a prominent member of the industry come out and say, ‘Not so much. It’s OK that we get regulated. We need to figure this out at some point,’ then it makes everyone in the industry look selfish,” said Kirsten Martin, dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.

“The best thing that can happen for the industry is for the story to be that everyone in the industry is in line,” she said.

This tension became clear earlier this month when Anthropic co-founder Jack Clark shared a recent speech on “technological optimism and justified fear.” He offered the analogy of a child sitting in a dark room, afraid of mysterious figures that, once the light is turned on, turn out to be innocuous objects.

“Now, in the year 2025, we are the child in that story, and the room is our planet,” he said. “But when we turn on the light, we find ourselves looking at real creatures: the powerful and somewhat unpredictable AI systems of today and tomorrow.”

Clark continued: “And there are a lot of people who want to believe that these creatures are nothing more than a pile of clothes placed on a chair, or a bookshelf, or a lampshade. And they want us to turn off the lights and go back to sleep.”

Clark’s comments were met with a sharp rebuke from White House AI and crypto czar David Sacks, who accused Anthropic of “running a sophisticated regulatory capture strategy based on fear-mongering” and “fostering the state regulatory frenzy that is harming the startup ecosystem.”

He was joined by allies like venture capitalist Marc Andreessen, who responded to the post on the social platform X with “the truth.” Sunny Madra, chief operating officer and president of AI chip startup Groq, also suggested that “one company is creating chaos for the entire industry.”

Sriram Krishnan, senior White House policy adviser for AI, criticized the reaction to Sacks’ post from the AI safety community and argued that the country should instead focus on competing with China.

Sacks later doubled down on his disappointment with Anthropic, alleging that it has been the company’s “government affairs and media strategy to consistently position itself as an enemy of the Trump administration.”

He pointed to past comments by Anthropic CEO Dario Amodei, in which he reportedly criticized President Trump, as well as op-eds that Sacks described as “attacking” the president’s tax and spending bill, Middle East deals and chip export policies.

“This is a free country, and Anthropic is welcome to its views,” Sacks said. “Oppose us all you want. We are the party that supports free speech and open debate.”

Amodei responded last week to what he called a “recent increase in inaccurate claims about Anthropic’s policy stances,” arguing that the AI firm and the administration largely agree on AI policy.

“I fully believe that leaders at Anthropic, in the administration, and across the political spectrum want the same thing: ensuring that powerful AI technologies benefit the American people and that America maintains its lead in AI development,” he wrote in a blog post.

He cited the $200 million Defense Department contract Anthropic received earlier this year, in addition to the company’s support for Trump’s AI action plan and other AI-related initiatives.

Amodei also acknowledged that the company “respectfully disagrees” with a provision in Trump’s tax cut and spending megabill that called for a 10-year moratorium on state AI legislation.

In a New York Times op-ed in June, he described this impulse as “understandable,” but argued that the pause was “too blunt” amid the rapid development of AI, stressing that there was “no clear plan” at the federal level. The provision was ultimately removed from the bill by a 99–1 vote in the Senate.

He cited similar concerns about the lack of movement on federal AI regulation in explaining the company’s decision to support California Senate Bill 53, a state measure that requires AI firms to disclose safety information. The bill was signed late last month by California Gov. Gavin Newsom (D).

“Anthropic is committed to constructive engagement on matters of public policy,” Amodei said. “When we agree, we say so. When we disagree, we propose an alternative for consideration. We do this because we are a public benefit corporation whose mission is to ensure that AI benefits everyone, and because we want to maintain America’s lead in AI.”

The recent spat with administration officials underscores Anthropic’s unique approach to AI in the current environment. Amodei, Clark, and several other former OpenAI employees founded the AI lab in 2021 with a focus on safety, which remains central to the company and its policy positions.

“Its reputation and its brand are about risk appetite,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.

Kreps said this has set Anthropic apart amid a growing shift toward an accelerated approach to AI, both inside and outside the industry.

“Anthropic’s approach has been fairly consistent,” she said. “In some ways, what has changed is the rest of the world, and [that] includes the U.S., which is bullish on AI, and a change in the White House, where the message is toward acceleration rather than regulation.”

In a change from its predecessor, the Trump administration has placed a heavy emphasis on eliminating regulations that it believes could stifle innovation and cause the US to fall behind China in the AI race.

This has created tensions with states, particularly California, which have sought to pass new AI rules that could lead the way for the rest of the country.

“I don’t think there’s anything right or wrong about it. It’s just a level of risk aversion and risk acceptance,” Kreps said. “If you’re in Europe, it’s much more risk-averse. If you’re in the U.S. two years ago, it’s more risk-averse. And now, it’s just an outlook that accepts a little more risk.”
