Anthropic Faces Challenges From Pentagon Requirements

I have been critical of Anthropic before. The company rose quickly and is run primarily by founders who do not come from a conventional business leadership background, and it is governed with a strong spirit of ethics and stewardship.

While I have found their messaging to be a bit unprofessional, speculative, and over the top at times, there’s no doubt that it’s honorable for a company to put its own ethics above lucrative business deals. As so many large corporations support ICE actions and government overreach, it’s nice to see a company that is still willing to stand up and do the right thing.

When it comes to military use of technology, the ethics get dicey fast. Is it okay to use technology in purely defensive roles? What if it is offensive, but in a justified conflict? Is it okay if it results in more deaths on the other side? What if a weapon is powerful but its impact depends on how it is used? Should our commanders be trusted to use technology ethically? Is it patriotic to provide tech to the military, because it may save our servicepeople?

These are not easy questions, and companies grapple with them all the time. Some companies are card-carrying defense contractors, and that’s just what they do. But war is an unusual situation: the aim is to kill people and cause maximum destruction. That’s at odds with most corporate ethics.

This has caused a major conflict for Anthropic. Like many companies, it understandably does not want its tech used to harm others. But as with many others, the operational consequences become more complex. In this case, that has caused major problems with the Pentagon.

Via CNN:

Anthropic rejects latest Pentagon offer: ‘We cannot in good conscience accede to their request’

Anthropic is rejecting the Pentagon’s latest offer to change their contract, saying the changes do not satisfy the company’s concerns that AI could be used for mass surveillance or in fully autonomous weapons.

The Pentagon and Anthropic are at odds over restrictions the company places on the use of Claude, the first AI system to be used in the military’s classified network.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that if Anthropic does not allow its AI model to be used “for all lawful purposes,” the Pentagon would cancel Anthropic’s $200 million contract. In addition to the contract cancellation, Anthropic would be deemed a “supply chain risk,” a classification normally reserved for companies connected to foreign adversaries, Pentagon officials said.

Anthropic said in a statement that the Pentagon’s new language was framed as a compromise but “was paired with legalese that would allow those safeguards to be disregarded at will.”

In a lengthy blog post on Thursday, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”

Amodei said Anthropic understands that the Pentagon, “not private companies, makes military decisions.” But “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He also said use cases like mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”

Anthropic’s two exceptions have not slowed “adoption and use of our models within our armed forces to date,” Amodei added.

I have to say that I admire Anthropic’s commitment to principle, but they will likely end up having to make harder compromises. While it may seem simple to limit the use of a technology, AI has very broad use cases, so if the military uses it at all, it will likely be used in diverse applications. There’s also the potential that it could be used by defense contractors, or deployed in top-secret settings where Anthropic has little say.

Google already found this out the hard way; it eventually softened internal requirements that kept its technology out of military applications. $200 million in contracts might sound like a lot, but Anthropic can easily walk away from that. The bigger question is future opportunities, which could be worth many billions. It is also likely that this question will continue to dog them.

Importantly, while it’s understandable that a company would not want to get involved in waging wars or causing harm, the fact is that AI is not magic and won’t change the battlefield that much. People have become afraid of the idea of killer drone swarms, robots marching around and making life-and-death decisions. The phrase “AI decides who gets to live and who dies” has been making the rounds. In reality, that’s not how modern wars are waged. Autonomous weapons already exist, and the military understands that they can be especially dangerous in edge cases where friendly fire can happen.

AI won’t change that. There will still be land mines, sea mines, fire-and-forget missiles, and drones that operate with varying levels of autonomy. What is more likely is that AI will continue to automate decision making and will be used to process vast amounts of data. For the military, that is really where AI shines: not in controlling weapons but in processing diverse battlefield data in real time. The downside is that it allows forces to track individuals, monitor private communications, and surveil whole groups in ways that were not previously possible.

Is that use ethical? Is military use ever okay? Should companies restrict the circumstances in which their technology is used? That’s something every company needs to decide for itself. Anthropic is going through one of the first struggles a maturing company needs to face. It won’t be the last.

One thing that should be noted is that the Pentagon has been extremely aggressive with their demands. Normally, they’d only risk losing contracts, but Defense Secretary Hegseth has taken things further.

Via Politico:

‘Incoherent’: Hegseth’s Anthropic ultimatum confounds AI policymakers 

Defense Secretary Pete Hegseth’s ultimatum to the artificial intelligence startup Anthropic is sparking shock and confusion among lawyers and AI policymakers, who accuse the Pentagon of making contradictory threats as it pressures the company to lift restrictions on the use of its powerful AI model.

Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning: give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.

But Hegseth simultaneously threatened to invoke the Cold War-era Defense Production Act to compel Anthropic to work with the Defense Department and nix the company’s ethical red lines, which include restrictions on using Claude to surveil U.S. citizens or empower autonomous weapons. The government used the law during the Covid-19 pandemic to accelerate production of medical supplies and vaccines.

Dean Ball, a former AI adviser in the Trump administration, said the Pentagon is contradicting itself by forcing Anthropic to cooperate with the government even as it moves to label the company a security risk.

This may come down to the fact that AI is seen as a panacea, a nearly magic technology with which the military can win wars. However, to be fair, Anthropic does produce unique and capable technology that is certainly valuable in military applications. There has also been a strong strategic tone around AI as a decisive advantage for nation-states.

In light of this, it’s perhaps not surprising that such escalations have happened. This is a perfect storm for conflict: idealistic new companies, a powerful technology, ideas of strategic dominance, and concerns over other countries getting ahead. AI companies may not have intended it, but they have walked into a difficult political and philosophical battle, one that will only get worse in the years to come.
