Anthropic and the Pentagon Situation

It was just last week that I posted a brief write-up about the situation with Anthropic and the Department of Defense. At the time, it seemed like the worst thing that might happen to Anthropic was a loss of military contracts, but things have escalated. The Pentagon and the Trump administration have ordered government agencies and contractors to discontinue use of Anthropic products.

This is highly unusual and an extremely aggressive move. Anthropic has received a groundswell of public support, and OpenAI has been getting a lot of criticism for stepping in and signing a major contract as soon as Anthropic was excluded.

Anthropic Faces Challenges From Pentagon Requirements

I have been critical of Anthropic before. The company rose quickly, is run primarily by founders who do not come from conventional business leadership backgrounds, and is governed with a strong spirit of ethics and stewardship.

While I have found their messaging to be a bit unprofessional, speculative, and over the top at times, there’s no doubt that it’s honorable for a company to put its own ethics above lucrative business deals. At a time when so many large corporations support ICE actions and government overreach, it’s nice to see a company that is still willing to stand up and do the right thing.

When it comes to military use of technology, the ethics get dicey fast. Is it okay to use technology in purely defensive roles? What if it is used offensively, but in a justified conflict? Is it okay if it results in more deaths on the other side? What if a weapon is powerful but its impact depends on how it is used? Should our commanders be trusted to use technology ethically? Is it patriotic to provide tech to the military because it may save our servicepeople?

These are not easy questions, and companies grapple with them all the time. Some companies are card-carrying defense contractors, and that’s just what they do. But war is an unusual situation: the aim is to kill people and cause maximum destruction. That’s at odds with most corporate ethics.


The Narrative About AI-Triggered Job Loss is Speculative and Irresponsible

We are seeing a growing public narrative about the potential for job losses from AI deployment. These claims receive a great deal of media attention and are rewarded in the social media landscape for being as pessimistic as possible. Mass job loss remains highly speculative, and many of the claims skew toward the highly implausible. But this narrative is causing widespread harm.

The increasingly popular narrative of inevitable, catastrophic, long-term job loss due to artificial intelligence is not grounded in robust empirical evidence. It is overwhelmingly speculative, framed in worst-case abstractions, and presented to the public with a level of certainty that far exceeds what the data justifies. That alone would be intellectually questionable. But the deeper issue is ethical: the psychological and social harm caused by repeatedly presenting extreme scenarios as near-certainties.

There is a very real human cost to this discourse. People are not reading these forecasts as academic hypotheticals. They are internalizing them as personal futures. Students reconsider career paths. Mid-career professionals experience anxiety and loss of motivation. Workers in already uncertain labor markets feel prematurely obsolete. This is not a trivial side effect. It is a measurable psychological burden placed on millions of people based on projections that remain deeply uncertain and, in many cases, methodologically weak.

Serious economic forecasting requires discipline, historical grounding, and humility about technological diffusion. What we are instead seeing in many public conversations is a pattern of extrapolation from capability demos directly to labor market collapse, skipping entirely over the realities of workflow integration, governance constraints, liability frameworks, organizational inertia, and economic adaptation. That is not analysis. That is narrative acceleration.
