We Need To Take AI Doomerism Seriously

No, not the actual belief. The idea that AI will become a cartoon supervillain and wipe out humanity is as idiotic as it sounds. The danger is that the belief is gaining credibility, being taken seriously and acting as a dangerous red herring. The grifter economy of AI doom is a self-serving scam, but the consequences are real.

I personally find AI Doom to be more than a nuisance. Most do not realize this, but when you peel back the curtain, the movement is based on a strong cult-like belief and has spawned some extremely disturbing rhetoric. There have been threats against AI labs and ridiculous proposals for legislation to pause or stop AI research. All kinds of outlandish claims are being made, and they are getting media attention. Almost nobody seems to be aware of the true nature of this idea.

Most AI experts have disengaged from the nonsense of AI doom. After all, it's not an interesting debate, and nobody wants their area of expertise stepped on by people who don't know what they're talking about. However, this disengagement is dangerous. Doom movements have grown around other technologies: nanotechnology, vaccines, genetic engineering, nuclear energy and others. What we know is that these movements, unhinged and unsupported though they may be, do not go away, and they frequently lead to bad legislation and major problems for industries that do not fight back.

AI doom is an especially pervasive threat, and it is receiving mainstream legitimacy and attention, which should be seen as a major problem in and of itself.

What is AI Doom

AI Doom is the basic idea that there is some kind of unique and existential threat to humanity, based on the notion that AI systems and ML models might become “super intelligent” and therefore impossible to stop from causing harm. This is predicated on the idea that intelligence is a scalar that can be increased through more compute, and that the resulting intelligence would somehow achieve autonomy and become either hostile to humans or motivated to eliminate humans to gain more resources.

It’s not an entirely new idea. It’s been a trope in science fiction for some time. It plays on universal fears: not understanding technology, being replaced, dehumanization, along with decades of science-fiction conditioning that makes such an outcome feel plausible.

Doom literally refers to an “end of the world” scenario, but there are other doom adjacent beliefs and claims, such as the idea that AI will lead to a permanent dystopian society where employment is impossible and power is consolidated or that AI may enslave humanity.

Importantly, while there are absolutely risks of a variety of types that are associated with AI adoption, the idea of a species-level risk from the technology gaining self-motivation and setting its own goals is not plausible at all.

Continue reading

Anthropic Faces Challenges From Pentagon Requirements

I have been critical of Anthropic before. The company rose quickly and is run primarily by founders who do not come from a conventional business leadership background, and it is governed with a strong spirit of ethics and stewardship.

While I have found their message to be a bit unprofessional, speculative and over the top at times, there’s no doubt that it’s honorable for a company to put its own ethics above lucrative business deals. As so many large corporations support ICE actions and government overreach, it’s nice to see a company that still is willing to stand up and do the right thing.

When it comes to the use of technology by militaries, the ethics get dicey fast. Is it okay to use technology for purely defensive roles? What if it is offensive, but in a justified conflict? Is it okay if it results in more deaths on the other side? What if a weapon is powerful but its impacts depend on how it is used? Should our commanders be trusted to use technology ethically? Is it patriotic to provide tech to the military, because it may save our servicepeople?

These are not easy questions, and companies grapple with them all the time. Some companies are card-carrying defense contractors, and that’s just what they do. But war is an unusual situation: The aim is to kill people and cause maximum destruction. That’s at odds with most corporate ethics.

Continue reading

New YouTube Channel and Focus on AI

Artificial Intelligence has more mysticism than just about any other subject out there. I’ve never seen any subject so poorly understood and so sensationalized. It’s a technology that everyone seems to realize is big, revolutionary and important. But that’s only resulted in a huge amount of mythology.

Few people understand AI from a technical perspective, but just about everyone *thinks* they understand it, because it seems so intuitive. It seems like you can just talk to it and it understands, so the implications are obvious, right?

Right now the world has a deficiency of AI experts who understand the tech, and even fewer of them are mature risk managers. That has resulted in a heavy skew toward sensationalism: most AI leaders are not even knowledgeable about the technology, and the media rewards high-drama narratives. The skew takes a few forms. One of the most ridiculous messages is AI doomerism, the idea that AI might wipe out humanity; it’s cartoonish, but it receives more attention than it should. There are also claims of permanent unemployment. On the other end is AI utopianism. And there are those insisting AI might become conscious, or a moral patient. Yes, this is also being taken seriously.

It’s really a subject that attracts all kinds. But few people realize that like any technology, AI and ML have fundamental limits and capabilities. They’re not magic. But the recent AI summit in India would have you think otherwise, with ubiquitous claims of being close to superintelligence.

And so, as one of the few AI technical experts willing to address this problem, I have launched a new YouTube channel and will be focusing primarily on this topic. AI risks, mitigations, technology and truth: AI Sanity, on YouTube.

Where We Really Stand In AI Capabilities

The recent talk of AGI, as if it is some kind of impending certainty, and now talk of “superintelligence,” is causing a great deal of confusion. The reality is that we are nowhere near human-level intelligence in all domains; artificial superintelligence is entirely speculative and nowhere near foreseeable capabilities; and you cannot scale past the limits of current AI systems. The truth has been lost in a sea of sensational rhetoric.

The modern public discourse around artificial intelligence began with a fundamental shift in frame of reference. For decades, AI systems were narrow, technical, and largely invisible to the general public. Then, quite suddenly, natural language processing systems emerged with startling fluency. For the first time, people could interact with a machine through conversational language that resembled human dialogue.

This single development reset public intuition overnight.

Instead of being understood as statistical systems operating within defined computational constraints, large language models were immediately interpreted through the lens of science fiction archetypes: conversational minds, digital assistants, synthetic intellects. The resemblance in surface behavior was compelling enough to override the underlying reality of how these systems actually function.

But fluency is not cognition. Simulation of reasoning is not reasoning itself.

Continue reading

A Risk-Oriented Hierarchy of Intervention in the Deployment and Customization of Large Language Models

A practical discussion of the levels of risk and complexity in the customization of large language models. Many organizations are using LLM technology to build customized chatbots, RAG tools and content generators. However, many do not have a full understanding of the options, or of the levels of risk and development complexity that come with LLM customization and deployment.

In the contemporary landscape of artificial intelligence deployment, a structural shift is occurring: base models are becoming increasingly capable out of the box. Instruction-following performance, contextual reasoning, retrieval integration, and domain adaptability have improved to such a degree that many historical justifications for invasive model modification are steadily eroding. This evolution necessitates a corresponding philosophical and governance framework—one grounded in the principle that greater customization introduces greater uncertainty, greater liability, and a proportionally greater need for validation and risk controls.

At its core, the responsible deployment of large language models should be guided by a hierarchy of invasiveness. Each successive layer of intervention introduces deeper system coupling, increased behavioral unpredictability, and escalating regulatory, operational, and reputational risk. Accordingly, risk management should not begin at the level of model alteration, but rather at the least invasive layers of interaction and configuration.
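To make the hierarchy concrete, here is a minimal sketch in Python. The tier names, ordering and specific controls are illustrative assumptions, not taken from the article; the point it demonstrates is the article's principle that each deeper layer of intervention inherits all the controls of the shallower layers and adds more of its own.

```python
# Illustrative sketch only: an ordered hierarchy of invasiveness for LLM
# customization, where more invasive tiers require strictly more controls.
# Tier names and control lists are assumptions for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Tier:
    name: str
    invasiveness: int  # 1 = least invasive


TIERS = [
    Tier("prompt and configuration", 1),
    Tier("retrieval augmentation (RAG)", 2),
    Tier("parameter-efficient fine-tuning", 3),
    Tier("full fine-tuning / model alteration", 4),
]


def required_controls(tier: Tier) -> list[str]:
    """Each deeper tier inherits all controls of shallower tiers."""
    controls = ["output monitoring", "usage policy"]  # baseline for any deployment
    if tier.invasiveness >= 2:
        controls.append("source and data vetting")  # retrieval introduces external content
    if tier.invasiveness >= 3:
        controls.append("behavioral regression testing")  # weights now differ from the base model
    if tier.invasiveness >= 4:
        controls.append("full pre-deployment validation and sign-off")
    return controls
```

The design choice worth noting is that the control set is cumulative rather than per-tier, mirroring the article's claim that risk management should begin at the least invasive layers and only expand as the intervention deepens.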

Continue reading

The Narrative About AI Triggered Job Loss is Speculative and Irresponsible

We are seeing an increased public narrative about the potential for job losses from AI deployment. These claims receive a great deal of media attention and are rewarded in the social media landscape for being as pessimistic as possible. Mass job loss remains highly speculative, and many of the claims skew toward the implausible. Yet the narrative is causing real harm.

The increasingly popular narrative of inevitable, catastrophic, long-term job loss due to artificial intelligence is not grounded in robust empirical evidence. It is overwhelmingly speculative, framed in worst-case abstractions, and presented to the public with a level of certainty that far exceeds what the data justifies. That alone would be intellectually questionable. But the deeper issue is ethical: the psychological and social harm caused by repeatedly presenting extreme scenarios as near-certainties.

There is a very real human cost to this discourse. People are not reading these forecasts as academic hypotheticals. They are internalizing them as personal futures. Students reconsider career paths. Mid-career professionals experience anxiety and loss of motivation. Workers in already uncertain labor markets feel prematurely obsolete. This is not a trivial side effect. It is a measurable psychological burden placed on millions of people based on projections that remain deeply uncertain and, in many cases, methodologically weak.

Serious economic forecasting requires discipline, historical grounding, and humility about technological diffusion. What we are instead seeing in many public conversations is a pattern of extrapolation from capability demos directly to labor market collapse, skipping entirely over the realities of workflow integration, governance constraints, liability frameworks, organizational inertia, and economic adaptation. That is not analysis. That is narrative acceleration.

Continue reading

Update on Drone Hysteria With Video

This truly appears to be primarily and perhaps completely caused by mass hysteria and not any actual drone swarm of any kind. There remains the possibility that there were unauthorized drones in sensitive areas, but that does not appear to account for most of the reports.

After spending hours looking for any videos of the supposed drones in New Jersey and elsewhere, I was surprised to find that the overwhelming majority of the sightings appear to be clear, unambiguous and unmistakable examples of civil and commercial aircraft.


This reminds me very much of the Battle of Los Angeles, which was not actually a real battle but rather an example of similar mass hysteria. We are seeing hallmarks of previous flying-phenomena hysteria, including a mushrooming number of reports and increasing drama as more and more people become convinced that drones have crashed, are attacking, or something else.

Could Drones Over New Jersey Be a Case of Mass Hysteria?

It’s far from certain, and there are a few cases that appear to be legitimate drone sightings, but a large number also appear to be civilian aircraft or other mistakes. At least some, though perhaps not all, of the reports are a case of panic.

If you have not been living under a rock, you are probably aware that people around New Jersey, and now elsewhere, are up in arms over reported sightings of drones. Drone sightings are not at all unusual in 2024, but these include reports of drones over sensitive military facilities and critical infrastructure, such as reservoirs and power plants. The reports started coming in around November 13th and have grown more extreme as time has gone on.

At present, a number of elected officials, including mayors, the governor and police chiefs, have voiced concern. A great deal of drama is now under way as officials demand answers from the FBI, the military and others. Many are calling for the drones to be shot down.

The problem is that we still don’t have any answers as to what is happening, and the reports are fragmented and inconsistent. With time, confusion has only increased, and primary evidence and documentation have been lacking.

Now similar reports are being made across the Northeast. At first it was claimed that the drones were “spreading to New York.” Now they are claimed to have been seen across the Northeast and the US in general.

Here is what seems to have been reported:

  • The drones are reportedly only out at night, appearing at dusk and not being seen during the day.
  • Many of the drones have lights on them, in some cases the lights are strobes or other standard hazard and navigational lights.
  • There have been reports of bright lights and drones that are highly visible and not trying to be stealthy.
  • The drones have been reported over restricted areas, such as Trump-owned property, military installations and airports.
  • Air traffic, including a medical helicopter, has had to be diverted due to concerns over drone collisions.
  • Their origin, flight paths and landing locations remain elusive.
  • There are unconfirmed reports of drones switching off lights or otherwise trying to hide when pursued.
  • Many have claimed that the drones are enormous in size, frequently described as the size of an SUV or larger.
  • Reports imply the same drones remain in the sky for hours and travel great distances.

It should be noted that such large and capable drones do exist and are available for purchase. The reports of drones “the size of an SUV” or “8 feet in diameter,” if true, imply that these are not consumer drones but larger, higher-capacity drones. Such drones are used in agriculture, surveying and other professional pursuits. It’s also possible that a large experimental drone could be constructed by hobbyists, as parts and supplies to build large drones are available.

Continue reading

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 3

Due to the length of this detailed topic, it will be broken into multiple parts. Previous portions here:

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 1
An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 2

Technical Approvals in Cybersecurity: A Missing Pillar of Risk Management

In traditional industries, technical approval processes are a vital part of ensuring safety and reliability. For example, companies often pay to have their devices tested and approved by organizations like UL, which rigorously test products to ensure safety and reliability. Safety-critical devices—such as fire alarms, fire pumps, and safety doors—require approval before being used by insured parties, giving insurers the confidence that these devices will perform when needed.

Cybersecurity, however, lacks a similar robust system of technical approvals. Without an established process, standards in cybersecurity are often vague and difficult to enforce. For instance, many standards simply state that an organization must have a “firewall” or use “industry-standard encryption.” These requirements are difficult to enforce because they are vague—what exactly qualifies as an acceptable firewall, and who verifies it? There are many products that could meet these requirements on paper, but without an approval process, there is no consistent or provable standard of quality.

Technical approvals are ultimately a necessary step toward establishing universally high standards, and they are what will eventually end the problem of pervasive third-party risk. They are an unavoidable part of standardizing risk management in technology and reining in losses. Unfortunately, it will be difficult to make great progress in cybersecurity until a robust system of independent testing and approval is established. That system will create the “ecosystem of trust” necessary to enforce security.

A Well-Established and Necessary Process
It is unusual that cybersecurity proceeds without technical approval, but this reflects an outdated mindset in IT, in which buyers assume all risk without warranties or guarantees. Technical approval is a well-established process in many industries, providing independent verification that a product meets specific standards and ensuring accountability. It is no longer the 1980s, and software and IT products are no longer specialty or experimental products, but that mentality still persists.

Continue reading

Just How Bad Are We Doing With Cyber Security? Let’s look at the past week…

So just how bad is ransomware, and cyber security in general? To get an impression, let’s look at the past week. In just the past 7 days, there have been over a dozen major ransomware attacks, though a few have not been well reported in the news media. The fact is, we have fallen for a kind of creeping normality. It’s not normal, and it should not be considered routine, to see this happen.

Starbucks Impacted By Cyber Attack
Stop & Shop Hit By Cyber Incident – May Result In Bare Shelves
Supply Chain Management Vendor Blue Yonder Succumbs to Ransomware
The City of Odessa, TX Experiences a Cyber Incident
Weeks Later, Problems Persist At Hannaford Supermarkets
Wirral University Teaching Hospital Experiences Major Cyber Incident
Retailers Struggle After Attack on Supply Chain Provider Blue Yonder
RRCA Accounts Management Falls Victim to Play Ransomware Attack
Aspen Healthcare Services Announces Data Breach
Zyxel Firewalls Targeted in Recent Ransomware Attacks
Fintech Giant Finastra Investigates Data Breach

Continue reading