First Draft of First Chapter of My Book “Understanding AI”

So I have decided to write a book. Crazy, perhaps. It is something I have wanted to do for years, through many false starts, and I think a lot of people have been through that. It’s not easy, but I knew that. It’s actually harder than you’d ever imagine, at least for me, but then again, this is the first time I have truly committed to the pursuit.

The reason I chose to write this is that AI is the most poorly understood topic I have ever seen, and it has been wrapped in mythology. This is especially true of “AI Doom.” The problem with these beliefs is that they are difficult to refute without teaching a full lesson on AI. So, I guess that’s what I’m going to have to do! I’m not complaining, however. The fact is, there is really no in-depth guide for smart people who want to truly understand artificial intelligence.

I want to be clear that this is a first draft of the first chapter. It will be revised, probably several times. It currently lacks citations, but those will be added. Citations will be minimal because this is not a scholarly work, but it should still have basic source credibility. Additionally, this chapter contains assertions that some might call bold or unsupported. There’s a reason for that: this is just the first chapter.

Subsequent chapters will explore the history of AI as a concept and a technology, including its pre-history and how formal logic, early computers and language itself set the stage for a new form of data processing, which we call artificial intelligence. The book goes on to explain the theory of deep learning and neural networks, why this approach won out, and takes readers all the way to understanding natural language processing.

It’s not designed to be easy or dumbed down. However, it is intended to be complete enough for any layperson to fully understand why AI is the way it is, how we got here and where it is likely to go next. It is also not entirely technical. It covers historical concepts, public perception, regulatory issues and economic realities.

I also want to add that there are a number of people out there writing completely uninformed hot takes on AI, and this is intended to be the opposite of that. It’s down to earth, skeptical and focused on the truth about artificial intelligence.

The working title is “Understanding AI.”


AND HERE IT IS!


Should Chatbots Refuse to Give High-Risk Advice?

Chatbots are becoming increasingly popular. ChatGPT, for example, has nearly a billion weekly users. These LLM-based services are used for all kinds of things, including many things their initial developers never dreamed of: planning, brainstorming, writing, translation, companionship, functional play, humor, studying, reformatting text and creating code. People also ask chatbots for all kinds of advice and facts. Chatbots have become the go-to answer engines for questions ranging from “What is the capital of Chad?” to “How long should I boil a lobster?”

However, there is a problem with LLMs and that is that they “hallucinate.” The term hallucination is a bit of a misnomer, because what is actually happening has less to do with figments of the imagination and more to do with patterns and probabilities. What actually happens is that the LLM confabulates a response that fits the patterns of a valid response but is not factually accurate. This often happens due to the model lacking information on a topic, but it can happen even when the model does have the knowledge in its training data.


Hallucinations are impossible to completely eliminate from large language models. They are as much a feature as a bug, because the ability to create false information is inseparable from the model’s ability to generate fiction and hypotheticals or engage in role playing. It’s the nature of LLMs as stochastic probability engines. The only real way to eliminate hallucinations is to have some sort of output pipeline that checks and verifies outputs. That’s not something chatbots currently do.
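To make that concrete, here is a minimal sketch of what a generate-then-verify pipeline could look like. Everything in it is a toy stand-in I made up for illustration: generate plays the role of an LLM call that may confabulate, and find_supporting_sources plays the role of retrieval against a trusted corpus. Neither is a real API.

```python
# A minimal sketch of a generate-then-verify output pipeline.
# Both helpers are toy stand-ins: `generate` for an LLM call that may
# confabulate, `find_supporting_sources` for retrieval against a
# trusted corpus.

TRUSTED_FACTS = {
    "The capital of Chad is N'Djamena.",
}

def generate(prompt: str) -> str:
    # Toy stand-in for an LLM call.
    return "The capital of Chad is N'Djamena."

def find_supporting_sources(claim: str) -> list[str]:
    # Toy stand-in for retrieval: accept only claims in the trusted set.
    return [claim] if claim in TRUSTED_FACTS else []

def answer_with_verification(prompt: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        draft = generate(prompt)
        if find_supporting_sources(draft):
            # Only release output the checking step could corroborate.
            return draft
    return "I could not verify an answer to that question."

print(answer_with_verification("What is the capital of Chad?"))
```

The point is the shape, not the specifics: the model’s raw output never reaches the user until an independent checking step has corroborated it.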

This is well understood and documented, but that does not change the fact that hallucinations continue to slip past people and be believed. A number of high-profile incidents have involved false citations in scientific journals, fake case law presented in court and medical advice for diseases that don’t even exist. One of the problems here is that people tend to believe the results a computer gives them, because in the past computers have been reliable and deterministic.


Anthropic and the Pentagon Situation

It was just last week that I posted a brief write-up about the situation with Anthropic and the Department of Defense. At the time, it seemed like the worst thing that might happen to Anthropic was a loss of military contracts, but things have escalated. The Pentagon and the Trump administration have ordered the discontinuation of Anthropic products by government agencies and contractors.

This is highly unusual and an extremely aggressive move. Anthropic has received a groundswell of public support, and OpenAI has been getting a lot of criticism for stepping in and signing a major contract as soon as Anthropic was excluded.

We Need To Take AI Doomerism Seriously

No, not the actual belief. The idea that AI will become a cartoon supervillain and wipe out humanity is as idiotic as it sounds. The danger is that the belief is gaining credibility, being taken seriously and acting as a dangerous red herring. The grifter economy of AI doom is a self-serving scam, but the consequences are real.

I personally find AI Doom to be more than a nuisance. Most do not realize this, but when you pull back the curtain, the movement is actually based on a strong cult-like belief and has spawned some extremely disturbing rhetoric. There have been threats against AI labs and ridiculous proposals for legislation to pause or stop AI research. All kinds of outlandish claims are being made, and they are getting media attention. Almost nobody seems to be aware of the true nature of this idea.

Most AI experts have disengaged from the nonsense of AI doom. After all, it’s not interesting, and nobody wants to deal with their area of expertise being stepped on by people who don’t know what they’re talking about. However, this is dangerous. Doom movements have grown around other technologies: nanotechnology, vaccines, genetic engineering, nuclear energy and others. What we know is that these movements, unhinged and unsupported though they may be, do not go away, and they frequently lead to bad legislation and major problems for industries that do not fight back.

AI doom is an especially pressing threat, and it’s receiving mainstream legitimacy and attention, which should be seen as a major problem in and of itself.

What Is AI Doom?

AI Doom is the basic idea that AI poses a unique and existential threat to humanity, based on the notion that AI systems and ML models might become “superintelligent” and therefore impossible to stop from causing harm. This is predicated on the idea that intelligence is a scalar that can be increased through more compute, and that the resulting intelligence would somehow achieve autonomy and become either hostile to humans or motivated to eliminate humans to gain more resources.

It’s not an entirely new idea. It’s been a trope in science fiction for some time. It plays on a lot of universal fears, like not understanding technology, being replaced and dehumanization, and on the fact that people have been so conditioned by science fiction that such an outcome seems reasonable to expect.

Doom literally refers to an “end of the world” scenario, but there are other doom-adjacent beliefs and claims, such as the idea that AI will lead to a permanent dystopian society where employment is impossible and power is consolidated, or that AI may enslave humanity.

Importantly, while there are absolutely risks of a variety of types that are associated with AI adoption, the idea of a species-level risk from the technology gaining self-motivation and setting its own goals is not plausible at all.


Anthropic Faces Challenges From Pentagon Requirements

I have been critical of Anthropic before. The company rose quickly and is run primarily by founders who do not come from a conventional business leadership background. It is governed with a strong spirit of ethics and stewardship.

While I have found their messaging to be a bit unprofessional, speculative and over the top at times, there’s no doubt that it’s honorable for a company to put its own ethics above lucrative business deals. As so many large corporations support ICE actions and government overreach, it’s nice to see a company that is still willing to stand up and do the right thing.

When it comes to military use of technology, the ethics get dicey fast. Is it okay to use technology for purely defensive roles? What if it is offensive, but in a justified conflict? Is it okay if it results in more deaths on the other side? What if a weapon is powerful but its impact depends on how it is used? Should our commanders be trusted to use technology ethically? Is it patriotic to provide tech to the military, because it may save our servicepeople?

These are not easy questions, and companies grapple with them all the time. Some companies are card-carrying defense contractors, and that’s just what they do. But war is an unusual situation: the aim is to kill people and cause maximum destruction. That’s at odds with most corporate ethics.


New YouTube Channel and Focus on AI

Artificial Intelligence has more mysticism than just about any other subject out there. I’ve never seen any subject so poorly understood and so sensationalized. It’s a technology that everyone seems to realize is big, revolutionary and important. But that’s only resulted in a huge amount of mythology.

Few people understand AI from a technical perspective, but just about everyone *thinks* they understand it, because it seems so intuitive. It seems like you can just talk to it and it understands, so the implications are obvious, right?

Right now the world has a deficiency of AI experts who understand the tech, and even fewer of them are mature risk managers. That has resulted in a lot of skew toward sensationalism. Most AI leaders are not even knowledgeable about the tech, and the media rewards high-drama narratives. The skew takes a few forms. One of the most ridiculous messages is AI doomerism, the idea that AI might wipe out humanity. It’s cartoonish, but it receives more attention than it should. There are also claims of permanent unemployment. On the other end is AI utopianism. And there are those insisting AI might become conscious or a moral patient. Yes, this is also being taken seriously.

It’s really a subject that attracts all kinds. But few people realize that like any technology, AI and ML have fundamental limits and capabilities. They’re not magic. But the recent AI summit in India would have you think otherwise, with ubiquitous claims of being close to superintelligence.

And so, as one of the few AI technical experts willing to address this problem, I have launched a new YouTube channel and will be focusing primarily on this topic. AI risks, mitigations, technology and truth: AI Sanity, on YouTube.

A Risk-Oriented Hierarchy of Intervention in the Deployment and Customization of Large Language Models

A pragmatic discussion of the levels of risk and complexity in the customization of large language models. Many organizations are using LLM technology to build customized chatbots, RAG tools and content generators. However, many do not fully understand the options, or the levels of risk and development complexity, that come with LLM customization and deployment.

In the contemporary landscape of artificial intelligence deployment, a structural shift is occurring: base models are becoming increasingly capable out of the box. Instruction-following performance, contextual reasoning, retrieval integration, and domain adaptability have improved to such a degree that many historical justifications for invasive model modification are steadily eroding. This evolution necessitates a corresponding philosophical and governance framework—one grounded in the principle that greater customization introduces greater uncertainty, greater liability, and a proportionally greater need for validation and risk controls.

At its core, the responsible deployment of large language models should be guided by a hierarchy of invasiveness. Each successive layer of intervention introduces deeper system coupling, increased behavioral unpredictability, and escalating regulatory, operational, and reputational risk. Accordingly, risk management should not begin at the level of model alteration, but rather at the least invasive layers of interaction and configuration.
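As an illustration of what such a hierarchy might look like in practice, here is a hedged sketch. The specific tiers, descriptions and relative risk scores are my own illustrative assumptions, not a standard or the framework the full post defines.

```python
# An illustrative sketch of a hierarchy of invasiveness for LLM
# customization. Tiers and relative risk scores are assumptions for
# demonstration, ordered from least to most invasive.

from dataclasses import dataclass

@dataclass(frozen=True)
class InterventionTier:
    name: str
    description: str
    relative_risk: int  # higher = more behavioral unpredictability

HIERARCHY = [
    InterventionTier("prompting", "System prompts and prompt templates", 1),
    InterventionTier("configuration", "Sampling parameters, output limits", 2),
    InterventionTier("retrieval", "RAG over curated, governed corpora", 3),
    InterventionTier("adapters", "Lightweight tuning such as LoRA adapters", 4),
    InterventionTier("fine_tuning", "Full fine-tuning of model weights", 5),
]

def least_invasive_first(requirement_met) -> InterventionTier:
    """Walk the hierarchy and return the first tier that satisfies the
    requirement, so risk controls start at the cheapest layer."""
    for tier in HIERARCHY:
        if requirement_met(tier):
            return tier
    return HIERARCHY[-1]

# Example: a requirement that the prompting tier happens to satisfy.
print(least_invasive_first(lambda t: t.relative_risk >= 1).name)
```

The design choice the sketch encodes is the one argued above: start at the least invasive layer and only escalate when a documented requirement cannot be met there.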


The Narrative About AI-Triggered Job Loss Is Speculative and Irresponsible

We are seeing an increased public narrative about the potential for job losses from AI deployment. These claims receive a great deal of media attention and are rewarded in the social media landscape for being as pessimistic as possible. Mass job loss remains highly speculative, and many claims skew toward the highly implausible. Yet the narrative is causing widespread harm.

The increasingly popular narrative of inevitable, catastrophic, long-term job loss due to artificial intelligence is not grounded in robust empirical evidence. It is overwhelmingly speculative, framed in worst-case abstractions, and presented to the public with a level of certainty that far exceeds what the data justifies. That alone would be intellectually questionable. But the deeper issue is ethical: the psychological and social harm caused by repeatedly presenting extreme scenarios as near-certainties.

There is a very real human cost to this discourse. People are not reading these forecasts as academic hypotheticals. They are internalizing them as personal futures. Students reconsider career paths. Mid-career professionals experience anxiety and loss of motivation. Workers in already uncertain labor markets feel prematurely obsolete. This is not a trivial side effect. It is a measurable psychological burden placed on millions of people based on projections that remain deeply uncertain and, in many cases, methodologically weak.

Serious economic forecasting requires discipline, historical grounding, and humility about technological diffusion. What we are instead seeing in many public conversations is a pattern of extrapolation from capability demos directly to labor market collapse, skipping entirely over the realities of workflow integration, governance constraints, liability frameworks, organizational inertia, and economic adaptation. That is not analysis. That is narrative acceleration.


An Underwriters’ Guide to Cyber Risk: Managing Third-Party Risk – Part 1

Due to the length of this detailed topic, it will be broken into multiple parts. One of the reasons this post is so long is the extreme entrenchment of incorrect views, and therefore, a need to provide detailed explanations of why they are wrong.

As written about earlier, Warren Buffett is one of the worst out there when it comes to spreading misinformation and unnecessary alarm about cyber security risks. He’s not the only one, however. There seems to be an incessant and rather insane cry of “Well, there are third-party risks and they could be systemic. Let’s throw our hands up in the air and say there is nothing we can do.”

Of course, this is not the case. In the finite and artificial world of cyber security, no risk is insurmountable and all can be understood. Third-party risks arise because so many organizations depend on various third parties, such as vendors and contractors. Even clients and customers can be a third-party risk, because some organizations rely on a relatively limited number of clients.

In this video-accompanied post, I will do my best to provide detailed information to refute this dangerous and deeply entrenched idea.

Let’s be clear on something: this is not new or unique to cyber.
There is nothing new or novel about this concept at all. Some policyholders have always been dependent on a limited number of vendors or service providers. Even in the years before cyber security, a major failure of the power grid, as happened in 2003 and 1977, could cause widespread loss across a large area. A single storm can impact a huge area, or a bad hurricane season can bring devastating storms to a large region. That’s what a systemic risk is.

However, in cyber security, all systemic risks can be detected ahead of time, if we care to look. They’re artificial, based on the relationships we choose to have and on the man-made, engineered systems we use, and they are therefore finite and understandable. It is entirely possible to know your risks when they live in engineered systems you own and choose.
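To illustrate the “if we care to look” point, here is a toy sketch of how an underwriter might surface third-party concentration across a book of business. The policyholders, vendors and flag threshold are all made-up assumptions, not real underwriting criteria.

```python
# A toy sketch of surfacing third-party concentration risk across a
# portfolio of policyholders. Data and threshold are illustrative.

from collections import Counter

# Map each (hypothetical) policyholder to its critical vendors.
portfolio = {
    "Acme Manufacturing": ["CloudCo", "PayrollPro"],
    "Bright Retail": ["CloudCo", "ShipFast"],
    "Cedar Clinics": ["CloudCo", "MedRecordsInc"],
    "Delta Logistics": ["ShipFast", "PayrollPro"],
}

def concentration_report(portfolio: dict[str, list[str]],
                         threshold: float = 0.5) -> dict[str, float]:
    """Flag vendors relied on by more than `threshold` of the book,
    since a single failure there becomes a correlated, systemic loss."""
    counts = Counter(v for vendors in portfolio.values() for v in vendors)
    total = len(portfolio)
    return {v: n / total for v, n in counts.items() if n / total > threshold}

print(concentration_report(portfolio))
# {'CloudCo': 0.75} -> three of four policyholders depend on CloudCo.
```

The mechanics are trivial; the point is that the dependency data is collectable, and once collected, the systemic exposure falls out of simple counting.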


Cyber Insurance Applications Revealed

The moral failing of insurance that regularly pays ransoms, makes no attempt not to, affirmatively disengages leaders and funds terrorism should be obvious, but many argue with me, stating that insurers are doing the best they can, have incomplete data, or are improving.

Unfortunately, they’re not. There have been a few small measures taken, mostly just in terms of wording changes. Not a dime has been invested in enforcement or compliance management.

To show how negligent these insurance companies have been, it’s important to look at their cyber insurance applications. These applications represent all that these companies have in terms of policy controls. It’s abundantly clear that no adult with any idea how any of this works wrote these. There is no other enforcement. Even large clients do not receive independent assessments or audits. These “requirements” are generally unenforceable, do not create a call to action and just plain won’t ever work. Money will continue to be lost until even the most minimal efforts to do otherwise are made.

Cyber insurance is considered a loss center (for some reason), so it gets zero investment, and the underwriters who end up on this line are typically the lowest achievers. That’s truly the opposite of what is needed here.

These applications seem to be current, although some have not been updated in years. I do not think it is at all unreasonable to say that those responsible for writing the loss controls for an insurance line that paid extortion to hostile foreign parties should face some kind of criminal charges. This is not normal. This is not okay. It should not be normalized to have such clueless people in charge when professionals are available.

Check out this PDF to get an idea of just how bad this situation is.

BREAKDOWN OF CYBER INSURANCE APPLICATIONS

HSB Total Cyber Insurance Application
AIG’s Cyber Underwriting Application
Travelers CyberRisk Applications and Forms
Chubb Cyber And Privacy Insurance
Beazley Cyber Application
The Hartford CyberChoice Premier Application
FailSafe Cyber / Information Risk Supplement Application