First Draft of First Chapter of My Book “Understanding AI”

So I have decided to write a book. Crazy, perhaps. It is something I have wanted to do for years, through many false starts, and I think a lot of people have been through that. It’s not easy, and I knew that going in, but it is actually harder than you’d ever imagine, at least for me. Then again, this is the first time I have truly committed to the pursuit.

The reason I chose to write this is that AI is the most poorly understood topic I have ever seen, and it has been wrapped in mythology. This is especially true of “AI Doom.” The problem with these beliefs is that they are difficult to refute without teaching a full lesson on AI. So, I guess that’s what I’m going to have to do! I’m not complaining, however. The fact is, there is no in-depth guide for smart people who want to truly understand artificial intelligence.

I want to be clear that this is a first draft of the first chapter. It will be revised, probably several times. It currently lacks citations; those will be added. Citations will be minimal, because this is not a scholarly work, but it should still have basic source credibility. This chapter also contains assertions that some might call bold or unsupported. There is a reason for that: this is only the first chapter.

Subsequent chapters will explore the history of AI as a concept and a technology, including its prehistory and how the rules of formal logic, early computers, and language itself set the stage for a new form of data processing: what we now call artificial intelligence. The book goes on to explain the theory of deep learning and neural networks, why this approach won out, and takes readers all the way to understanding natural language processing.

It’s not designed to be easy or dumbed down. It is, however, intended to be complete enough for any lay person to understand why AI is the way it is, how we got here, and where it is likely to go next. It is also not entirely technical: it covers historical concepts, public perception, regulatory issues, and economic realities.

I also want to add that there are a number of people out there writing hot takes on AI that are completely uninformed, and this is intended to be the opposite of that. It’s down-to-earth, skeptical, and presents the truth about artificial intelligence.

The working title is “Understanding AI.”


AND HERE IT IS!

Continue reading

Should Chatbots Refuse to Give High-Risk Advice?

Chatbots are becoming increasingly popular. ChatGPT, for example, has nearly a billion weekly users. These LLM-based services are used for all kinds of things, including many their initial developers never dreamed of: planning, brainstorming, writing, translation, companionship, functional play, humor, studying, reformatting text, and writing code. People also ask chatbots for all kinds of advice and facts. Chatbots have become the go-to answer engines for questions ranging from “What is the capital of Chad?” to “How long should I boil a lobster?”

However, there is a problem with LLMs: they “hallucinate.” The term hallucination is a bit of a misnomer, because what is actually happening has less to do with figments of the imagination and more to do with patterns and probabilities. What actually happens is that the LLM confabulates a response that fits the patterns of a valid response but is not factually accurate. This often happens because the model lacks information on a topic, but it can happen even when the knowledge is in its training data.
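
To make that concrete, here is a minimal sketch of next-token sampling, the core operation of an LLM. The probability table is invented for illustration and stands in for a real model; the point is that generation selects plausible continuations by probability, with no step that checks the output against reality.

```python
import random

# Toy stand-in for a language model's next-token distribution.
# The tokens and probabilities below are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "The capital of Chad is": [("N'Djamena", 0.7), ("Moundou", 0.2), ("Lagos", 0.1)],
}

def sample_continuation(prompt: str) -> str:
    """Pick a continuation by probability -- fluent, but never fact-checked."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Chad is"
print(prompt, sample_continuation(prompt))
# Most runs print the right city, but some runs confidently print a wrong one.
# Nothing in the sampling step distinguishes truth from confabulation.
```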


Hallucinations are impossible to completely eliminate from large language models. They are as much a feature as a bug, because the ability to produce false information is inseparable from the model’s ability to generate fiction and hypotheticals or engage in role playing. It is the nature of LLMs as stochastic probability engines. The only real way to eliminate hallucinations is an output pipeline that checks and verifies responses before they reach the user, and that is not something chatbots currently do.
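
As a rough sketch of what such a verification pipeline could look like (hypothetical: the function names, the toy records, and the checking strategy are assumptions, not a description of any deployed system):

```python
from dataclasses import dataclass

@dataclass
class CheckedAnswer:
    text: str
    verified: bool

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real pipeline would hit a model API here."""
    return "Smith v. Jones (1987) established the doctrine."  # possibly confabulated

def verify_against_source(claim: str) -> bool:
    """Stand-in for a lookup against a trusted database (caselaw, PubMed, etc.)."""
    trusted_records = {"Roe v. Wade (1973)"}  # toy data for this sketch
    return any(record in claim for record in trusted_records)

def answer_with_verification(prompt: str) -> CheckedAnswer:
    draft = generate(prompt)
    # The key step: an unverified claim is flagged, not silently passed along.
    return CheckedAnswer(text=draft, verified=verify_against_source(draft))

result = answer_with_verification("Cite a case about this doctrine.")
if not result.verified:
    print("WARNING: could not verify:", result.text)
```

The hard part, of course, is the verification step itself, which is why chatbots do not do this today.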

This is well understood and documented, but that does not change the fact that hallucinations continue to slip past people and be believed. High-profile incidents have included false citations in scientific journals, fake caselaw presented in court, and medical advice for diseases that do not even exist. Part of the problem is that people tend to believe the results a computer gives them, because in the past computers have been reliable and deterministic.

Continue reading

We Need To Take AI Doomerism Seriously

No, not the actual belief. The idea that AI will become a cartoon supervillain and wipe out humanity is as idiotic as it sounds. The danger is that the belief is gaining credibility and being taken seriously, and that it is a dangerous red herring. The grifter economy of AI doom is a self-serving scam, but its consequences are real.

I personally find AI Doom to be more than a nuisance. Most people do not realize it, but when you pull back the curtain, the movement is actually based on a strongly cult-like belief system and has spawned some extremely disturbing rhetoric. There have been threats against AI labs and ridiculous proposals for legislation to pause or stop AI research. All kinds of absurd claims are being made, and they are getting media attention. Almost nobody seems to be aware of the true nature of this idea.

Most AI experts have disengaged from the nonsense of AI doom. It is tedious, after all, and nobody wants to deal with their area of expertise being trampled by people who do not know what they are talking about. However, this disengagement is dangerous. Doom movements have grown around other technologies: nanotechnology, vaccines, genetic engineering, nuclear energy, and others. What we know is that these movements, unhinged and unsupported though they may be, do not go away, and they frequently lead to bad legislation and major problems for industries that do not fight back.

AI doom is an especially prevalent threat, and it is receiving mainstream legitimacy and attention, which should be seen as a major problem in and of itself.

What Is AI Doom?

AI Doom is the basic idea that AI poses a unique, existential threat to humanity: that AI systems and ML models might become “superintelligent” and therefore impossible to stop from causing harm. It is predicated on two further ideas: that intelligence is a scalar, which can be increased through more compute, and that the resulting intelligence would somehow achieve autonomy and either become hostile to humans or seek to eliminate them to gain more resources.

It’s not an entirely new idea. It has been a trope in science fiction for some time, and it draws on universal fears: not understanding technology, being replaced, dehumanization, and decades of science-fiction conditioning that make such an outcome feel reasonable.

Doom literally refers to an “end of the world” scenario, but there are also doom-adjacent beliefs and claims, such as the idea that AI will lead to a permanent dystopian society where employment is impossible and power is consolidated, or that AI may enslave humanity.

Importantly, while there are absolutely risks of various types associated with AI adoption, the idea of a species-level risk from the technology gaining self-motivation and setting its own goals is not plausible at all.

Continue reading

Where We Really Stand In AI Capabilities

The recent talk of AGI as if it were some kind of impending certainty, and now of “superintelligence,” is causing a great deal of confusion. The reality is that we are nowhere near human-level intelligence across all domains, that artificial superintelligence is entirely speculative and nowhere near foreseeable capabilities, and that you cannot scale past the limits of current AI systems. The truth has been lost in a sea of sensational rhetoric.

The modern public discourse around artificial intelligence began with a fundamental shift in frame of reference. For decades, AI systems were narrow, technical, and largely invisible to the general public. Then, quite suddenly, natural language processing systems emerged with startling fluency. For the first time, people could interact with a machine through conversational language that resembled human dialogue.

This single development reset public intuition overnight.

Instead of being understood as statistical systems operating within defined computational constraints, large language models were immediately interpreted through the lens of science fiction archetypes: conversational minds, digital assistants, synthetic intellects. The resemblance in surface behavior was compelling enough to override the underlying reality of how these systems actually function.

But fluency is not cognition. Simulation of reasoning is not reasoning itself.

Continue reading

A Risk-Oriented Hierarchy of Intervention in the Deployment and Customization of Large Language Models

A practical discussion of the levels of risk and complexity in the customization of large language models. Many organizations are using LLM technology to build customized chatbots, RAG tools, and content generators. However, many of these organizations do not have a full understanding of the options, or of the levels of risk and development complexity that come with LLM customization and deployment.

In the contemporary landscape of artificial intelligence deployment, a structural shift is occurring: base models are becoming increasingly capable out of the box. Instruction-following performance, contextual reasoning, retrieval integration, and domain adaptability have improved to such a degree that many historical justifications for invasive model modification are steadily eroding. This evolution necessitates a corresponding philosophical and governance framework—one grounded in the principle that greater customization introduces greater uncertainty, greater liability, and a proportionally greater need for validation and risk controls.

At its core, the responsible deployment of large language models should be guided by a hierarchy of invasiveness. Each successive layer of intervention introduces deeper system coupling, increased behavioral unpredictability, and escalating regulatory, operational, and reputational risk. Accordingly, risk management should not begin at the level of model alteration, but rather at the least invasive layers of interaction and configuration.
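
One way to make that hierarchy operational is sketched below. This is a hypothetical illustration: the level names, the fine-grained ordering, and the example controls are assumptions, not an established standard.

```python
from enum import IntEnum

class Intervention(IntEnum):
    """Ordered from least to most invasive; higher levels warrant more validation."""
    PROMPTING = 1         # system prompts and instructions
    CONFIGURATION = 2     # decoding parameters, tool and data access settings
    RETRIEVAL = 3         # RAG over curated, vetted sources
    FINE_TUNING = 4       # weight updates on domain data
    MODEL_ALTERATION = 5  # pruning, merging, architectural changes

# Illustrative governance rule: validation effort scales with invasiveness.
REQUIRED_CONTROLS = {
    Intervention.PROMPTING: ["sampled output review"],
    Intervention.CONFIGURATION: ["sampled output review", "regression tests"],
    Intervention.RETRIEVAL: ["source vetting", "regression tests"],
    Intervention.FINE_TUNING: ["held-out evaluation", "red teaming", "sign-off"],
    Intervention.MODEL_ALTERATION: ["full revalidation", "red teaming", "sign-off"],
}

def controls_for(level: Intervention) -> list[str]:
    """Return the minimum controls a deployment at this level should carry."""
    return REQUIRED_CONTROLS[level]

print(controls_for(Intervention.RETRIEVAL))  # ['source vetting', 'regression tests']
```

The specific controls matter less than the ordering: start at the least invasive level that meets the requirement, and justify each step up the hierarchy.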

Continue reading

Update on Drone Hysteria With Video

This truly appears to be primarily, and perhaps completely, caused by mass hysteria rather than by any actual drone swarm. There remains the possibility that there were unauthorized drones in sensitive areas, but that does not appear to account for most of the reports.

After spending hours looking for videos of the supposed drones in New Jersey and elsewhere, I was surprised to find that the overwhelming majority of the sightings are clear, unambiguous examples of civilian and commercial aircraft.


This reminds me very much of the Battle of Los Angeles, which was not actually a battle at all but an earlier example of the same kind of mass hysteria. We are seeing the hallmarks of previous flying-object panics, including a mushrooming number of reports and escalating drama as more and more people become convinced that drones have crashed, are attacking, or worse.

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 3

Due to the length of this detailed topic, it has been broken into multiple parts. Previous installments:

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 1
An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 2

Technical Approvals in Cybersecurity: A Missing Pillar of Risk Management

In traditional industries, technical approval processes are a vital part of ensuring safety and reliability. For example, companies often pay to have their devices tested and approved by organizations like UL, which rigorously test products to ensure safety and reliability. Safety-critical devices—such as fire alarms, fire pumps, and safety doors—require approval before being used by insured parties, giving insurers the confidence that these devices will perform when needed.

Cybersecurity, however, lacks a similar robust system of technical approvals. Without an established process, standards in cybersecurity are often vague and difficult to enforce. For instance, many standards simply state that an organization must have a “firewall” or use “industry-standard encryption.” These requirements are difficult to enforce because they are vague—what exactly qualifies as an acceptable firewall, and who verifies it? There are many products that could meet these requirements on paper, but without an approval process, there is no consistent or provable standard of quality.
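
The difference between a vague requirement and an enforceable one can be made concrete. In the sketch below (the approval list and product IDs are invented for illustration), the first check is unenforceable because anything can call itself a firewall, while the second tests membership in an independently maintained approved-products list, which is exactly what a technical approval regime would provide:

```python
# Vague standard: "must have a firewall." Impossible to verify consistently.
def meets_vague_standard(product: dict) -> bool:
    return "firewall" in product.get("category", "")  # anything can claim this

# Approval-based standard: verifiable against an independent approved list.
# The product IDs below are invented for this sketch.
APPROVED_FIREWALLS = {"ACME-FW-9000", "ExampleSecure-NGFW-2"}

def meets_approval_standard(product: dict) -> bool:
    return product.get("approval_id") in APPROVED_FIREWALLS

device = {"category": "firewall", "approval_id": "NoName-FW"}
print(meets_vague_standard(device))     # True  -- compliance on paper
print(meets_approval_standard(device))  # False -- fails independent approval
```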

Technical approvals are ultimately a necessary step toward establishing universally high standards. This is what will, eventually, end the problem of pervasive third-party risk. It is an unavoidable part of standardizing risk management in technology and reining in losses. Unfortunately, it will be difficult to make much progress in cybersecurity until a robust system of independent testing and approval is established. That system will create the “ecosystem of trust” that is necessary to enforce security.

A Well-Established and Necessary Process
It is unusual that cybersecurity proceeds without technical approval, but this reflects an outdated mindset in IT, in which buyers assume all risk without warranties or guarantees. Technical approval is a well-established process in many industries, providing independent verification that a product meets specific standards and ensuring accountability. It is no longer the 1980s, and software and IT products are no longer specialty or experimental products, but the mentality persists.

Continue reading

Just How Bad Are We Doing With Cyber Security? Let’s Look at the Past Week…

So just how bad are ransomware and cybersecurity in general? To get an impression, let’s look at the past week. In just the past seven days, there have been over a dozen major ransomware attacks, though a few have not been well reported in the news media. The fact is, we have fallen into a kind of creeping normality. This is not normal, and it should not be considered routine.

Starbucks Impacted By Cyber Attack
Stop & Shop Hit By Cyber Incident – May Result In Bare Shelves
Supply Chain Management Vendor Blue Yonder Succumbs to Ransomware
The City of Odessa, TX Experiences a Cyber Incident
Weeks Later, Problems Persist At Hannaford Supermarkets
Wirral University Teaching Hospital Experiences Major Cyber Incident
Retailers Struggle After Attack on Supply Chain Provider Blue Yonder
RRCA Accounts Management Falls Victim to Play Ransomware Attack
Aspen Healthcare Services Announces Data Breach
Zyxel Firewalls Targeted in Recent Ransomware Attacks
Fintech Giant Finastra Investigates Data Breach

Continue reading

Why Is Ransom Paid? Panic, Perverse Incentives and Bluffs.

It is rarely in the best interest of the victim to pay ransom! The narrative is often “Because they have no choice” or “It is to protect people from the leak,” but this is a complete myth, and it tends to be advanced by those who have paid ransom before as a way of excusing their terrible and avoidable behavior. Nobody owns this untrue narrative more than the insurance underwriters who normalized the practice.

The problem with something like ransomware is that most companies are willing to pay ransom, and as long as this remains true it will be a persistent problem that only gets worse. Ransomware has become so entrenched, and is so easy and cheap to pull off, that it will not subside until it becomes substantially more difficult to succeed in a ransomware attack and make money doing so. Unfortunately, there have been no efforts to reduce ransom payments.

It is important to never forget exactly what is being paid for with the money American companies hand over.

When ransomware gangs lock down a system, they are frequently the first people the victims hear from, and they do their best to instill fear, create panic, and make the situation seem much worse than it is. They will often claim that they will soon delete the data or raise their price for restoration. Paying for data restoration is never necessary if even the most basic precautions have been taken to back up data, but that is often not the case: some 80% of organizations facing ransomware do not have adequate backups. The situation is common, though always avoidable, and at least half of ransom payments are motivated primarily by the need to release systems and recover data, not to prevent a leak.

In many cases, companies have felt it was more reliable or faster to pay ransom, and with gangs so skilled at instilling fear and manipulating American companies, that is not uncommon. In some cases, insurers have even insisted that victims pay ransom against their will. HSB is one of the few that still does this openly, forcing victims to pay even when they felt it was unnecessary, simply because the insurer considered it cheaper or safer. The practice has never completely gone away among other insurers, either: because claims staff frequently receive kickbacks, they will tell organizations they are best off paying, even when they are not.

Unfortunately, it is not cheaper or safer to pay, and this is especially true if you do have backups. The restored data is certain to be contaminated with malware and backdoors, and the incident response will be far worse off. Paying ransom nearly doubles the average cost of cleaning up an incident, and it dramatically increases the chances of future attacks.

Continue reading

An Underwriters’ Guide to Cyber Risk: Managing 3rd Party Risk – Part 2

Understanding and Preventing Zero Day And Other Software Supply Chain Attacks

This is the second post in the series, intended to help readers better understand how third-party risks can be managed and to address the problem of misinformation from high-ranking sources. The myth that third-party risks are unmanageable is pervasive, driven largely by insurance executives whose position amounts to “Well, I don’t understand it, and therefore it can’t be done.” Because of this toxic insistence, it is necessary to break things down and provide detailed supporting information.

In this post, we will look at zero days and unpatched vulnerabilities as a form of exposure to third-party risk. Zero days are similar to supply chain attacks, and many of the same methods for controlling zero days apply to supply chain attacks as well. MOVEit is an example of a zero-day attack that caused massive damage to the US and global economy, and it illustrates exactly how these attacks work.

In some ways, it was exactly the kind of systemic attack that insurers constantly complain about. However, it also illustrates all the ways the damage could have been prevented. MOVEit was bad, but it was also tragic, because so much of the loss could have been avoided if we had our act together.

Continue reading