First Draft of First Chapter of My Book “Understanding AI”

So I have decided to write a book. Crazy, perhaps. It is something I have wanted to do for years, through many false starts. I think a lot of people have been through that. It’s not easy, but I knew that. It’s actually harder than you’d ever imagine, at least for me, but then again, this is the first time I have truly committed to the pursuit.

The reason I chose to write this is that AI is the most poorly understood topic I have ever seen, and it has been wrapped in mythology. This is especially true with “AI Doom.” The problem with these beliefs is that they are difficult to refute without teaching a full lesson on AI. So, I guess that’s what I’m going to have to do! I’m not complaining, however. The fact is, there is really no in-depth guide for smart people who want to truly understand artificial intelligence.

I want to be clear that this is a first draft of the first chapter. It will be revised, probably several times. It currently lacks citations, but those will be added. Citations will be minimal because this is not a scholarly work, but it should still have basic source credibility. Additionally, this chapter contains assertions which some might call bold or unsupported. There’s a reason for that: this is just the first chapter.

Subsequent chapters will explore the history of AI as a concept and a technology, including its pre-history and how formal logic rules, early computers, and language itself set the stage for a new form of data processing, which we call artificial intelligence. The book goes on to explain the theory of deep learning and neural networks, why this approach won out, and will take readers all the way to understanding natural language processing.

It’s not designed to be easy or dumbed down. However, it is intended to be complete enough for any lay person to fully understand why AI is the way it is, how we got here, and where it is likely to go next. It is also not entirely technical. It features historical concepts, public perception, regulatory issues, and economic realities.

I also want to add that there are a number of people out there writing hot takes on AI that are completely uninformed, and this is intended to be the opposite of that. It’s down to earth, skeptical, and focused on the truth about artificial intelligence.

The working title is “Understanding AI.”


AND HERE IT IS!


New YouTube Channel and Focus on AI

Artificial Intelligence has more mysticism than just about any other subject out there. I’ve never seen any subject so poorly understood and so sensationalized. It’s a technology that everyone seems to realize is big, revolutionary and important. But that’s only resulted in a huge amount of mythology.

Few people understand AI from a technical perspective, but just about everyone *thinks* they understand it, because it seems so intuitive. It seems like you can just talk to it and it understands, so the implications are obvious, right?

Right now the world has a deficiency of AI experts who understand the tech, and even fewer of them are mature risk managers. That has resulted in a lot of skewing toward sensationalism. Most AI leaders are not even knowledgeable about the tech, and the media rewards high-drama narratives. The skew takes a few forms. One of the most ridiculous messages is AI doomerism, the idea that AI might wipe out humanity; it’s cartoonish, but it receives more attention than it should. There are also claims of permanent unemployment. On the other end is AI utopianism. And there are those insisting AI might become conscious, or a moral patient. Yes, this is also being taken seriously.

It’s really a subject that attracts all kinds. But few people realize that like any technology, AI and ML have fundamental limits and capabilities. They’re not magic. But the recent AI summit in India would have you think otherwise, with ubiquitous claims of being close to superintelligence.

And so, as one of the few AI technical experts willing to address this problem, I have launched a new YouTube channel and will be focusing primarily on this topic: AI risks, mitigations, technology, and truth. The channel is AI Sanity, on YouTube.

Where We Really Stand In AI Capabilities

The recent talk of AGI, as if it were some kind of impending certainty, and now talk of “superintelligence,” is causing a great deal of confusion. The reality is that we are nowhere near human-level intelligence in all domains; the idea of artificial superintelligence is entirely speculative and nowhere near foreseeable capabilities; and you can’t scale past the limits of current AI systems. The truth has been lost in a sea of sensational rhetoric.

The modern public discourse around artificial intelligence began with a fundamental shift in frame of reference. For decades, AI systems were narrow, technical, and largely invisible to the general public. Then, quite suddenly, natural language processing systems emerged with startling fluency. For the first time, people could interact with a machine through conversational language that resembled human dialogue.

This single development reset public intuition overnight.

Instead of being understood as statistical systems operating within defined computational constraints, large language models were immediately interpreted through the lens of science fiction archetypes: conversational minds, digital assistants, synthetic intellects. The resemblance in surface behavior was compelling enough to override the underlying reality of how these systems actually function.

But fluency is not cognition. Simulation of reasoning is not reasoning itself.


A Risk-Oriented Hierarchy of Intervention in the Deployment and Customization of Large Language Models

A practical discussion of the levels of risk and complexity in customizing large language models. Many organizations are using LLM technology to build customized chatbots, RAG tools, and content generators. However, many organizations do not fully understand the options, or the levels of risk and development complexity, that come with LLM customization and deployment.

In the contemporary landscape of artificial intelligence deployment, a structural shift is occurring: base models are becoming increasingly capable out of the box. Instruction-following performance, contextual reasoning, retrieval integration, and domain adaptability have improved to such a degree that many historical justifications for invasive model modification are steadily eroding. This evolution necessitates a corresponding philosophical and governance framework—one grounded in the principle that greater customization introduces greater uncertainty, greater liability, and a proportionally greater need for validation and risk controls.

At its core, the responsible deployment of large language models should be guided by a hierarchy of invasiveness. Each successive layer of intervention introduces deeper system coupling, increased behavioral unpredictability, and escalating regulatory, operational, and reputational risk. Accordingly, risk management should not begin at the level of model alteration, but rather at the least invasive layers of interaction and configuration.
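To make the hierarchy concrete, here is a minimal sketch in Python. The specific tier names and the review-tier mapping are my illustrative assumptions, not a standard taxonomy; the only claims taken from the discussion above are that interaction and configuration sit at the least invasive end, that model alteration sits at the most invasive end, and that validation requirements should scale with invasiveness.

```python
from enum import IntEnum

class InterventionLevel(IntEnum):
    """Hypothetical tiers of LLM customization, ordered least to
    most invasive. Names are illustrative assumptions."""
    PROMPTING = 1          # prompts and configuration; no system coupling
    RETRIEVAL = 2          # RAG over external documents; data-layer coupling
    FINE_TUNING = 3        # weight updates to the base model
    MODEL_ALTERATION = 4   # deeper structural modification of the model

def required_validation(level: InterventionLevel) -> str:
    """Map invasiveness to a (hypothetical) validation regime:
    deeper intervention demands proportionally stronger controls."""
    if level <= InterventionLevel.RETRIEVAL:
        return "standard review"
    return "extended validation and risk controls"
```

Because `IntEnum` members compare as integers, a governance policy can be expressed as a simple threshold, which mirrors the principle that risk management should begin at the least invasive layers and escalate from there.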
