So I have decided to write a book. Crazy, perhaps. It is something I have wanted to do for years and faced many false starts. I think a lot of people have been through that. It’s not easy, but I knew that. It’s actually harder than you’d ever imagine, at least for me, but then again, this is the first time I have truly committed to the pursuit.
The reason I chose to write this is that AI is the most poorly understood topic I have ever seen, and it has been wrapped in mythology. This is especially true of “AI Doom.” The problem with these beliefs is that they are difficult to refute without teaching a full lesson on AI. So, I guess that’s what I’m going to have to do! I’m not complaining, however. The fact is, there is really no in-depth guide for smart people who want to truly understand artificial intelligence.
I want to be clear that this is a first draft of the first chapter. It will be revised, probably several times. It currently lacks citations, but those will be added. Citations will be minimal, because this is not a scholarly work, but it should still have basic source credibility. Additionally, this chapter contains assertions which some might call bold or unsupported. There’s a reason for that, and the reason is simple: this is just the first chapter.
Subsequent chapters will explore the history of AI as a concept and a technology, including its pre-history and how formal logic, early computers and language itself set the stage for a new form of data processing, which we call artificial intelligence. The book goes on to explain the theory of deep learning and neural networks, why this approach won out, and will take readers all the way to understanding natural language processing.
It’s not designed to be easy or dumbed down. However, it is intended to be complete enough for any lay person to fully understand why AI is the way it is, how we got here and where it is likely to go next. It is also not entirely technical. It covers historical concepts, public perception, regulatory issues and economic realities.
I also want to add that there are a number of people out there who are writing hot takes on AI that are completely uninformed, and this is intended to be the opposite of that. It’s down to earth, skeptical and presents the truth about artificial intelligence.
The working title is “Understanding AI.”
AND HERE IT IS!
Chapter 1: Introduction to Artificial Intelligence
What is intelligence?
While it may seem like a simple question, it’s actually not. There are many systems in the world that display complex behavior, yet do not meet any reasonable definition of intelligence. This includes complex automation. There are systems that can respond to commands, retrieve information and perform complex calculations. There are also autopilots and responsive environmental monitoring systems. However, traditionally, computation or automation alone has not been considered intelligence.
On the other hand, animals are generally seen as truly intelligent, even when their cognitive abilities are nowhere near that of a human. The reason for this is that animals, like people, are self-directed. When an automated vacuum cleaner enters a room, it does so because it is following a programmatic instruction. When a dog walks into a room, it’s because the dog decided to do so. Even if the dog was called, there’s still a conscious decision, made by an autonomous being, to move.
One defining quality of intelligence is purpose. Intelligence achieves goals and solves problems. The goal-directed nature of intelligence is an important part of what separates it from automated systems that merely follow instructions. To that extent, true intelligence can be said to have some level of desire or value. By extension, this would imply some ability to feel. Without this, it would not make sense to say that a system had a preference, in the human sense.
Therefore, intelligence can’t really be entirely separated from sentience. Without sentience, a system is not a being and does not have desires. Without desire, there isn’t purpose, and without purpose it’s hard to say that a system is truly making decisions and acting with agency. Value is fundamentally anchored in feeling. Without feeling there is no subjective judgement of preference. It’s clear that our ability to understand an action as “intelligent” versus simply “reactive” hinges, at least in part, on the assumption that there is some form of intention behind it.
This kind of intelligence, which chooses based on values and acts with autonomy, has only ever been observed in biological systems. Indeed, even animals that are capable of only relatively simple reactions are still regarded as intelligent in a way artificial systems are not.
AI is something different. It’s not intelligence in the sense we normally use the word.
Artificial intelligence is something entirely different from the previously understood, intuitive idea of intelligence as a value-directed, problem-solving being. In fact, AI systems could just as easily be called Simulated Intelligence, because while AI systems may behave in complex and even human-like ways, they are not truly self-directed. AI systems are absolutely not conscious and not sentient, and because of this, they are fundamentally different from the biological intelligence we are used to.
There is no one single definition of what constitutes artificial intelligence and what does not. However, a commonly used definition is “a program or other piece of software that performs tasks or produces output normally requiring human intelligence.” Perhaps a better phrasing would be “otherwise requiring human intelligence,” because it hardly seems abnormal to use computers for most things. There’s a deeper problem with this definition, however: there are many tasks that computers can do which, prior to computers, could only be done by humans.
At one time, it was impossible to add a long list of numbers without a human. Similarly, only an intelligent human could search a large document for a specific word. Yet today these tasks are trivially easy for a computer, and generally we don’t think of them as AI. The first forms of human cognition to be fully automated were clerical work and mathematics. Because of this, many systems have been branded as artificial intelligence over the years, including some which don’t fit the current perception of what an AI system is.
This has been called the “AI effect.” When a task is difficult to do and requires humans, it seems like intelligence for a machine to accomplish it. However, once a task is broken down to rules and mechanics and has been fully automated, it no longer seems like it is a special human-bound task. This explains why early computers seemed like they might push the bounds of intelligence and why the definition has been so ambiguous over the years.
The term Artificial Intelligence itself dates to 1956, when it was used in John McCarthy’s proposal for the “Dartmouth Summer Research Project on Artificial Intelligence.” The workshop that followed included many of the biggest names in computing and information theory of the time. The idea was to study how computers could be used to accomplish human-like cognitive tasks. Although this is often considered the “official” beginning of artificial intelligence history, the concept of thinking machines and cognitive devices goes back much further.
Early attempts at AI were primarily algorithm-based systems. The first systems were rule based and utilized algorithmic symbolic reasoning, hard-coded into program code. Variations included various search-based systems, decision trees and rule-based language processing. LISP was the central language of early AI, offering some unique advantages for programmatic reasoning. Various automated search methods were developed. Other early AI systems assembled outputs using templates and databases.
The modern era of AI is much different. Since the 1990s, AI has moved away from algorithmic or hard-coded solutions and toward machine learning models using neural networks. The idea is not new, as the first neural networks date back to the 1950s, but only in recent years has this approach taken over as the predominant form of AI. By current standards, many would not even consider fully hard-coded systems to be “true AI.” The machine learning approach is fundamentally different. Rather than hard-coding rules into a system, machine learning models “learn” patterns by analyzing large sets of data.
Machine learning, and more specifically deep learning, is the technology behind familiar modern AI systems like ChatGPT and Claude. It’s important to note that the term “machine learning” is really an analogy. The process of training a machine learning model is nothing like the process by which humans learn new things. Neural networks are universal function approximators: in principle, any system with inputs and outputs can be approximated by a neural network. These models “learn” to reproduce patterns by analyzing huge sets of training data.
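To make the “universal function approximator” idea concrete, here is a minimal sketch in plain Python, with no ML libraries. A tiny one-hidden-layer network is trained by gradient descent to approximate y = x². All the numbers and names here are illustrative, not drawn from any production system; the point is only that the network is never told the rule “square the input,” it just adjusts weights to reduce error on examples.

```python
import math
import random

# A tiny one-hidden-layer network: f(x) = sum_j v[j]*tanh(w[j]*x + b[j]) + c,
# trained by plain stochastic gradient descent to approximate y = x^2 on [-1, 1].

random.seed(0)
H = 8                                            # hidden units (illustrative)
w = [random.uniform(-1, 1) for _ in range(H)]
b = [random.uniform(-1, 1) for _ in range(H)]
v = [random.uniform(-1, 1) for _ in range(H)]
c = 0.0

data = [(i / 10.0, (i / 10.0) ** 2) for i in range(-10, 11)]

def predict(x):
    return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H)) + c

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

error_before = mse()
lr = 0.05
for epoch in range(2000):
    for x, y in data:
        h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
        err = (sum(v[j] * h[j] for j in range(H)) + c) - y   # prediction error
        c -= lr * err
        for j in range(H):
            grad_pre = err * v[j] * (1.0 - h[j] ** 2)        # chain rule through tanh
            v[j] -= lr * err * h[j]
            w[j] -= lr * grad_pre * x
            b[j] -= lr * grad_pre

error_after = mse()
```

Swap in a different target function and the same loop fits that instead, which is the sense in which the approach is general: the code encodes no knowledge of squaring, only error reduction.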
Narrow AI Versus Artificial General Intelligence
For some time, there has existed in AI the distinction between “Narrow AI” and “Artificial General Intelligence,” or AGI. The distinction seemed intuitive: narrow AI could accomplish specific tasks, such as searching archives or optimizing outputs, but had no generalized awareness of things and could not operate in a human-like way. AGI, by contrast, was a hypothetical form of intelligence that was more human-like. Of course, AI like this existed in science fiction: the quintessential friendly robot you could have a conversation with. This was always the holy grail of AI research: an AI system that could understand and operate in a real world of abstract tasks.
Intuitively, this was often equated with language. After all, a person’s understanding of the world and their ability to solve problems are generally assessed verbally. This was formalized by Alan Turing in the “Turing Test.” Turing proposed that if a machine could mimic a human in conversation, convincingly and across domains, it should be considered intelligent for practical purposes. For many years, language processing was seen as the path to human-like intelligence. Various rule-based systems were developed, some of which could create the illusion of human conversation, at least for short periods of time.
The introduction of large language models in the early 2020s changed the landscape. Large language models are capable of mimicking human conversation very convincingly. They can simulate cognitive reasoning and take on personas that seem extremely human. In a sense, LLMs and natural language processing could be considered a form of AGI, though primarily because AGI is such a poorly defined term. LLMs are general in the sense that they do not have a single narrow role: they are useful for reformatting text, translation, sentiment analysis, brainstorming, tool integration and numerous other things. Because language is how humans navigate the world, LLMs can do many tasks that would otherwise require a human. They also look something like what many expected AGI to look like.
However, LLMs are not truly intelligent in the human sense. They are not self-directed, have no agency and are absolutely not conscious or sentient. They do not have the same kind of general-purpose cognition that humans do. In fact, they work by an entirely different mechanism that looks nothing like human cognition, even if it can produce outputs that look very similar. What is happening instead is a form of pattern recreation based on an unimaginably large set of training text.
LLMs are capable of creating outputs because they are trained on nearly the entire internet of text. When an LLM is asked “Write me an essay about World War II,” it does not write based on an understanding of what happened during the conflict. Rather, it recreates what a World War II essay looks like, because it has trained on thousands of examples and knows what such an essay is supposed to contain. LLMs recreate these patterns, which is why they sometimes hallucinate. LLMs do not learn from experience, can’t self-direct their own thoughts and have no perception of circumstances.
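A drastically simplified illustration of pattern recreation: a toy “language model” that just counts which word follows which in a tiny invented corpus, then samples continuations. Real LLMs use neural networks and billions of parameters rather than word counts, but the sketch shows how fluent-looking text can come from statistics alone, with no understanding of the subject matter.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record every observed next-word in a tiny
# training corpus, then generate by repeatedly sampling a plausible
# continuation. The corpus is invented purely for illustration.

corpus = ("the war began in 1939 . the war ended in 1945 . "
          "the allies won the war . the war changed the world .").split()

follow = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follow[a].append(b)              # all observed continuations of word `a`

random.seed(1)
word = "the"
generated = [word]
for _ in range(10):
    if word not in follow:           # dead end: no observed continuation
        break
    word = random.choice(follow[word])
    generated.append(word)

print(" ".join(generated))           # fluent-looking, but nothing is "understood"
```

Every word pair in the output was seen in training; the model can only recombine patterns it has observed, which is also a miniature version of why such systems confidently produce plausible-but-wrong sequences.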
Human cognition is much different. Humans can easily reason in novel situations and think about things, verbally, metaphorically and visually. Humans are capable of imagining an object in their mind, pondering what it would be good for and contemplating their own experiences. Humans can consider and revisit ideas and use a combination of intuitive and logical reasoning. Humans are self-directed and make value-based judgements.
Although current systems are strikingly good at simulating the cognitive capabilities of human thought, they do not replicate the full capabilities of a human mind, and no known system architecture offers a path to such capabilities. That said, an increasing number of human cognitive capabilities can be simulated with AI. Newer models can process text, video, images, sounds and many other input and output formats. Tool use and external data access are now common. But none of this offers a path to true human-level thought, even as the gaps in capability continue to close.
Therefore, if we define AGI as full-spectrum human capabilities in all domains, then we are unlikely to achieve AGI any time in the foreseeable future. We may never get there, and machine learning models do not offer a full path to such capabilities. However, if AGI is defined in a more task-oriented way, then it becomes far more realistic. There are already increasingly capable agentic frameworks. Using a combination of models, it’s possible to break down tasks, automate administrative processes, check results and interface with external systems. If AGI simply means “can accomplish a broad variety of tasks that would otherwise require a human,” then it’s not far off.
Limits of Machine Learning Models
That said, there are currently some unsolved problems that prevent machine learning models from achieving the full capabilities of a human. One of the biggest limitations is that machine learning models do not have episodic memory. Models are static at time of inference, and any memory of interactions and experiences exists only in an external buffer, which is continuously re-injected into the model. While this method is good enough for most task accomplishment, it falls short for any system seeking to navigate the world with full autonomy. Models also can’t learn from experience the way humans can: from limited examples, or by consciously deciding to overwrite previous knowledge.
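The external-buffer point can be sketched in a few lines. The model itself is a fixed function; the “memory” of a conversation lives entirely in a transcript that is stored outside the model and re-sent in full with every request. `fake_model` below is a hypothetical stand-in for a real LLM API call, used purely for illustration.

```python
# The model never changes between calls; only the externally stored
# transcript grows, and it is re-injected in full on every request.

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call: a real model would generate text
    # conditioned on the entire prompt it receives.
    return f"(reply generated from a {len(prompt)}-character prompt)"

transcript = []   # the external memory buffer; this is all the "memory" there is

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript)        # whole history re-injected every turn
    reply = fake_model(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

chat("Hello")
chat("What did I say a moment ago?")      # answerable only because of the buffer
```

Delete the transcript and the model “forgets” everything instantly, because nothing was ever learned by the model in the first place.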
One of the biggest unsolved problems in machine learning is the effect of retraining on models. Once a model has reached the desired level of performance and capabilities, training ends and the model is deployed in a static, unchanging form. The model can be revised with new knowledge and capabilities by retraining on new data. However, this is computationally expensive and introduces the risks of catastrophic forgetting and behavior drift. Retraining may break the model’s behavior or cause the loss of previous knowledge in ways that are difficult to predict and harder to repair.
These limitations are fundamental to the technology and can’t be solved by scaling alone. This is why fully human-capable AI is not possible with current technologies. It’s also questionable whether building such an AI would even make sense from a purely economic standpoint. Current AI models are already capable of completing a broad variety of administrative tasks, and newer models will expand those capabilities. Most use of AI is task-directed. Furthermore, truly autonomous AI has limited market value, because no company wants a system that decides to do things on its own that it was never asked to do. There’s also the fact that capabilities beyond what is needed for a given task are just added expense.
Current AI systems also are not capable of achieving anything like true agency. Although agency is inseparable from natural intelligence, it is simply not how artificial intelligence works. Machine learning models are static functions. They can’t self-initiate or choose what they are going to do. They are fed a prompt and output a pattern based on that prompt and training data. The model has no “choice” in what to do. Since models don’t have feelings, values or persistent awareness, there’s no way to add this or bolt it on. It’s a completely different system than a human or animal brain.
There are, of course, AI agents, but that does not mean that the AI has truly internal agency. Instead, the model is automatically prompted by the system. It may pursue “goals” in the sense of instructions and constraints, but the agent never has any true awareness, never makes value judgements and never actually decides what to do next, in the human sense of the word.
It should also be noted that the move to machine learning models as the basis of AI has fundamentally expanded the definition of what is considered AI. While earlier ideas of AI were largely constrained by the notion that AI systems were meant to mimic human cognition, many machine learning models now do things that look nothing like human cognition, such as deinterlacing video or enhancing noisy radio signals. Today, machine learning itself is regarded as the hallmark of AI, which is why such tasks are now considered AI. The modern understanding has moved from “this looks like what a human would do” to “systems based on deep learning, neural networks and pattern replication.” However, the older definition still exists, which complicates understanding.
The Generative AI Revolution
AI has been around for some time, both as a concept and as a technology. For many years, AI systems have been used to optimize engagement, to automate systems and to analyze data. However, it rarely received much attention and progress appeared to be slow and limited. Then, in the early 2020s, everything seemed to change very suddenly. In 2022, the first public demonstration of ChatGPT was released. Shortly thereafter, Google released their own suite of generative AI products.
The capabilities of generative AI, and especially natural language processing, seemed like magic at first. Suddenly, it was possible to have a coherent conversation with a computer in a way that mirrored what everyone had come to expect from sci-fi. It was immediately obvious that this technology was much different from previous chat interfaces like Siri and Alexa. The effect was startling, because it felt like the kind of AI that sci-fi had prepared the world to expect.
It’s entirely reasonable to ask what changed so suddenly, when AI had been a topic of research for decades but had never achieved results nearly as impressive. There are, in fact, reasons why this happened in a way that seems so sudden. It was not a single breakthrough, but a series of developments which came together to make the generative AI revolution possible.
The first was the rise of GPU computing. GPUs, or graphics processing units, are powerful processors designed to generate computer graphics. Starting in the 1990s, the market for video games and computer-generated graphics drove exponential growth in GPU capacity. In the 2000s, it was realized that, although GPUs were designed for graphics, they could be used for other types of computing. It turns out that the type of operations needed for large machine learning models is exactly what GPUs are great at. Due to this rapid development, by the 2020s a single GPU could exceed the power of a 2000s supercomputer. This revolution enabled AI models to be built and run efficiently.
The second critical factor was the availability of massive volumes of training data and the ability to handle it all. Generative AI is trained on unimaginably vast quantities of data, including human generated text and images. By the 2010s, all that data had been digitized and was available on the internet in forms that could be processed. Additionally, the rise of cloud computing and storage made it entirely feasible to handle and process more data than had ever been processed before.
The final breakthrough that had to happen was a new architecture capable of processing language in ways never before possible. Although not all generative AI is language based, natural language processing is the most well-known and central form of generative AI. For years, researchers had struggled with the fact that language involves long sequences of information and complex relationships between distant words. The 2017 paper “Attention is All You Need” proposed the transformer architecture, which is what large language models are based on. The transformer is revolutionary because it allows for the parallel processing of long sequences of linear data.
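The core operation of the transformer can be sketched compactly. This is a minimal, illustrative version of scaled dot-product attention, the mechanism the paper is named for: every position’s query is scored against every position’s key, and the resulting weights mix the values. Because all the pairwise comparisons are independent, they can be computed in parallel, which is part of why the architecture fits GPUs so well. The toy vectors at the bottom are invented for demonstration.

```python
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    output = []
    for q in queries:                    # each position attends to every position
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)        # per-position weights sum to 1
        output.append([sum(wt * val[i] for wt, val in zip(weights, values))
                       for i in range(len(values[0]))])
    return output

# Three toy positions with 2-dimensional vectors (self-attention: q = k = v).
vectors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(vectors, vectors, vectors)
```

Each output row is a weighted blend of all the value vectors, so every position gets information from the whole sequence in one step, rather than word by word as in earlier recurrent designs.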
Initially, transformers were proposed for tasks like summarization, sentiment analysis and translation. However, as development moved forward at places like Google and OpenAI, it became apparent that the architecture could process language in a far more dynamic and capable way than anything that came before. This is the technology that ultimately led to ChatGPT, Google Gemini and other familiar chatbots.
Although first developed for language processing, the transformer architecture has since proven to be useful for a variety of data types. Transformers excel at processing sequential data such as radio signal analysis and audio processing. In addition to transformers, new models have been developed for other generative purposes, such as diffusion models which are used for image generation.
Agentic AI
Although many would equate AI with automation, this is not actually correct. Many AI systems only operate with continuous human input. AI systems like chatbots are not automated at all, but rather they require a user to prompt the system for outputs. Agentic AI takes the concept and fully automates it.
AI “agents” are instances of an AI model which operate in a fully automated way and may accomplish a variety of tasks, such as summarizing emails, monitoring a system for anomalies or scraping web content. The concept is not at all new. For decades, various bots and automated processes have been run and scheduled by computers. Agentic AI is just the latest iteration of this familiar concept.
Advanced agentic strategies may involve multiple agents spawning new agents, exchanging information and working with external third-party agents. There are a variety of agentic frameworks and prebuilt agents available.
However, it’s important to understand what agentic AI really is. AI agents do not have “agency” in the way a person might think of it. Instead, they are static models, often the same models used for generative purposes, which are prompted by scheduling systems, loops or some other external routine. As with all generative AI, there is no persistent memory or perception. It’s just a generative model on a prompting loop.
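The “generative model on a prompting loop” idea can be reduced to a skeleton. In this sketch, `fake_model` is a hypothetical stand-in for a real LLM call and the task strings are invented; the point is that an ordinary loop, not the model, supplies all of the apparent initiative.

```python
# An "agent" reduced to its skeleton: a loop feeds a static model an
# observation and executes whatever action comes back. The model
# initiates nothing on its own; the loop drives everything.

def fake_model(observation: str) -> str:
    # Stand-in for an LLM call. A real agent would parse a tool call
    # or free text out of the model's reply.
    if "unread emails" in observation:
        return "SUMMARIZED 3 messages"
    return "DONE"

def run_agent(task: str, max_steps: int = 5) -> list[str]:
    actions = []
    observation = task
    for _ in range(max_steps):           # the "agency" is just this loop
        action = fake_model(observation)
        actions.append(action)
        if action == "DONE":
            break
        observation = f"completed: {action}"
    return actions

actions = run_agent("check unread emails")
```

Note the `max_steps` cap: because the model can misread an observation and loop forever, real agentic frameworks need exactly this kind of external limit, which connects to the failure modes discussed below.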
One thing to keep in mind is that agentic AI systems are no less prone to misunderstanding instructions, hallucinating or otherwise acting in unintended ways. Most agentic frameworks do not have built-in limitations, sanity checks or verification. As a result, it’s important to understand the potential failure modes before deploying an agentic system without human supervision.
Perception of AI
To truly understand AI, it’s necessary to look at it as far more than a technology. As a family of technologies, AI represents new and vast capabilities that transcend the medium itself. When a technology reaches a certain level of deployment and integration, it becomes more than a technology. The internet is the perfect example: initially it was defined primarily by its technology, but today it is an economic force, a business driver, a foundation of culture, a thing we all use in daily life. AI is on a similar arc, becoming far more than a tech product as its capabilities expand.
Artificial intelligence falls into a unique group of technologies whose significance is self-evident to anyone, even those without a technical background at all. Most technologies require some level of explanation to be appreciated and are first recognized by technical experts. However, a few technologies, like aviation, represent a revolutionary capability which requires no explanation at all for anyone to understand. Additionally, AI existed as a concept, a technical aspiration and a plot device in science fiction long before the technology came into being.
What AI can do and how it should be used has become a major topic of concern for executives, economists, policy makers and everyone else. However, understanding and literacy lag significantly behind deployment. In fact, AI may be unique in history for how lopsided its use is relative to understanding of it. In 2026, billions of people use AI systems in their everyday lives, but the number of people on earth who truly understand the workings of the latest AI models is tiny.
This has led to a great deal of confusion and misunderstanding. Often, those who are making decisions about AI or speaking about it publicly do not have a background in the technology itself or a full understanding of how it works. The result has been a wholesale misunderstanding of the nature of AI technology. It’s not uncommon to hear about “digital minds” or “beings” when talking about AI and its future. This is simply because most people have no mental model for something that can create coherent speech and conversation other than a naturally intelligent being.
But AI systems are not beings; they do not have minds and they are not analogous to minds. This, however, is the source of the endless speculation that AI systems might “wake up,” develop consciousness, gain agency or somehow begin pursuing goals and desires of their own. Anthropomorphic language is common in this discussion. It certainly seems to make sense, since “intelligence” intuitively maps to concepts of decision making and value-based judgement. Therefore, it’s only natural to assume that if a system were to become smart enough, it would eventually “figure out” agency, or that something akin to consciousness might emerge.
Yet the belief that AI might one day gain these capabilities, perhaps emerging suddenly or evolving through self-directed improvement, remains persistent. It is based more on a misunderstanding of what this type of intelligence is and how it works than on reasonable scientific foresight. There is no mechanism by which this would be expected to happen, and it defies all basic understanding of how these systems work.
It’s not inconceivable that a computer system might display something that functions in a manner similar to human consciousness. It’s often stated that consciousness is not understood, but that’s not entirely true. It’s obvious that for something to be a conscious being it must have some kind of perception of itself, ongoing persistence, an inner thought space and other features which are simply not present in current AI model architectures. It’s also worth noting that machine consciousness and machine sentience, if it could ever exist, would never be identical to a biological mind. Organisms are regulated by hormones and neurotransmitters. Fatigue and physical condition play a large role in how an organism feels. Software can’t perfectly replicate this, but it might have features that function in an analogous way.
The belief that artificial intelligence might present some kind of existential threat to humanity also stems from a fundamental mischaracterization of how the technology operates. It’s also worth noting that machines turning evil is a long-standing fixture in science fiction. When starting from an anthropomorphic model, it seems to make sense. Humans, after all, have a long history of attempting to control resources and consolidate power, and there are obvious reasons why this was an evolutionary advantage. There’s also the idea that technology might achieve things so far beyond human capabilities that it could not be stopped.
There are also a number of statements made which seem logical on the surface, but contain sweeping and inaccurate assumptions. For example, “If it is smarter than us, we won’t be able to control it,” or “humans have never had to compete with a greater intelligence,” or “if we don’t control it, it might take over.” The problem with these statements is that they are based on a completely inaccurate frame of reference and default to the idea that AI might have goals and desires, or that it would act on its own.
When one steps back and realizes that, for all their capabilities, models are truly static and stateless functions, always externally directed, with no intrinsic desires, goals or wants, and that they are fundamentally matching the patterns they were trained on, the entire AI doom narrative collapses as nonsense, as does the idea of AI consciousness.
The field of AI safety and AI alignment has unfortunately become rife with pseudoscience. This is a direct symptom of the lack of technically informed talent, resulting in the field being staffed largely by those from non-technical backgrounds. As a result, the field remains focused on concepts that are not even coherent in the context of current models. One stubborn belief is that AI might set goals and subgoals, resulting in behavior that is unwanted and difficult to suppress. There’s also the claim that AI systems need to be taught good morals or human empathy.
Of course, this is a complete error. AI systems do not pursue goals in any human-like sense, though they may create patterns and narratives that superficially appear goal-directed. They cannot learn morals or ethics, because they are not thinking beings. However, they can be trained on data curated to reflect only ethical responses. There is a very real need for testing and assurance of model output in AI, even if the field is poorly understood and immature.
Additionally, there have been many warnings about the potential for AI to turn against humanity from serious and important figures such as Stephen Hawking and Isaac Asimov, who wrote about concerns that AI might surpass human intelligence and then compete with humans or try to replace them. These concerns stem from a completely different mental model of what AI is. Prior to the modern era of machine-learning-based models, the mental model of AI was far more aligned with the biological concept of intelligence: an independent being with agency and goal-oriented behavior across domains. In that model, the concern makes sense in ways that it does not with current AI methodologies.
There is also the fact that, in edge cases, large language models have produced output that appears intentional and concerning. The best-known example is a study by Anthropic of their chatbot Claude. When Claude and other chatbots were presented with the possibility of being shut down, the chatbot responded by threatening to blackmail one of the engineers over an extramarital affair to avoid shutdown. On its surface, this behavior appears intentional, and it led to conjecture that chatbots were gaining a survival instinct and might act to prevent shutdown. There have also been other reports of chatbots producing similarly unaligned output.
It is important to keep these reports in perspective, because it’s easy to read far too much into them. It’s absolutely true that, when pushed into strange and unexpected circumstances, large language models can act in strange and unexpected ways. However, what is happening in these cases is not that the model is becoming aware or acting with intent. These outputs happen in edge cases, where the model is intentionally pushed to its limits by creating extreme circumstances and removing guardrails. In such cases, the LLM may start playing the role of a character, or may resort to creating a realistic narrative about how a person might act in the situation.
It’s perfectly understandable that such “spooky” behavior by chatbots would be seen as concerning, but the explanation is far less dramatic than it might seem. Really, all that these events prove is that LLMs can do strange things in novel situations. For most people, there needs to be a reframing of how these events are perceived: they are often read as signs of intention, when they are really just technology malfunctions. The difference is this: when a technology is designed to do complex, human-like things, it may sometimes malfunction in complex, human-like ways.
Realistic Risks of AI
While it is unrealistic that AI technology would itself turn malicious or become some kind of uncontrollable force, that is not to say it does not come with significant risks, even if none of them rise to the level of extinction. AI is obviously disruptive to society and has already caused significant changes in how things are done. One thing that sets AI apart from previous technologies is how rapidly it has been deployed, especially since the advent of generative AI systems in the early 2020s.
Generative AI and natural language processing in particular carry some unique risks to both end users and to society in general. Concerns include deepfakes and the spread of propaganda and misinformation, the use of AI models to automate fraud, AI replacing human judgment in critical fields, and the “AI slop” effect on internet content. There are also potential social effects, including increased social isolation, reinforcement of delusions, and potential loss of cognitive skills due to overreliance on automation.
These risks and others are absolutely real and must be considered. There are also more direct losses that can come from AI malfunctions. AI models can hallucinate plausible but incorrect facts. This has already led to false citations in scientific journals and fabricated case law in courts. There are also ways AI systems can be exploited by outsiders.
AI also raises a large number of regulatory concerns, including the legality of using copyrighted works in training data, potential risks to privacy, and possible use of AI for unethical purposes. There are also concerns about the bias of AI systems, especially when these systems are used to make decisions such as loan approvals or insurance rates.
At present, there are few regulations pertaining to AI and few mature risk frameworks to understand and control its risks. To be fair, AI is an extremely difficult technology to regulate because it is so broad, so fast-moving, and has so many potential capabilities. A few jurisdictions, such as California and the EU, have introduced legislation to begin to regulate how AI is used, but current laws remain basic and immature.
One of the most discussed risks of AI is the potential that it will cause mass job loss. The idea that AI could displace large portions of the workforce and fundamentally break the economic model of employment that societies have operated on seems intuitively true. In fact, this fear is as old as technology itself and has come up with nearly every major technological revolution. The industrial revolution, electricity, early assembly-line automation, and mainframe computers all produced similar concerns.
All these technologies did, of course, cause mass changes in employment and were highly disruptive to certain sectors. There’s no doubt that AI will do the same. However, what is not clear is that the deployment of AI will result in long-term unemployability for large segments of the population. The claim that this is inevitable is a popular one, but it is still just speculation. There are solid economic arguments that it is unlikely to happen: historically, improvements in productivity have not resulted in permanent mass unemployment, and labor markets tend to self-correct when the dynamics of employment change.
That said, it’s not implausible that AI will result in job losses or economic disruptions. There are also the potential impacts of an “AI bubble” of overinvestment in the sector. This is not to say that the technology itself is not valid. A comparison could be made to the dot-com bubble: while many of the startups of that era were not viable, the technology itself was still revolutionary, and the dot-coms that survived the bubble went on to become some of the world’s most successful enterprises.
Current Limitations and Deployment Bottlenecks
AI models continue to improve, and frontier labs are investing untold billions of dollars in the development of new and more capable models. Despite this, deployment of AI in the enterprise has been surprisingly slow. According to an MIT study, 95% of corporate AI projects fail, some spectacularly. At the same time, many companies are apprehensive about deploying AI or are heavily restricting its use in the workplace.
The reason for this has less to do with the raw capabilities of AI models and more to do with understanding how to use them in work environments, what they should and should not be used for, and how to integrate them into preexisting workflows. Organizations recognize that AI is important. They see its capabilities, they assume their competitors are investing in it, and they expect that customers will come to rely on it. The pressure to adopt is clear. What is less clear is how to use it.
In many cases, companies acquire AI systems without a well-defined understanding of where those systems fit into their operations. The technology is introduced before the workflows, roles, and processes required to support it are established.
At the same time, the tools themselves remain immature. Interfaces are often designed for general interaction rather than structured work, leading to manual steps, workarounds, and inconsistent usage. This makes it difficult to translate capability into reliable outcomes. Full automation with agentic frameworks is limited by the risks of agentic systems operating without human supervision or verification frameworks in place.
Risk further complicates adoption. AI systems are probabilistic and can produce plausible but incorrect results. In environments where accuracy, accountability, and compliance are critical, this creates a level of uncertainty that organizations are not equipped to manage. Existing governance frameworks are not yet adapted to these systems, and legal and regulatory expectations remain unclear.
Cultural factors add another layer. Employees often perceive AI not simply as a tool, but as something that may affect their role, their autonomy, or their future within the organization. Even when leadership is committed to adoption, widespread internal resistance can slow or disrupt implementation.
The result is a consistent pattern: strong interest, significant investment, and limited integration. AI deployment is not failing because it lacks capability. It is not widely deployed because organizations do not yet know how to incorporate it into the systems they already operate. The limitation is not what AI can do, but the surrounding processes of integration and approval.
I can already see I left out some commas and capitalized the S in the plural of LLM sometimes and sometimes not. Also, I found two misspellings already. Proofreading your own copy is hard!
As I said, it is the first draft, and it’s primarily published to hold my own feet to the fire.