First Draft of First Chapter of My Book “Understanding AI”

So I have decided to write a book. Crazy, perhaps. It is something I have wanted to do for years, and I have faced many false starts along the way. I think a lot of people have been through that. It’s not easy, but I knew that going in. It’s actually harder than I ever imagined, but then again, this is the first time I have truly committed to the pursuit.

The reason I chose to write this is that AI is the most poorly understood topic I have ever seen, and it has been wrapped in mythology. This is especially true of “AI Doom.” The problem with these beliefs is that they are difficult to refute without teaching a full lesson on AI. So, I guess that’s what I’m going to have to do! I’m not complaining, though. The fact is, there is really no in-depth guide for smart people who want to truly understand artificial intelligence.

I want to be clear that this is a first draft of the first chapter. It will be revised, probably several times. It currently lacks citations, but those will be added. Citations will be minimal because this is not a scholarly work, but it should still have basic source credibility. Additionally, this chapter contains assertions that some might call bold or unsupported. There’s a reason for that: this is just the first chapter.

Subsequent chapters will explore the history of AI as a concept and a technology, including its prehistory and how the rules of formal logic, early computers and language itself set the stage for a new form of data processing, the one we now call artificial intelligence. The book goes on to explain the theory of deep learning and neural networks, why this approach won out, and will take readers all the way to understanding natural language processing.

It’s not designed to be easy or dumbed down. However, it is intended to be complete enough for any lay person to fully understand why AI is the way it is, how we got here and where it is likely to go next. It is also not entirely technical. It covers historical concepts, public perception, regulatory issues and economic realities.

I also want to add that there are a number of people out there writing completely uninformed hot takes on AI, and this book is intended to be the opposite of that. It’s down-to-earth and skeptical, and it presents the truth about artificial intelligence.

The working title is “Understanding AI.”


AND HERE IT IS!


We Need To Take AI Doomerism Seriously

No, not the actual belief. The idea that AI will become a cartoon supervillain and wipe out humanity is as idiotic as it sounds. The danger is that the belief is gaining credibility and being taken seriously, and it is a dangerous red herring. The grifter economy of AI doom is a self-serving scam, but the consequences are real.

I personally find AI Doom to be more than a nuisance. Most do not realize this, but when you pull back the curtain, the movement is built on a strong cult-like belief system and has spawned some extremely disturbing rhetoric. There have been threats against AI labs and ridiculous proposals for legislation to pause or stop AI research. All kinds of outlandish claims are being made, and they are getting media attention. Almost nobody seems to be aware of the true nature of this ridiculous idea.

Most AI experts have disengaged from the nonsense of AI doom. After all, it is tedious, and nobody wants to deal with their area of expertise being stepped on by people who don’t know what they’re talking about. However, this disengagement is dangerous. Doom movements have grown around other technologies: nanotechnology, vaccines, genetic engineering, nuclear energy and others. What we know is that these movements, unhinged and unsupported though they may be, do not go away, and they frequently lead to bad legislation and major problems for industries that do not fight back.

AI doom is an especially pressing threat because it is receiving mainstream legitimacy and attention, which should be seen as a major problem in and of itself.

What Is AI Doom?

AI Doom is the basic idea that AI poses a unique and existential threat to humanity: that AI systems and ML models might become “superintelligent” and therefore impossible to stop from causing harm. It is predicated on two further ideas, first that intelligence is a scalar, something that can be increased simply by adding compute, and second that the resulting intelligence would somehow achieve autonomy and become either hostile to humans or determined to eliminate humans in order to gain more resources.

It’s not an entirely new idea. It has been a trope in science fiction for some time. It draws on universal fears, like not understanding technology, being replaced and dehumanization, and decades of science fiction have conditioned people to find such an outcome reasonable.

Doom literally refers to an “end of the world” scenario, but there are other doom-adjacent beliefs and claims, such as the idea that AI will lead to a permanent dystopian society where employment is impossible and power is consolidated, or that AI may enslave humanity.

Importantly, while there are absolutely a variety of real risks associated with AI adoption, the idea of a species-level risk from the technology gaining self-motivation and setting its own goals is not plausible at all.


New YouTube Channel and Focus on AI

Artificial Intelligence has more mysticism around it than just about any other subject out there. I’ve never seen a subject so poorly understood and so sensationalized. It’s a technology that everyone seems to realize is big, revolutionary and important. But that has only resulted in a huge amount of mythology.

Few people understand AI from a technical perspective, but just about everyone *thinks* they understand it, because it seems so intuitive. It seems like you can just talk to it and it understands, so the implications are obvious, right?

Right now the world has a shortage of AI experts who understand the tech, and even fewer of them are mature risk managers. That has skewed the conversation toward sensationalism. Most AI leaders are not even knowledgeable about the tech, and the media rewards high-drama narratives. The skew takes a few forms. One of the most ridiculous messages is AI doomerism, the idea that AI might wipe out humanity; it’s cartoonish, but it receives more attention than it should. There are also claims of permanent unemployment. On the other end is AI utopianism. And there are those insisting AI might become conscious or a moral patient. Yes, that is also being taken seriously.

It’s really a subject that attracts all kinds. But few people realize that, like any technology, AI and ML have fundamental limits and capabilities. They’re not magic. The recent AI summit in India would have you think otherwise, though, with ubiquitous claims of being close to superintelligence.

And so, as one of the few AI technical experts willing to address this problem, I have launched a new YouTube channel and will be focusing primarily on this topic: AI risks, mitigations, technology and truth. It’s called AI Sanity, on YouTube.