We Need To Take AI Doomerism Seriously

No, not the actual belief. The idea that AI will become a cartoon supervillain and wipe out humanity is as idiotic as it sounds. The danger is that the belief is gaining credibility, being taken seriously and acting as a dangerous red herring. The grifter economy of AI doom is a self-serving scam, but the consequences are real.

I personally find AI Doom to be more than a nuisance. Most people do not realize this, but when you pull back the curtain, the movement is built on a strong cult-like belief system and has spawned some extremely disturbing rhetoric. There have been threats against AI labs and ridiculous proposals for legislation to pause or stop AI research. All kinds of ridiculous claims are being made, and they are getting media attention. Almost nobody seems to be aware of the true nature of this ridiculous idea.

Most AI experts have disengaged from the nonsense of AI doom. After all, it is not interesting work, and nobody wants to deal with their area of expertise being stepped on by people who don’t know what they’re talking about. However, this is dangerous. Doom movements have grown around other technologies: nanotechnology, vaccines, genetic engineering, nuclear energy and others. What we know is that these movements, unhinged and unsupported though they may be, do not go away and frequently lead to bad legislation and major problems for industries that do not fight back.

AI doom is an especially pervasive threat, and it is receiving mainstream legitimacy and attention, which should be seen as a major problem in and of itself.

What is AI Doom

AI Doom is the basic idea that there is some kind of unique and existential threat to humanity, based on the notion that AI systems and ML models might become “super intelligent” and therefore impossible to stop from causing harm. It is also predicated on the ideas that intelligence is a scalar, which can be arrived at through increased compute, and that the resulting intelligence would somehow achieve autonomy and either become hostile to humans or seek to eliminate humans to gain more resources.

It’s not an entirely new idea. It has been a trope in science fiction for some time. It draws on a lot of universal fears, like not understanding technology, being replaced and dehumanization, and on the fact that people have been conditioned by science fiction to regard such an outcome as reasonable.

Doom literally refers to an “end of the world” scenario, but there are other doom-adjacent beliefs and claims, such as the idea that AI will lead to a permanent dystopian society where employment is impossible and power is consolidated, or that AI may enslave humanity.

Importantly, while there are absolutely risks of a variety of types associated with AI adoption, the idea of a species-level risk from the technology gaining self-motivation and setting its own goals is not plausible at all.

No, It Did Not Arise Organically. It Was Never Organic

One of the most surprising things to most people is the fact that the AI Doom movement did not happen organically. It was not the result of independent researchers and experts coming to the conclusion that this was a real risk. It was not the result of any kind of expert review following evidence. Nothing at all like that. The modern AI doom movement is the direct result of an organized cult-like belief system which traces its roots to the early 2000s.

Eliezer Yudkowsky stands out as the original cult founder. He began his strange public life in the early 2000s. It is unknown exactly what his background was before this. He did not graduate from high school, which, to begin with, should indicate something went wrong. He crafted a personal brand as some kind of enlightened leader and visionary, began writing long self-referential manifestos, and founded a community around “rationalism.”

Rationalism may sound great, because being rational surely makes sense, but in this case the belief quickly grew into a cult-like, self-reinforcing group. In fact, rationalists are a very irrational group, an immature clique that claims to own rational discourse and is obsessed with the idea of AI Doom. Yudkowsky was elevated to the level of prophet and began to gather a large following. The movement spawned a number of seemingly normal, secular groups which, when examined, are far more fanatical than they seem.

Yudkowsky is, in fact, a charismatic cult leader. His ego is massive and fragile, his entitlement is extreme and his real achievements are few. He is just one of several leading figures who have cashed in on this movement. They have used it for self-promotion, and their message is simple and effective, in the way cult leaders’ messages always are. They have even managed to monetize it by selling a pseudo-intellectual book that presents nonsense as though it were reasonable and smart.

These groups have long been fixated on the idea of humanity being on the verge of a technical revolution of cosmic significance. The entire idea arose out of interest in transhumanism. One of the first cult groups was the Singularity Institute, which was followed by the Machine Intelligence Research Institute. None of these think tanks and nonprofits actually conducted research. The movement gained traction in Silicon Valley as a classic “tech cult” and managed to gain influence in other areas.

The entire movement, based on magical thinking about humanity’s cosmic manifest destiny, transhumanism, struggles against technology and “rationalism,” became highly popular in parts of Silicon Valley and gained a lot of financial support. It became a strange and disturbing subculture and spawned additional organizations based on these and other fringe beliefs.

The cult-like aspects of the movement are actually quite frightening. Eliezer Yudkowsky himself has called for the bombing of data centers. At least one splinter group has been implicated in homicide, and others have resorted to violence. Doomsday cults are known to be unstable and potentially dangerous, and the degree to which a handful of cult leaders are followed and quoted is also unsettling.

There is also the fact that the movement, as one would expect with cults of this type, is extremely ugly in other regards. It has endorsed a large number of fringe and extremist beliefs, and there are accusations of bizarre sexual beliefs and harassment. There are all the frightening and ugly things you would expect from a movement that is ultimately founded, at least in part, on social control. But it’s vital to understand that there are multiple motives at work here: while some clearly are falling into the trap of truly believing this, there are many others who are simply profiting from it.

There is clear appeal here, as there always has been with doomsday beliefs. Whether it is Y2K or a more religious cult, it offers a sense of purpose and community. Other aspects should stand out here as well. Transhumanism repackages the belief in life after death and immortality. Religious beliefs show up, even if they are wrapped in secular and pseudoscientific language. Ironically, this cult coalesces around a community called “LessWrong,” which is in fact a fanatical and toxic group of adherents.

Although the movement predates the modern generative AI era, it saw that era as an opportunity to really cash in, and it absolutely has.

AI Doomerism has become extremely lucrative. A large number of organizations have sprung into existence, and many people have been making money off of it or advancing their careers through it. In fact, most nonprofits related to AI policy are complete cranks, and they seem to be making quite a bit of money being cranks.

More info on the cult aspects of this bizarre belief system can be found at AI Panic.

The history and character of the cult of AI doom deserve their own post, as they are disturbing and extensive.

Yes, of Course, It’s Complete Nonsense:

It should go without saying, but unfortunately it does not, that the idea that AI will become some kind of god, replace humanity, become hostile and ultimately pose a threat of extinction is complete nonsense. There is no plausible path to anything like that happening. No current technology, nor anything remotely speculative, has any possibility of producing anything like what is being claimed. Nobody who is a serious developer of AI models takes this seriously.

The entire thing is built on a series of illogical and sweeping assumptions, a great deal of hand waving, malformed ideas and complete category errors. It is hard to refute because it wraps itself in plausible-sounding language and seemingly logical statements, and attempts to leverage institutional status and fear.

AI doom is wrapped in strawman arguments like “Haven’t we made mistakes before?”, “Shouldn’t we be extra careful?” and “Humans have always been the smartest, but what happens when we are not?”

But peering past the pseudo-intellectualism, the facts are a lot less exciting. The concept of superintelligence, which these ideas are based on, is fundamentally flawed. You can’t achieve true cognition through machine learning. ML systems are not capable of learning from limited experience and have no ability to integrate memories. ML models are stateless. While it may sound appealing to think they can just self-improve to infinity, the real world does not work like that.
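As a toy illustration of that statelessness, consider the sketch below. The generate() function is a hypothetical stand-in for any LLM inference call, not a real library API; the point is that nothing persists inside the model between calls, so all apparent “memory” is just text that gets re-sent.

# Hypothetical stand-in for a single LLM inference call.
# The model's weights are frozen; nothing about this call is remembered once it returns.
def generate(prompt: str) -> str:
    return "reply to: " + prompt.splitlines()[-1]

history = []  # all "memory" lives outside the model, in plain text that we maintain
history.append("user: Hello")
history.append("assistant: " + generate("\n".join(history)))

# To continue the conversation, the entire transcript must be re-sent on every call.
history.append("user: How are you?")
history.append("assistant: " + generate("\n".join(history)))
print("\n".join(history))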

There are many other reasons why these claims trigger nothing but eyerolls in those who understand the systems. AI systems do not evolve under scarcity. They lack all cognitive autonomy. They do not form goals or desires. They do not act on their own. They do not form plans. They can’t; it is inherent to the system and how the architecture works. It would take many more pages to explain the numerous reasons why what they are claiming is absolutely impossible, but it absolutely is.

Yes, there will be a very well-sourced and technical breakdown of all the reasons it is impossible.

Generative AI and Natural Language Processing Shocked People:

The concept of AI has been around for a long time, and most people are familiar with it as a buzzword or marketing hype. However, for many years one thing was clear: AI could be used for narrow, non-human-like tasks, but it had nothing that looked like the general-purpose intelligence a human has, including the ability to navigate complex social situations and understand metaphor. That was always said to be the domain of “Artificial General Intelligence,” the kind of friendly robot we see in sci-fi.

Language modeling and generative AI changed all that. Few people outside the research and development community had ever experienced natural language processing before 2023, and the capabilities legitimately startled people. Suddenly, it was possible for a computer to have nuanced, complex and seemingly intelligent conversation. This was something that had always been predicted to be the hallmark of “true” intelligence.

It’s important to note that the ability to process words and produce coherent responses is not the kind of multi-domain capability that human cognition entails. It does not even come close. NLP (natural language processing) is prone to errors and has difficulty with long and complex instructions. It is not human-level capacity at all, but it can sure seem like it sometimes. It’s understandable that the public would become confused and wonder whether this meant something like human-scale intelligence was possible or close by.

The narrative has been simplified to the idea that AI burst onto the scene and is making amazing gains toward some mythical “superintelligence.” Because these capabilities are so impressive, people’s baselines for what they believe is possible have been perturbed. Most people have no idea how the technology works or what its limitations are. To be fair, the concepts are new and complex.

It has been hard for those outside the field to fully understand what is going on. There is no baseline for knowing what to reject as nonsense. The technology is so unfamiliar that anything seems plausible. Additionally, the nature of the technology and its human-like capabilities can be unsettling. It brings up natural questions like “what if it becomes more powerful and wants to do bad things?” For most people, this question seems entirely reasonable. There is also the obvious and age-old concern about automation replacing jobs.

People look at AI like it’s magic, and that has made it fertile ground for grifters.

VERY Few Fully Understand the Technology:

It’s important to note that the technology driving the current growth of generative AI is extremely new, and it is unique even within the field of IT and software. Traditional computer science education focuses on programmatic logic, algorithms and data structures. This is how most software is created, and 99% of IT professionals have worked in this world.

Modern AI is based on machine learning. Generative AI systems, such as ChatGPT and image and video generators, are pattern replication engines. They are trained on vast amounts of data and created through statistical analysis. They are based on neural networks and deep learning. Large language models do not achieve their capabilities strictly through design, but rather by modeling how language is used.
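As a rough sketch of what that statistical modeling amounts to, here is a minimal, toy version of the next-token-prediction objective that language model training optimizes. The sizes and data below are made up for illustration; this is not any real model or training recipe.

# Minimal sketch of next-token prediction (toy vocabulary, random data).
import torch
import torch.nn as nn

vocab_size, embed_dim, seq_len = 100, 32, 16
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # map token ids to vectors
    nn.Linear(embed_dim, vocab_size),      # score every possible next token
)

tokens = torch.randint(0, vocab_size, (1, seq_len))
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t+1 from token t
logits = model(inputs)                            # (1, seq_len-1, vocab_size)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))

# Training is nothing more than minimizing this loss over vast amounts of text.
# There is no goal, plan or understanding anywhere in the pipeline.
print(loss.item())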

Deep learning has always been a very niche field of computer science. Generative AI systems like ChatGPT are NLP (natural language processing) systems built on LLMs (large language models). Prior to 2022, language modeling was a very specialized area of research, known to very few. The mechanics of how NLP and deep learning work in natural language systems is a specialized area of study and practice, very distinct from other areas of computer science, and with few specialists interested until very recently.

At present, the number of people who have a deep technical understanding of LLMs is likely only a few thousand in the world. Since the introduction of generative AI, there has been a huge amount of interest, and the number of individuals who have become experts in the technology keeps growing, but the total remains small, especially in light of the number of self-proclaimed experts.

To be fair, the theory and technology of deep learning and natural language processing are not simple. AI systems do not think the way a human does, and understanding what is going on under the hood requires building new mental models of how decisions are made and how logic can flow. It’s counter-intuitive. To fully understand deep learning, it is necessary to have some grounding in both math and computer science.

It’s not easy to gain the knowledge either. The technology and the excitement have outpaced the ability of universities to build courses. Only a couple of universities offer fully built and mature programs in deep learning and NLP, and even those have limited capacity. Most of the experts in NLP, LLMs and deep learning had to work on the development of the technology itself to gain an understanding of it. There are few technical classes available for continuing education and reskilling.

This has resulted in a technology that is used by billions and truly only understood by a tiny number of people. Importantly, most of those who have a deep technical understanding of generative AI are working in technical fields. Few have had the chance to rise to the level of administration or leadership. As a result, the fields of AI safety, AI alignment and AI policy are populated by people with no understanding of the technology or its limits and capabilities.

This absolutely includes the leadership of these companies. Rarely, if ever, is public comment made by a true technical expert. Most AI CEOs know a great deal more about venture capital than they do about the technology itself. Most AI experts are busy in the development lab.

It’s Legitimately Not Easy and That Seems to Stop Most People:

Becoming an expert in the technology of AI is by no means easy, but it’s also far from impossible. For me, it took about two years of intense study of academic-level material on machine learning and natural language processing. The technology is complicated, and it’s entirely unfamiliar to those who do not have any background in computer science or information theory.

For those without a technical background, it may be an unreasonable ask to expect that they would understand much about the underlying technology. However, this rarely stops people from commenting on it. The technology came onto the world fast and many want to be part of it, but putting in the effort to truly understand it is unappealing.

The thing that makes AI technologies different from other areas is that they are so poorly understood and framed in society that it’s all but impossible for anyone to make much of an informed comment. As a counterexample, a person does not need to understand how an internal combustion engine works to understand what cars do, what cars are good for and what the social implications of cars are.

AI has not reached this level of common-sense understanding. Without some kind of background in the theory, the average person has no ability to fully understand what the capabilities and limitations of AI are, where it is likely to go, the challenges in development and deployment, and why it is good for some things and not others.

It is extremely hard for those who are not grounded in math, computer science and information theory to fully understand the technology. Yet at this point, technical expertise is critical for understanding the risks and the potential, and for those who lack it, it isn’t obvious that much of the speculation is baseless.

Then again, you could also call it being lazy, which is exactly what I call it.

But Many Feel Confident to Comment on Artificial Intelligence:

One of the biggest problems in the discourse over AI is the lack of legitimate expertise and the huge number of self-proclaimed experts who have become influencers in the field. This technology seems to be especially prone to attracting people who don’t understand it, but feel very confident in their ability to predict its path forward and impacts on society.

There are a few things about artificial intelligence that have made it an especially potent area for self-proclaimed expertise. There has long been a problem of social media influencers, TED talkers, pop book authors and other commentators giving shallow hot takes on things they don’t really understand. This just happens to be the worst case in recent history, partially due to the speed of growth and excitement in the sector and partially because of the lack of legitimate expertise.

There is another side to this. Not everyone who has been commenting on the subject is doing it entirely for personal gain.

While AI technology is complex and cutting edge, it is unique in how approachable it seems at first glance. Large language models are not truly intelligent, but they often seem surprisingly aware. You interact with the technology by talking to it. Many of the capabilities seem obvious. To many, the basic impression is “it can do stuff like a person can.”

This, combined with alarmism and a general feeling that the technology is revolutionary, has led many to make sweeping assumptions, offer comments and even give lectures on what they think of the technology. Individuals like the late Henry Kissinger were asked to comment on it, despite being 100 years old.

This might, at first, seem to make sense. If the technology is so profound in its implications, then shouldn’t we ask military strategists, ethical philosophers, social scientists, historians, economists and others to weigh in? It’s not that those are not important aspects of the discussion, but the individuals giving lectures and interviews are often so ignorant of the subject matter that they make basic errors in understanding.

There have been lectures and events where historians (who will go unnamed) speak of how big and revolutionary AI is and then make sweeping statements filled with errors and misunderstandings. These severe misunderstandings are likely to continue, because there is simply no appreciation of the fact that if you have no clue how the system works, you really can’t speculate about what the economic or social implications are.

The problem is that policymakers, journalists and executives tend to be the least well informed. They also tend to dismiss the actual experts as “too young” or “the nuts and bolts people,” or to declare that they are “not interested in the technical side.”

But again, if you know nothing about the capabilities and limitations of the tech and only talk to others who have a 100,000-foot view, you will get the wrong 100,000-foot view. It’s very unsettling to see a round-table discussion of self-proclaimed experts engaging in baseless speculation about things they don’t understand.

Non-Experts Riding Coattails:

The field has become rife with “influencers” and “thought leaders” who have no idea what they are talking about. A lot of people think they are very important as commentators on this topic.

With AI being the biggest technology story in recent years, there has been an effort to cash in on the gold rush from fields that believe they have a legitimate reason to insert themselves into the discussion. This is where you get the “TED Talk crowd” trying to cash in, and it’s where a lot of the over-the-top rhetoric comes from.

A perfect example of this is how many philosophers think they are very important and must be included in the discussion or must lead it. The level of academic hubris here is really pretty grotesque. This is not to say that philosophy does not have a place in the discourse, but uninformed philosophy certainly does not.

For philosophers of mind, AI seems like catnip. After all, philosophy of mind is not an especially useful field. It sits around asking questions about what consciousness is and never answering them, and it has largely been displaced by neuroscience. But for those in the field, it seems like AI poses exactly the questions they feel they need to answer. Major universities, such as NYU, have centers where philosophers sit around pondering what it means if a machine is conscious. (They’re not conscious. They can’t be.)

Of course, these organizations and individuals never bother to check whether it is even a technically realistic question. Doing so would call into question the legitimacy of the career they’re trying to build off of this. There is even an organization researching “model welfare” and “AI rights.” It is exactly what it sounds like: the idea that AI models might themselves be living beings and moral patients. It’s absurd.

Unfortunately, you see more and more of this in AI. Economists, commentators, philosophers and historians are all throwing their hats into the ring. There is nothing inherently wrong with that, but few are grounded in the technical reality.

This has been seen to a huge degree in the doom sector. A large number of organizations have been founded that claim to be “AI Safety” or “AI research” in nature, but have zero technical expertise and conduct no research whatsoever. Some may be motivated by a legitimate belief in the cult of doom, but others are clearly just making money off this.

There is really no easier way to make money than founding a fake charity and doing interviews about fake things that you claim to be an expert in. There are even a number of outfits selling themselves as “experts in AI risk” and making money doing it.

Public Figures Have Taken the Bait:

One of the most unsettling things about AI doom is seeing serious, respectable public figures respond to it as if it were credible. Granted, none of them are experts, but the constant nonsense has been extremely difficult for some to see past. Even those who have a history of advocacy for science and reason have fallen for this.

An especially painful example is Stephen Fry, who has been talked into lending his voice to the idea that AI systems might turn against humanity and are an existential risk. It’s not likely that people like Fry have any dishonest motives here, but that makes it all the more difficult.

The AI Doom movement seems almost logical at some level, and it is receiving the credibility of major organizations and institutions. It becomes all the more self-perpetuating when major figures are convinced that there is credibility to it.

Social Proof and Reward in the AI Sector:

One of the most bizarre things that has been seen, and has advanced the idea, is the fact that many technically connected and high-profile individuals have bought into it to some extent or another. Others have at least endorsed the plausibility of the idea. Still others have not gone so far as to claim that AI will drive humanity to extinction, but have furthered the trope of massive negative consequences such as mass unemployment.

This seems very strange on the surface. Why would the executives of a company making AI products take such a position? Why would Sam Altman, the CEO of OpenAI, even mention the idea of human extinction in association with AI technology?

This is one of the hardest things to fully explain. The first thing to keep in mind is that the executives of these companies are not deep technical experts. Most came from more of a venture capital background. So while they may be profiting enormously from selling the technology, their understanding of what the capabilities and risks are is often very elementary.

The problem is that the field is very young, and many of the top executives are new to the role and not mature when it comes to the optics of the job. They have been rocketed to fame and fortune, and that itself has psychological impacts. The biggest factor is that it makes for great virtue signaling. There is a lot of social reinforcement, and the optics of being the “responsible person telling the truth” create a strange hero dynamic. Moreover, there is a strong appeal to saying you are in charge of the most powerful technology ever made.

There are a few things going on here, and one that has been floated is the idea of regulatory capture and keeping others out of the marketplace. That may be part of it, but it appears to be primarily signaling. Even within the industry, there is a tendency to try to seem very responsible and ethical by considering all risks, even the fictional ones.

There is also the fact that many who are not AI experts have been drawn in by the spectacle. A great example was the “pause AI” letter from “experts.” It’s an idea that seems very sensible and adult: isn’t this tech just moving too fast? Perhaps we should slow down? Well, technology does not work that way, and no, there is no major hazard to civilization here.

However, it’s easy to see how this quickly became a setup. People like Bill Gates or Steve Wozniak are not experts in AI, but they’re known as responsible tech leaders, so of course they sign a letter and lend their voices to those “urging caution.”

This has only furthered the illusion that there is something here worth considering.

The “Experts Are Concerned” Trope is Untrue and Insulting:

One of the most insulting and untrue tropes that the media likes to put out there is the idea that AI experts are concerned. They have run stories about experts resigning in droves, about developers trying to stop doom and about rampant concern in the sector. You will sometimes hear claims like “the experts developing the technology are the most concerned.” This is absolutely not true, and it’s insulting.

Most serious experts in the field have little patience for such nonsense. Having one’s life’s work and expertise reduced to something so stupid is hard for a lot of people to stomach. It is an especially thankless thing for an ML engineer to try to debate someone making fundamental misstatements about the technology. Any expert who has been in a similar situation understands how exhausting it is.

In general, most of the best technical experts are busy developing the technology while they watch their superiors and various other celebrity idiots bloviate about something they know nothing at all about. The problem here is that the only commentators, the only people out there being quoted and talking to interviewers, are grifters. True experts don’t get invited to conferences the way some social media influencer does, regardless of how ignorant that influencer may be.

This is absolutely not a problem isolated to AI, but it’s especially bad in this area. The only voices being heard are the non-experts.

The Notable Exceptions:

One of the biggest problems in refuting AI doom beliefs is the fact that some prominent figures, who are legitimate experts in the field of machine learning, have actually thrown their support behind it. This has drastically increased the potency of the claims. It can be very hard to refute sensational claims when apparent figures of authority are involved.

The saddest and most notable, of course, is Geoffrey Hinton, who has become one of the leading public voices of doom despite being an expert in the field. Hinton won the Nobel Prize for his foundational work on neural networks, including his work on backpropagation, one of the most important concepts in machine learning.

What happened with Hinton is especially sad when you know the full story. Geoffrey Hinton was a giant in the field of machine learning for decades. He was well respected and known in the field, but the field itself was not large or high profile. For most of his life, he helped build the principles that make machine learning work.

However, by the time generative AI systems such as large language models came onto the scene and captured the attention of the public and media, Hinton had retired from most active research. He was working at Google, primarily in a mentor and advisor role, with little hands-on contribution to the technology being developed. Then, suddenly, it was the hottest story of the year and receiving billions of dollars in investment. Hinton saw the field he helped build being taken over by a younger generation with different ideas.

In high achievers, late-stage career drift, often motivated by a desire to maintain relevance or a fear of being left out, is a common phenomenon, but Hinton’s case is especially bad. The same behavior has been seen in others such as Linus Pauling and Curtis LeMay. It’s sometimes called senior (or senile) megalomania. That might be a bit harsh of a term, but what is happening here is quite obvious.

For Hinton, being in his late 70s, seeing the field taken over by newer engineers and seeing the academic field he helped build move away from him and toward commercialism caused a deep identity crisis. Importantly, Hinton never brought up his concerns at Google, and nobody else at the company ever seemed to share them. He instead left and began doing the lecture and interview circuit.

Hinton has lent a lot of credibility to these arguments. For many, he seems unimpeachable. He’s been called the “Godfather of AI” and he has won numerous accolades, including the Nobel Prize. His rhetoric has gotten increasingly unhinged and detached from reality. Hinton has recently begun saying AI is probably conscious. It’s very sad to see someone degenerate to this level.

Yoshua Bengio is another individual who seems to have gone off the deep end. He has a long history of working in machine learning in academic settings, but seems to have also suffered an identity crisis. It’s not the first time an academic has felt blindsided by a field becoming commercially centered after years of research.

It’s important to keep in mind that this has not been a mass phenomenon. The overwhelming majority of experts have not gone off the deep end, but it is unsettling to see some having such deep personal problems. It is also very harmful to the ability to discuss AI risks accurately and dispel these vicious myths.

Poor Reporting on Real Events

There have been a number of reports of seemingly frightening behavior by large language models in edge cases and under red-team tests. The most notorious of these incidents was the “Claude blackmail” study, in which Anthropic put chatbots into a scenario where they were threatened with shutdown and seemed to resort to blackmail tactics to persuade a fictional manager not to shut them down. There have been other similar behaviors, where chatbots were given options to kill someone, to report wrongdoing or to otherwise act out in ways that would not be desired.

The result is that this has contributed to the narrative that large language models are evolving self-preservation desires or engaging in complex planned behavior with intent. To those not familiar with the way the technology works, it absolutely can look like that. It looks very unsettling. It can easily contribute to the idea that LLMs are plotting behind the scenes, that they may do things of their own accord or that they are evolving in ways we don’t understand.

That’s not what is happening here. It’s really a type of behavior that is entirely expected and does not represent any kind of emergent capability.

Models don’t have intent, they can’t plan, and they only respond in real time to the environment they are placed in. What is happening is that, when presented with few valid options and heavy constraints, the LLM resorts to recreating a narrative. LLMs are especially good at creating sensible and consistent narratives, and that is what happens in scenarios designed to elicit such a response. In most cases, these strange behaviors occurred only in such extreme setups.

The thing to keep in mind is that these are not brilliant or unstoppable behaviors. They would not even have worked in most cases, since shutting down the bot would stop it from blackmailing anyone. It is an interesting demonstration of something very real, however: LLMs can react in strange and unexpected ways in edge cases. On rare occasions they may reproduce strange narratives or logical patterns. It’s necessary to keep this in mind when deploying the technology.

The problem, therefore, is not that the technology is growing a mind or self-determination. It’s simply an unusual technical failure under stressed conditions. This is something to be expected, and something that can be architected around.

The “AI Safety” Movement is a Joke:

With AI becoming a bigger and bigger part of society and the economy, it seems obvious that there should be concern for risk, safety, security, bias and malfunction of AI. Unfortunately, most of the world that labels itself “AI Safety” or “AI Alignment” is rife with pseudoscience and claims that are not at all based on reality. It should be noted that few of those in the field of AI safety truly understand how the technology they are trying to manage actually works. This is an area that has very few technical experts at all.

Because of this, there is very little serious work being done on issues of AI malfunctions and real AI risks. Most of the oxygen in the room has been taken by AI “researchers” who are married to ideas of AI becoming sentient, gaining superpowers, acting with malice, or other completely unfounded and highly speculative ideas.

The related field of AI alignment is founded on entirely false assumptions. At its core, it is supposed to ensure that the AI is “aligned” with human values and goals and that it acts morally and responsibly. The problem here is that you can’t teach an AI model values; you can only craft its outputs to conform to the rules you consider reasonable within those values. You can’t give an AI desires or force it to be empathetic. It’s a machine running a mathematical model.
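As a minimal, purely illustrative sketch of that distinction, consider the toy example below. The rule list and function are hypothetical, not any real lab’s alignment method, and in practice output shaping happens both during training (e.g. preference tuning) and through filters around the model. The point is that the “values” live in externally chosen rules, not inside the model.

# Hypothetical output rules; the model itself holds no values or desires.
BLOCKED_TOPICS = ["example_disallowed_topic"]

def apply_output_rules(model_output: str) -> str:
    """Post-hoc check applied to whatever text the model produced."""
    if any(topic in model_output.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_output

print(apply_output_rules("Here is a normal answer."))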

Again and again, in this field we see the same contrived myths come up, despite being completely fictional and not based on any real expert analysis or real world data.

There are, of course, real risks and they should be explored and mitigated. However, in order to understand how the technology might malfunction, what its next developments are likely to be and what reasonable expectations are, you really do need a foundation in the technology. Those who do not almost always end up overestimating capabilities, underestimating limits, and above all else, anthropomorphizing the technology and assuming it is thinking about things in a human-like way.

Common Unfounded Rhetoric:

We have seen a number of statements made in the area of AI alignment and AI safety which have become so common that they are now repeated almost as if they were fact. To those who do not have any grounding in the technology and do not understand its fundamental limits, architecture and development cycle, many of these statements seem to make sense.

Alignment and Safety Research is Very Urgent – This takes a real issue, risk management in AI systems, and puts it entirely in the wrong context. While it is important that risk management be part of deployment, it is not an existential threat to humankind. The idea that “soon AI systems will run the world, so we had better make sure they do not turn evil” is pure fantasy. It is neither civilization-critical nor some kind of unsolved problem.

Alignment is an “Unsolved Problem” – Again, the entire basis of alignment is often saturated with pseudoscience, but the idea that this is somehow uncontrollable or that there are deep unsolved problems is simply not true. It’s often presented as some kind of urgent problem that needs solving.

Infinite and Exponential Improvement – The idea of “recursive self-improvement” to the point where AI systems simply improve exponentially until they achieve near perfection seems appealing. It appears to make sense that technology improves. It also seems to make sense that “if AI is smart, can’t it just build the next generation of AI?” But this is, at best, overly simplistic. Neural scaling laws really only apply to certain mediums and do not allow for achieving things that are beyond the medium. Scaling does not solve fundamental problems. Moreover, the process of building and testing models is complex and involves a lot of trial and error and management. Models are nowhere near the point of fully managing a model creation pipeline.

“Misaligned” AI is Dangerous – Not any more so than any other consumer product failure. Yes, AI that misbehaves can cause problems. Those problems are proportional to what the AI is being used for. “Misalignment” simply means that the AI does not work as intended or does something unwanted.

AI Has Goals – The idea of “instrumental” goals and sub-goals is part of the foundation of the entire premise of AI doom. It’s not something that can happen in current or foreseeable systems, because these systems are designed only to predict patterns, not to accomplish their own self-derived goals.

AI Might Seek Power – Again, this has never been seen in practice. The idea is that AI systems might realize they can game the system to gain power and this might seem somehow desirable to the system. Why? Perhaps to accomplish a goal or something.

“We don’t know how to control it” – Again, this somehow seems logical: “if it is smarter than humans, then how do we control it?” This is just a category error. AI systems do not have any cognitive autonomy; they are task-bound, static and in no way, shape or form similar to human intelligence.

“Instrumental Convergence” is a thing – This is a completely invented idea that an AI system would want to take all the power in the world and control all resources because “any goal is best accomplished with absolute power and control” or something like that. It sounds almost profound, but of course, it’s entirely contrived and the concept itself really only works within certain constraints.

We are on the verge of AGI or Superintelligence – This is used to reinforce the tone of urgency in these statements. We’re nowhere near anything that could reasonably be called human-like capability in all domains. The very notion that the technology is on the verge of superintelligence is silly.

AI Might Be Hiding Something or Deceiving Us – This is not possible in the human sense, because AI systems do not have any internal understanding of what is true and false. It’s true that an AI system might act in a way that increases human satisfaction, and this could lead to behavior aligned toward pleasing rather than truth, but this is controlled for during training. AI does not have intention and cannot make the choice to lie.

AI Models are “Black Boxes” and We Don’t Know How They Work – This statement is common, and it’s at best overly simplistic. While modern AI models are too complex to completely trace out all circuits and features, we know the first principles of how they work. For example, LLMs compress semantics into a vector space and use matrix multiplication to transform features and concepts in a parameter space (a toy illustration follows this list). Most of what is going on is well understood.

We Need to Be Careful or Consciousness Might Emerge – This is another error based on a misunderstanding of the workings of AI. It is not conceivable that we are close to sentient or conscious machines. It is true that they simulate human-like capabilities with greater and greater fidelity, but AI models are absolutely not conscious and don’t have feelings.

AI Is Gaining Self-Preservation Goals or Other Motivations – This comes down, primarily, to the fact that models can be and have been pushed into narrative creation by adversarial conditions. The most commonly cited example is the Claude blackmail study, which, when framed a certain way, sounds extremely alarming to those without a full understanding of how the models work.

General Excessive Anthropomorphization – Discussion of what AI “wants,” or use of terms like “digital minds” or “digital beings” that immediately presume intentions, goal-formation or extreme capabilities, is a natural result of the fact that most people do not have a mental model to explain how a technology can speak and interact in a way that is superficially very human. Without any technical grounding, the tendency to slip back into anthropomorphization is only natural.
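On the “black boxes” point above, here is the promised toy illustration (random numbers, not a real model) of those first principles: a model’s representations are just vectors of numbers in a space, and a layer transforms them by matrix multiplication with learned weights.

# Toy illustration: token "meanings" as vectors, transformed by a learned weight matrix.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 8))   # three toy token vectors in an 8-dimensional space
weights = rng.normal(size=(8, 8))      # one learned transformation (a weight matrix)

transformed = embeddings @ weights     # matrix multiplication moves the vectors around the space
print(transformed.shape)               # (3, 8): same tokens, new feature representation

The difficulty researchers describe is in interpreting what millions of such learned directions mean, not in any mystery about the mechanics themselves.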

Pseudoscience Invading Reasonable Discourse:

One of the most annoying aspects of the entire doom movement is its invasion into reasonable discourse and discussion of AI risk, policy, governance and quality control, in places it should not be. Organizations like Google DeepMind have given lectures that lend credence to the idea of superintelligence as a likely result of current technology development. AI developers are constantly finding doom-adjacent beliefs showing up in governance and risk discussion.

Some extremely stupid conclusions have been brought up here. For example, the idea that “AI is getting better at all kinds of things and therefore will likely exceed humans at all things.” It’s shocking to see such poorly formed ideas being circulated in frontier labs. Part of this is likely the intellectual desert involved: few of the people in these roles understand how models work, and most are more aligned with philosophy, ethics or just commentary.

It’s becoming difficult to know who to legitimately believe. The company Anthropic may be one of the worst affected. Because few of its oversight and policy roles are filled by people who understand the technology, it has started to embrace some nutty ideas about AI risks, although other labs have not been immune.

The problem really is that it’s become hard to find reasonable reality-based discussion on AI policy, AI malfunctions and AI risks. The blurring of the line between legitimate AI science and pseudoscience has gotten extremely bad.

It Causes Real and Tangible Harm

The lies, opportunism and overall grifting of AI doomers are condemnable in and of themselves, but when one steps back, it is undeniable how deeply dishonorable this movement is. It really takes a certain level of depravity and lack of ethics to go out there and try to promote yourself and make money by scaring old ladies into believing robots will kill their grandchildren. This is not an academic matter. People have been very frightened, and lives have been upended by these harmful lies.

But beyond that lies the real problem and the extreme danger caused by the nonsense of AI doomerism. AI is an extremely transformative and powerful family of technologies, which is changing how the world works. The rollout of AI technology is not risk free, even if the risks are not existential to the human race.

The risks of AI are wide and varied. On an individual level, the technology can malfunction and produce errors. It can provide convincing but untrue information. AI models may not always follow instructions properly and can be the subject of hacking and security incidents.

Beyond the individual risks, there are risks that society needs to consider. AI may create social isolation or may make work less fulfilling. There has been much talk about job disruptions, which could be severe in some sectors. There are concerns about deep fakes, privacy and security. There are also concerns about AI concentrating power or wealth in the hands of those who can afford it.

These concerns are real. Adopting AI in a manner that is aware of the downsides, properly regulates use and controls risk is critical. It can’t be overlooked, but it absolutely is being overlooked. With so much focus and noise about AI taking over the world, there has been almost no effort placed on the real risk and regulatory side of AI, and that’s a big problem.

AI is serious and not a joke. It’s important to understand its failure modes and potential social risks, but this level of nonsense keeps getting in the way. It should not be tolerated.

The AI 2027 Effect: Stupidity Sells

AI 2027 is an especially potent example of just how bad the effect of media amplification and narrative capture has gotten. AI 2027 is a piece written by a newly minted “nonprofit” which is run by self-proclaimed AI safety experts and claims to be a rigorous scientific think tank.

The “report” AI 2027 was an entirely speculative piece of science fiction, not at all based on any real projection of where AI technology is or might end up. It put forward the idea that large language models might scale infinitely with compute (they can’t) and advanced the idea of recursive self-improvement under international pressure as a realistic path to a dangerous superintelligent entity.

The entire premise and its support are poorly written and based entirely on wild and unrealistic speculation. It waves past the fact that scaling does not work like that, the limits of ML architecture, the problems with updating models, the limits on improvement and the realities of the model creation pipeline. It’s not serious at all. It’s extremely poor pseudoscience. It’s actually almost a joke. It claims that within the next two years, humanity will somehow have moved from clumsy token predictors to god-like superintelligence.

God like. God like super intelligence. Stop and think about that for a second.

Despite being poorly written fiction, it is presented as some kind of science-based projection of what things are now and where they are likely to go. It’s supposed to be rigorous, and it sure tells us it is, with lots of charts and science-sounding words.

Despite the obvious absurdity of these claims, this has resulted in a lot of press and a great deal of elevation of the status of the “study” authors. They have been invited onto otherwise reputable press outlets, podcasts and YouTube channels. They are even being taken seriously by the New York Times and other adult media outlets.

It’s very easy to see why this area has been growing like wildfire. If you can call yourself a “whistleblower” or “hero” and write an incoherent story of doom, you will be rewarded with a career in which you don’t actually have to work, and you’ll be treated like some kind of visionary for saying machines might turn on people.
