By Michael Borella

The current cultural moment around artificial intelligence has the hallmarks of a moral panic.  The doomsayers warn of job-annihilating robots, misinformation that will corrode democracy, and energy-hungry data centers boiling the oceans one large language model (LLM) query at a time.  The accelerationists, meanwhile, promise us that superintelligent systems will cure cancer, solve climate change, and perhaps even make getting through airport security slightly more tolerable.

Neither camp is asking the important questions.  If and when they do, a different picture will emerge.  AI is a genuinely impressive technology.  It is also, at its core, a sophisticated autocomplete engine with the confidence of a tenured professor and the reliability of a freshman chugging their third energy drink on the way to pulling an all-nighter.

The AI apocalypse is almost certainly not coming.  Current AI systems are disruptive and powerful, but the very architecture that provides their remarkable capabilities also imposes ceilings on what they can reliably accomplish.  Add to that the likelihood of large-scale human pushback, and the more plausible forecast is a future of incremental productivity gains, persistent human oversight, and an extended regulatory tug-of-war.  Civilization will survive, though it will look different in the coming decades.

The Risks Are Real

It is worth briefly acknowledging that some of the concerns about the proliferation of AI are legitimate and deserve more than eye-rolling dismissal.

AI-generated misinformation is a major problem.  LLMs produce plausible-seeming text with complete indifference to whether that text is true.  Deepfakes and AI-generated content have already been deployed in elections, in financial fraud schemes, and in court filings containing fabricated legal citations.  The ability to manufacture convincing falsehoods at scale, and at negligible cost to the end user, is a genuine challenge that society is only beginning to reckon with.

The job displacement concern is also not without merit, though its contours are poorly understood.  History suggests that new technologies create new categories of work even as they destroy old ones.  The industrial revolution did not end human labor, but it did change the nature of work.  AI is likely to do the same, but this is insufficient reassurance to the programmer, radiologist, or paralegal currently updating their resumes.

Then there is energy.  Training frontier AI models consumes electricity at a scale that would make a small nation blush.  The buildout of data center capacity to service AI is straining power grids across the U.S.  Whether the societal return on that energy investment justifies the cost is an open question.

These are real issues that deserve serious policy attention.  What they do not justify is the leap to existential catastrophizing.  The notion that we are sleepwalking toward a world controlled by, or destroyed by, machine intelligence is considerably less supported by the evidence than its proponents would have you believe.

Current AI Models Are Fundamentally Limited

To understand why AI will not take over the world, it helps to have a clear-eyed view of what AI actually is and, more importantly, what it is not.

An LLM is a system trained to predict the next token in a sequence.  It does this extraordinarily well.  Given enough neural network parameters and training data, an LLM can produce text that is virtually indistinguishable from human writing, engage in multi-step reasoning, write functional software, and summarize complex documents.  These are genuinely useful capabilities.  They are not, however, capabilities that arise from anything resembling human understanding.

The probabilistic nature of LLM output is not a minor implementation detail.  It is the core architectural feature that allows an LLM to function.  An LLM does not retrieve verified facts from a database.  Instead, it generates statistically likely responses to prompts.  This is why hallucination (the generation of false information) is not a bug; it is baked into the fundamental design.  To produce an essay, an LLM makes hundreds of educated guesses.  Sometimes the guesses are very good, and sometimes they are embarrassingly wrong.  The LLM cannot tell the two apart.
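To make the point concrete, consider a toy sketch in Python.  The tokens and probabilities below are invented for illustration; a real model's distribution spans tens of thousands of candidate tokens:

```python
import random

# Toy illustration only: generation as repeated sampling from a probability
# distribution over candidate next tokens.  The tokens and probabilities
# below are invented for demonstration; they come from no real model.
next_token_probs = {
    "Paris": 0.90,   # statistically likely, and happens to be true
    "Lyon": 0.07,    # plausible but wrong
    "Berlin": 0.03,  # confidently wrong
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token in proportion to its probability, true or not."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "The capital of France is ..." -- the sampler commits to a likely-looking
# token with no mechanism for checking it against verified facts.
print(sample_next_token(next_token_probs))
```

Run the sampler enough times and the wrong answers surface at a predictable rate; neither the sampler nor the model has any way of flagging them.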

Efforts to mitigate hallucination are an active area of research.  Retrieval-augmented generation (RAG), constitutional AI, chain-of-thought reasoning, and various forms of reinforcement learning from human feedback (RLHF) can make LLMs more truthful.  But hallucination has not been solved, and the research does not yet offer a credible path to solving it.
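To give a flavor of one such mitigation, here is a minimal sketch of the RAG pattern in Python.  The keyword-matching retrieval and the llm() stub are hypothetical placeholders, not any vendor's API; production systems use embedding models, vector databases, and a real model endpoint:

```python
# Minimal sketch of retrieval-augmented generation (RAG).  The naive keyword
# retrieval and the llm() stub below are hypothetical stand-ins.
def retrieve(question: str, documents: list[str]) -> list[str]:
    """Naive retrieval: keep documents sharing a word with the question."""
    words = set(question.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an API request)."""
    return f"[model response grounded in {len(prompt)} chars of context]"

def answer_with_rag(question: str, documents: list[str]) -> str:
    # Grounding the prompt in retrieved text nudges the model toward
    # paraphrasing sources rather than guessing from its training
    # distribution alone.  It mitigates hallucination; it does not
    # eliminate it, because generation is still probabilistic.
    context = "\n".join(retrieve(question, documents)) or "(no relevant documents)"
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```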

Another structural limitation is regression to the mean.  LLMs are trained on the aggregated textual output of human civilization.  Thus, they are fundamentally designed to produce the “average” of this training data.  The result is writing that is competent, inoffensive, and often uncanny in its blandness.  It will not have the insight of a great scientist, engineer, or attorney.  For many tasks, that is more than adequate.  But it is not enough for tasks that benefit from originality or rhetorical flair.

A further architectural constraint receives surprisingly little attention in popular discourse.  AI systems require training data sets of almost incomprehensible scale (as noted, the digitized textual output of human civilization).  That supply is not infinite and, by most credible estimates, is approaching exhaustion.  Synthetic training data (text generated by AI systems and fed back into their training pipelines) may not be a viable workaround in many scenarios.  Models trained substantially on the output of other models risk amplifying existing errors, narrowing the distribution of outputs, and introducing feedback loops whose long-term effects on model quality are not yet well characterized.

The default architectural path forward, scaling transformer-based models with more parameters, more data, and more compute, appears to be yielding diminishing returns.  The improvements from GPT-2 to GPT-3 to GPT-4 were dramatic.  The improvements at the frontier today, while real, are considerably less so.  Meaningful advances in AI capability are increasingly dependent on non-AI tools and significant human engineering.  The current transformer-based paradigm is not a dead end in the sense that it will cease to improve.  But it may be a dead end in that incremental improvement along the current trajectory will not produce the qualitative leap to general intelligence that the doomsday scenarios assume.

As of 2026, the AI community finds itself in an unresolved argument about what comes next.  Proponents of so-called world models, most prominently researcher Yann LeCun, contend that for AI to move beyond sophisticated text prediction it must develop an internal representation of physical reality – such as object permanence, three-dimensional space, and cause and effect.  In theory, a system that can simulate or at least understand the consequences of its actions before taking them would address some of the problems that make current LLMs unsuitable for autonomous operation in the physical world.  But building such world models is a different and substantially more challenging undertaking than developing LLMs, requiring vast quantities of video and sensor data, as well as levels of compute that make frontier LLM training look modest by comparison.  Whether world models represent the next iteration of AI or an enormously expensive detour remains an open question.

Mission-Critical Tasks Require Humans

The probabilistic nature of generative AI does not merely limit its intellectual output.  It imposes a ceiling on its deployment in high-stakes environments because the most important software-driven systems are intolerant of the error rates that LLMs inherently produce.

Consider the following.  You are asked to use an AI-controlled robotic system to manage the construction of a nuclear power plant from design to commissioning.  The system will route electrical conduit, precisely specify pressure relief valves, calibrate reactor control systems, and manage thousands of interdependent engineering decisions.  Even if the genAI system hallucinates only two percent of the time, such an error rate is not just a minor inconvenience when tens of thousands of decisions are being made.  It is, literally, catastrophic.
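The arithmetic is straightforward to check.  Here is a back-of-the-envelope sketch; the two percent figure is the hypothetical above, not a measured rate, and it assumes independent failures (real failure modes tend to correlate, which can make matters worse):

```python
# Back-of-the-envelope check, assuming each decision fails independently
# at the hypothetical 2% rate used above.
error_rate = 0.02
for n in (100, 1_000, 10_000):
    p_no_errors = (1 - error_rate) ** n
    print(f"{n:>6} decisions: expected errors = {error_rate * n:>5.0f}, "
          f"P(zero errors) = {p_no_errors:.1e}")
```

At ten thousand decisions, the expected number of mistakes is around two hundred, and the probability of a flawless run is vanishingly small.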

The nuclear plant example is deliberately melodramatic.  But the underlying principle applies across a surprisingly wide range of domains, including pharmaceutical manufacturing, surgical robotics, air traffic control, and financial settlement systems.  These are environments in which a confident wrong answer is worse than no answer at all.  The tolerance for AI error in mission-critical contexts is, and will remain, essentially zero.  This means that even as AI becomes more capable, the most consequential applications will still require humans in the loop.

The economic implications of this constraint are underappreciated.  A great deal of the hype around AI productivity gains rests on the assumption that AI can substitute for human judgment in complex workflows.  The reality, so far, is that AI tools require substantial human oversight to deliver reliable output.  Engineers need to validate AI-generated code.  Doctors need to check AI-generated diagnoses.  Whether this human-as-quality-assurance arrangement is more productive than the pre-AI workflow depends heavily on the task.  For many tasks, we just do not know.

It is also worth noting that the most apocalyptic scenarios require AI to do things in the physical world, such as building, breaking, moving, and operating physical objects, perhaps with the help of physical tools.  However, the gap between an LLM that can describe how to fix a leaky pipe and a robot that can actually fix one in an unstructured environment is enormous.  Robotic dexterity, spatial reasoning, and reliable physical manipulation remain genuinely hard problems that have resisted decades of sustained engineering effort.  This is Moravec’s Paradox: the observation that it is relatively easy to make computers perform high-level reasoning (e.g., solving math equations or playing chess), but incredibly hard to give them the motor skills of a one-year-old.  While the Terminator may walk with purpose through a rain-soaked factory, actual state-of-the-art robots struggle with stairs.

This is not an argument against AI’s utility.  AI tools are genuinely useful for automating routine tasks, accelerating information gathering and drafting, and performing pattern recognition at scales no human could match.  It is an argument against the more extravagant claim that AI will autonomously manage critical systems.  It will not, unless we are prepared to deal with the consequences of reactor meltdowns, planes falling out of the sky, and medical misdiagnoses.

The Economics Are Unproven

The economics of frontier AI development are without precedent.  Training a state-of-the-art LLM costs hundreds of millions of dollars (GPT-4 was estimated to have cost over $100 million to train).  The hardware required is produced by a handful of chip manufacturers, principally NVIDIA, whose graphics processing units (GPUs) have become the scarce resource of the AI economy, complete with supply constraints, geopolitical entanglements, and rent-extraction opportunities.  Inference (running a trained LLM to answer queries) is cheaper per transaction but enormous in aggregate and difficult to offer profitably at consumer price points.  OpenAI, the company most responsible for the current mania, has reported losses in the billions annually even as it charges for subscriptions and application programming interface (API) access.

The current AI regime is being sustained by a combination of hyperscaler subsidies and venture capital.  But investor patience has an expiration date.  When the capital markets decide they would like to see a credible path to profitability, the AI industry will face a reckoning.  Transformative technologies have a way of outlasting the economic bubbles that initially fund them.  But they also tend to emerge from those corrections smaller, more focused, and considerably less apocalyptic than their peak-hype incarnations suggested.

Humans Will Push Back

Another limitation of AI is not technical but sociological, and it may be the most underappreciated of all.

Frank Herbert’s Dune imagined a future civilization that had, after a catastrophic war with thinking machines, banned the creation of computers that could mimic the human mind.  This “Butlerian Jihad” against the machines reshaped an entire civilization.  It is tempting to dismiss this as just science fiction, but it is worth considering whether it might also be prescient.

The current trajectory of AI development is subject to a level of social friction that is routinely ignored in projections of AI’s future dominance.  We are already seeing the early signs: writers are organizing against AI scraping, and artists are watermarking their work and legally contesting its unlicensed use as training data.  Educators have responded to AI-generated student work by bringing back the oral examination, the in-class essay, and the handwritten blue book.  Courts and regulatory bodies across multiple continents are in various stages of constraining AI use.  The European Union’s AI Act imposes significant requirements on high-risk AI applications.  The U.S. has so far failed to regulate AI in a meaningful way at the federal level, but a number of states have stepped up.

None of this adds up to a Butlerian Jihad.  But the pattern is recognizable.  When a powerful technology generates sufficient anxiety and harm, societies develop mechanisms to constrain it.  Nuclear weapons development and use is subject to a complex framework of treaties, inspections, and norms designed to limit their spread.  Social media was initially hailed as a democratizing force, but more and more jurisdictions are now imposing content moderation regimes or even outright bans on its use by children.

AI will not be exempt from this dynamic.  The more visibly and consequentially it disrupts labor markets, spreads harmful content, and operates in ways that feel alien or unaccountable to ordinary people, the more vigorous the pushback.  Regulation, litigation, consumer preference, and cultural norms are all mechanisms by which human societies have historically constrained technologies that overstep their welcome.  There is no reason to believe that AI will be any different.

The most likely outcome is not a ban, but a thicket of regulations and liability regimes that channels AI development in socially acceptable directions, limits autonomous AI authority in high-stakes contexts, and keeps meaningful human control firmly in the picture.  This is not how sci-fi imagines the story ends.  It is, however, how stories of transformative technology have consistently ended throughout history.

The printing press did not destroy the Catholic Church or dissolve political authority, though it destabilized both and generated a century of violent upheaval.  Eventually it motivated the regulatory and institutional accommodations we know as copyright law, licensed publishing, and the Index Librorum Prohibitorum (possibly history’s least subtle attempt at content moderation).  Terrestrial radio and television are still subject to spectrum licensing and content restrictions.  None of these technologies were banned and none of them took over.  Each of them was, after a period of social turbulence that probably felt at the time like it might never resolve, absorbed into the fabric of society and made to operate within constraints that reflected, however imperfectly, collective human values.

The more unsettling possibility is not that society fails to constrain AI, but that some actors constrain it for everyone else.  Put another way, the immediate risk of AI is not the machines; it is the humans holding the reins.  A technology that concentrates unprecedented informational and predictive power in the hands of a small number of governments, unelected technologists, or rogue actors presents a distinctly human threat.  Whether this threat materializes as surveillance states, tech elites with the ability to manipulate markets or elections at scale, or non-state actors capable of deploying AI-enabled disinformation or cyberweapons, the common thread is not artificial intelligence run amok of its own accord.

This is, in a sense, the oldest story in history.  Those who control the most powerful tools of an age tend to use them to entrench their advantages, and the rest of society spends the subsequent decades building institutional constraints to claw that power back.  If it can.  The question then becomes not whether AI is taking over, but who is using it in an attempt to self-empower and what the rest of us can do in response.

Conclusion

The forecast for AI is neither utopian nor apocalyptic.  AI systems are useful and powerful, but limited, tools that will reshape specific industries, disrupt labor markets, create new categories of work and productivity, generate new categories of harm and liability, and topple incumbent companies, but they will ultimately operate within the constraints that human societies impose upon them.  That is the arc of history for transformative technology, and at this point there is no compelling reason to believe that AI will break it.

The machines are not taking over.  They are autocompleting with impressive sophistication and growing capability.  They are also hallucinating, failing in mission-critical contexts, and generating precisely the kind of outrage that eventually produces regulatory and cultural constraint.

The panic is understandable.  The technology is genuinely disruptive, the pace of change is genuinely fast, and the social consequences are genuinely uncertain.  But disruption is not domination, and fast-moving is not unstoppable.  The unexciting prediction that AI will be an enormously consequential tool that remains just a tool is not nearly as eschatologically satisfying as the apocalypse.  But it has the advantage of being more likely to be true.
