Thursday, 9 April 2026

Artificial Intelligence: Digital Utopia or Dystopian Nightmare (Addendum 2026)

In late 2023, when large language models had just begun to seize the public imagination, I wrote a short essay to think through where AI might be taking us. Three years on, it seems worth revisiting those original premises in light of how quickly the technology, and the debate around it, have evolved.

The broad thrust of the original article still seems right to me.

The first and most obvious change is that large language models are no longer merely text generators that sometimes say clever things and sometimes hallucinate or draw hands with too many fingers. They have become increasingly multimodal, better at coding, better at using tools, and better at handling long, structured tasks. The old chatbot has become a much more useful tool.

That does not mean the core reliability problem has disappeared. These systems still make things up, miss context and require verification, especially in technical or scientific settings. But it is no longer serious to dismiss them as glorified autocomplete. They have crossed the threshold from novelty to utility, and in some fields from utility to genuine leverage.

In 2023, I wrote that it is experts who can truly push these systems to their limits, and that remains largely true. What has changed is that the floor has risen. Non-experts can now get genuinely useful work out of these systems far more easily than they could in 2023. The moat has narrowed.

The likely disruption, then, is more subtle, and perhaps more alarming for that reason. As AI lowers the cost of first drafts and routine cognitive work, smaller teams can now do work that previously required larger ones. Less experienced workers can produce passable results for longer, which makes substitution easier in some contexts. Experts will still be needed, increasingly to validate, debug and judge, because someone still has to understand the system well enough to know when the machine is wrong. Employers can ask more from fewer people. What then happens to apprenticeships?

Even where jobs are not eliminated outright, parts of jobs are being hollowed out or repriced. The social effects can arrive long before the dramatic headline event of "mass unemployment". Fewer junior positions, weaker bargaining power, and the gradual decoupling of output from payroll can reshape society without anyone ever being able to point to a single clean moment when the machine took over.

On creativity, AI really is lowering barriers to entry in writing, illustration, music and design. That is liberating for many people. But we are also getting much more sludge. Production is cheaper. Attention is scattered. Distinctive style matters more.

In 2023 it was already obvious that creators would object to having their work vacuumed into training data without consent or compensation. Since then, the European Union's AI Act has begun to apply in stages, and the copyright and licensing disputes around training data have moved into the legal domain. The conversation continues. Napster changed the music industry. This too shall be settled.

Schools and universities can no longer pretend that students will not use AI or that the old assessment structure can be defended by raised eyebrows. If a machine can routinely solve standard exercises, draft essays and explain intermediate steps, then education has to lean more heavily into interpretation, judgment and oral defence. In other words, it must place more value on the distinctly human parts of thought. Frankly, it should have been moving in that direction anyway.

I now regularly ask my students to present their work orally and we discuss both the problems and their different approaches. I tell them to think about the problem themselves first and try to solve it with their group. If they want to use AI, I tell them to use it creatively: ask for hints when stuck, explore unconventional approaches, and clarify confusing concepts. They still need to understand the problem well when they present their solutions in front of the class.

I ended the 2023 essay saying that some sort of universal basic income (UBI) would be unavoidable, and I still think that. A full UBI remains politically difficult and fiscally contentious. But a broader search has begun for ways to decouple a meaningful part of economic security from traditional wage labour. Whether that becomes UBI or something similar is still unclear.

That, to me, is the key point. It is not that AI will abolish all work, but that it can reorganise labour markets and income distribution quickly enough that the old social settlement begins to crack. If productivity rises while the gains accrue mainly to those who own the models, chips, data and capital, then societies will eventually face a blunt question: how exactly are ordinary citizens supposed to share in prosperity?

The discussion has already started. It revolves around guaranteed income, public wealth funds, and other ways of giving people a firmer economic floor in a world where wages may no longer distribute prosperity reliably enough.

We are still, I think, a long way from the end of this story. But we are no longer in the opening scene either.
