Sunday 31 December 2023

Artificial Intelligence: Digital Utopia or Dystopian Nightmare?

“They shamelessly print, at negligible cost, material which may inflame impressionable youths, while a true writer dies of hunger. Cure the plague which is doing away with the laws of all decency, and curb the printers. They persist in their sick vices, setting Tibullus in type, while a young girl reads Ovid to learn sinfulness. [...] Writing, which brings in gold for us, should be respected and held to be nobler than all goods, unless she has suffered degradation in the brothel of the printing presses.” So wrote the Italian Benedictine monk Filippo de Strata in a letter to the Doge of Venice in 1490, complaining about the introduction of the printing press to the city.


["The Unrestrained Demon". An anti-electricity cartoon from 1889.]

Throughout history, each wave of technological innovation has been met with its own blend of curiosity and trepidation. Take, for example, the discovery of electricity: initially regarded as little more than an amusing novelty, with public demonstrations ranging from Thomas Edison’s mildly entertaining electric pen to the more macabre spectacle of electrocuting animals, it also came to be feared as a danger to public health. As the decades passed, these worries gradually subsided as the profound utility of electricity became apparent even to the staunchest skeptics, eventually establishing it as the bedrock of modern civilization. Consider another example: the Luddites, a group of textile workers in 19th-century England who destroyed weaving machinery in protest at job displacement, and who did not shy away from violence.


These historical instances reflect an enduring concern: the anxiety accompanying new technologies and the fear that human skills will be rendered redundant. Similar concerns are directed towards the field of artificial intelligence (AI) today. 


AI has been with us for at least several decades, making great strides since the introduction of rule-based systems in the 60s, but it is only in the last few years that a significant milestone appears to have been reached, through the combination of large language models (LLMs), which are deep-learning models, and the truly massive data sets used to train them. Although Artificial General Intelligence (AGI), which many consider the Holy Grail of AI research, still seems out of reach for now, narrow AI routinely outperforms humans in a number of highly specialised tasks.


So, what are we to do in the face of AI's relentless march forward? Should we cross our fingers and hope for the best like latter-day Pollyannas, or become neo-Luddites, smashing away at every AI creation in the digital domain? The answer is neither. For one, the technology is still in its infancy. Granted, it has already demonstrated that it can provide assistance in a number of domains, but it often makes mistakes, hallucinates, and is not particularly creative unless carefully steered by an expert user.


An AI agent can provide seemingly insightful responses to questions about highly specialised subjects, questions that an average person lacking the expertise would not even know how to pose. Experts in particular fields, such as programmers, engineers, artists and scientists, have the training to frame complex technical questions in specialised terminology, and are able to understand the contextually specialised responses of the AI. These expert users can also identify problems and inadequacies in those responses and improve them through successive queries, which, again, the average person is in no position to do for lack of specialised training.


On a practical level, an expert user is therefore able to employ today's AI systems as semi-skilled collaborators, driving gradual improvements and seeking additional help as and when it is needed. It is only the experts, who have spent a lifetime honing their skills, who have the necessary know-how to push this technology to its limits, far beyond what a casual user can achieve.


For instance, suppose you compose music for a living, and that there exists an AI agent that can help with composing music. You could ask it to prepare a template for a theme that, even though it has yet to completely coalesce in your mind, you know must be in C minor, with an arrangement reminiscent of Baroque works by, say, Telemann and Pergolesi. You guess the sound you have in mind is probably about 80% Telemann and only about 20% Pergolesi. Maybe there is even a bit of Corelli in there, but you are not sure. You know exactly which instruments you want to use, you know the harmony, and so on, and you pass all this information to the AI, asking it for a test theme. Maybe you don't like what it gives you. You ask for variations until you find one that roughly matches what you have in mind, or one that clicks and inspires you. Then you ask it to put all the notes on a staff, print the score, and edit the details by hand. Then you scan the score you worked on and send it back to the AI, asking it to improve the timing or change the duration of this or that note, until the result satisfies you and is close to what you envision. In short, this collaboration with the AI will significantly simplify your work as a composer, while you remain the creative director of the entire process.


Is this inappropriate? Consider Hans Zimmer, the famous film score composer, who can afford to employ a number of other composers, as well as orchestrators and sound engineers, to help him write and arrange his movie scores. AI could allow budget-constrained and lesser-known composers to do something similar, and perhaps even to become competitive and reach new audiences with their music.


All this will lead to a democratisation of the creative process and a creative inflation that will have a lasting impact across every professional field. To appreciate how this may play out, let us continue our thought experiment in the music industry. Supply and competition will certainly be greater, with the marked difference that the playing field will be more level, lesser-known composers now being able to challenge more established names. Output will increase enormously, making it far more challenging to build and maintain a lasting reputation. In an ocean of mediocre compositions, the deciding factor will inevitably become the uniquely personal touch a composer imparts to their music.


None of this is sufficient to conclude that fewer people will consider music as a viable career option, but it will most definitely affect how musicians build a career. It is also unlikely that people will suddenly stop wanting to learn how to master musical instruments, a difficult process which did not disappear even when synthesisers and electronic music were invented. There may even be increased interest in attending live events, such as concerts and recitals. 


Take the example of painting and photography. Photography did not destroy painting; it rejuvenated it. It redefined the meaning of the art of painting and, as a bonus, created the altogether new branch of artistic photography and its related professions. When faithful representation in painting succumbed to the superior precision of the photographic plate in the late 19th and early 20th centuries, the creator was freed from strict adherence to realism, giving birth to modern art, and a new generation of groundbreaking artists came to the fore: Monet, Manet, Van Gogh, Picasso, Kandinsky, Dalí, and many, many others all rode this new wave.


There is not enough space here to elaborate on the very significant social knock-on effects of these developments; that is an exercise better left to historians. The bottom line is that it would be disingenuous, to say the least, to persist in the claim that photography destroyed painting, not least because realistic representation remains an active branch of painting today.


In today’s world, individuals who make a living exclusively from creative painting have a much easier time doing so than their predecessors in the 18th and 19th centuries, who could achieve little without the continued support of rich sponsors. Of course, we must concede that all this was made possible by fundamental changes in social conditions for the better, but these very changes were themselves significantly influenced by the historical developments mentioned above. It is hard to disentangle with any confidence exactly how all these trends fed on each other. Society huffed and puffed, blew the doors down, and replaced an obsolete structure with a more elaborate one.


In the numerical sciences, the pocket calculator, and later the computer, did not eliminate the need to learn algebra. They accelerated the ability to perform complex calculations to an incredible degree, but did not make the learning of the underlying mathematical rules and methods irrelevant. We still continue to teach these rules and methods all the way from elementary school to university. 


Where these technological developments have clearly made a difference is in the fact that we now recognise there are better mechanisms available for controlling the accuracy of our results and for minimising errors, and we employ them. No researcher today would expect their doctoral student to perform all calculations by hand, because of the comparatively greater likelihood of introducing small errors somewhere along the line, errors that can cost greatly in both wasted time and frustration. What researchers are interested in is the proper scientific analysis of their measurements.


In any case, the further development of AI is of such importance that it now constitutes a strategic necessity for every country, so any discussion of impacts and limitations should start with this as a given. There are indeed significant problems and challenges, but they are potentially solvable by adjusting existing socio-economic models or by introducing novel solutions, as has happened again and again in the past. As societies gradually grow accustomed to such changes, they become better able to absorb cultural shocks and reorganise around new points of equilibrium. Perhaps the greatest challenges AI will bring are of a different nature: for example, how will we prevent a race to the bottom when it comes to autonomous smart weapons, and how can we ethically align the goals of a general AI with human goals? AI is designed to find solutions to very specific questions, and these may not always be the answers we would hope for, especially when the AI is faced with complicated ethical choices.


Now let us consider what AI can do for teaching and research. Yes, AI will eventually be able to solve standard school exercises and explain all the intermediate steps in detail; that is, it will be able to provide specific solutions to well-formulated questions drawn from a range of known problem types. This suggests that the education system will need to adjust in response, focusing less on methodology and information gathering, which can easily be automated, and more on developing critical thinking and analytical skills. At the frontiers of scientific research, by contrast, we often don't even have well-formulated questions, nor do we know which questions it would be best to ask, nor how to interpret the results when data are insufficient. Solutions are often multidimensional and not unique, and AI methods can help us navigate a complex parameter space.


Many parents and teachers have raised concerns about the state of mathematical literacy, primarily in the West, compared to previous decades. These concerns rest on the unstated assumption of ceteris paribus, which is something of a problem, because the skills of today's students in many other subjects, some of which did not even exist a few decades ago, are on an entirely different level when it comes to collecting and using available information. Even if we concede that students today may be weaker in mathematics, the fact remains that these other skills more than make up for it.

Granted, this is an issue that should concern us, but it is not nearly as catastrophic as it is usually made out to be. Compare, for example, the level of students in computational analysis and numerical methods in the 70s with what they can achieve today: there is no comparison. The number of students choosing STEM subjects at university shows a steady increase, although dropout rates, especially in physics, remain high.


Are all concerns about the development and use of AI to be dismissed as mere scaremongering? Of course not; that would be naive and dangerous. There are many legitimate, well-grounded concerns. On a societal level, we need to think hard about how to regulate the AI industry, and ways must be found to remunerate professionals whose work has been used to train AI models. How exactly that might work in practice is yet to be determined, but we had better start having these discussions now.


Sooner or later every field and every profession will be affected. Moravec’s “landscape of human competence” gets progressively more flooded as time goes by. Highly creative professions, until recently thought to be out of reach for at least a few more decades, are already feeling the first splashes. There are now AI copilots that can write code, and software engineers are already trying them out, with a mix of elation and worry. In my own scientific field, we increasingly rely on machine learning to look for patterns in massive data sets; there is simply too much data, and without such tools the work would take decades.
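To make that last point concrete, here is a toy sketch of the kind of pattern-finding such tools automate: grouping an unlabelled list of measurements so that a human only needs to inspect a few cluster centres instead of every data point. This is purely illustrative, not a description of any real research pipeline; the function and the synthetic data are invented for the example, and it uses a deliberately minimal k-means routine written from scratch in plain Python.

```python
import random

def kmeans_1d(values, k=2, iters=20):
    """Minimal k-means for 1-D data (assumes k >= 2): assign each value
    to its nearest centre, move each centre to the mean of its members,
    and repeat."""
    lo, hi = min(values), max(values)
    # Deterministic initialisation: spread the centres over the data range.
    centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            groups[nearest].append(v)
        # A group that ends up empty keeps its old centre.
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Two populations hidden in one unlabelled list: ~N(0, 0.5) and ~N(10, 0.5).
rng = random.Random(42)
data = ([rng.gauss(0.0, 0.5) for _ in range(200)]
        + [rng.gauss(10.0, 0.5) for _ in range(200)])
centres = kmeans_1d(data)
print(centres)  # two centres, close to 0 and 10
```

Real surveys work with far richer, higher-dimensional features and dedicated libraries, but the principle is the same: the machine does the exhaustive sifting, and the human interprets the handful of structures it surfaces.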


The steady march of AI will inevitably cause major disruptions in how society currently operates, and I strongly suspect it will make the introduction of some form of universal basic income (UBI) unavoidable. We have a long way to go.

