In recent months, certain political decisions have triggered a series of crashes in the financial markets: tariffs imposed by Donald Trump have caused share prices to fall in both the United States and Europe; bond issuances aimed at funding military spending in major countries such as Germany have driven up government bond yields and depressed their prices. Only one force is pushing in the opposite direction: the rise of artificial intelligence. This piece was published in March, before the newly elected Pope chose the name Leo, a choice he linked to artificial intelligence heralding a new revolution.
Beyond geopolitics, the single most decisive factor in explaining the behaviour of stock markets over the past two to three years has been investors’ bets on the potential of artificial intelligence—on the speed of its proliferation and, consequently, the profit prospects for companies offering AI-based services or selling the chips required to train such algorithms.
Yet our obsession with the short term has caused us to lose sight of a broader question: while OpenAI, Elon Musk's xAI, Anthropic and DeepSeek pursue the development of artificial general intelligence (AGI), meant to resemble the human mind, machines have already surpassed us in many specific tasks.
As recently as 2016, it made global headlines when DeepMind's AlphaGo defeated the human champion Lee Sedol at Go, a board game of even greater complexity than chess. Today, it would be impossible for a human to win, as the algorithm has continued to evolve.
Nello Cristianini, Professor of Artificial Intelligence at the University of Bath and a contributor to Appunti, has chronicled the rise of this new generation of AIs in his books La Scorciatoia and Macchina Sapiens (both published by Il Mulino). His latest book is Sovrumano. Oltre i limiti della nostra intelligenza (Il Mulino). The interview follows.
In which areas has artificial intelligence already outperformed human capabilities? How is it evolving?
Human beings consider themselves capable of tackling a broad range of problems—from translating Latin to making medical diagnoses and even inventing new drugs. The same intelligence can handle varied tasks: this is what is now referred to as “artificial general intelligence (AGI).”
At present, two main approaches are being pursued to achieve this hypothetical AGI. The first is known as the scaling hypothesis: the notion that increasing the size of current models will enhance the machine's general intelligence. This hypothesis enjoys considerable backing, with major corporations investing billions of dollars in the belief that simply scaling up, without changing the model's architecture, will yield significant results.
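The scaling hypothesis is usually stated as an empirical power law: as parameter count grows, test loss falls smoothly and predictably. A minimal sketch of that idea follows; the function form echoes published scaling-law work, but the constants `n_c` and `alpha` here are illustrative placeholders, not values from the interview.

```python
# Sketch of the "scaling hypothesis": under an assumed power law,
# projected test loss decreases smoothly as parameter count N grows.
# n_c and alpha are hypothetical constants, chosen only for illustration.

def projected_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Assumed power-law loss L(N) = (n_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Larger models -> lower projected loss, with diminishing returns.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> projected loss {projected_loss(n):.3f}")
```

The point of the sketch is the shape of the curve, not the numbers: each tenfold increase in size buys a smaller absolute improvement, which is why the bet on pure scale requires such enormous investment.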
The second approach, only recently explored, concerns reasoning—the machine’s ability to formulate genuine logical arguments. Both pathways may ultimately prove effective, possibly in combination. This is not a matter of opinion or speculation; it is a matter of engineering.
How can we determine if a machine is more intelligent than we are?
If we intend to build increasingly intelligent machines, we need a way to measure their progress. If the goal is to match a human skill, we must be able to pinpoint the moment at which this occurs. There is, in fact, a science dedicated to measuring intelligent machines—a kind of “machine psychometrics”—in which AI systems are subjected to sophisticated batteries of tests and their performance assessed.
With each new generation, machines improve. We can administer the same tests to humans to identify when machines catch up or pull ahead. This isn’t about opinions but about objective, repeatable measurements.
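The comparison Cristianini describes can be reduced to a very simple procedure: administer the same test items to the machine and to people, score both against the same answer key, and see who comes out ahead. The sketch below illustrates that procedure; the test items, the model's answers, and the human baseline score are all invented placeholders, not real benchmark data.

```python
# Minimal sketch of "machine psychometrics": score a model and a human
# baseline on the same test items and compare. All data below are
# hypothetical placeholders used only to illustrate the procedure.

def accuracy(answers: dict, key: dict) -> float:
    """Fraction of items answered exactly as in the answer key."""
    correct = sum(answers.get(item) == truth for item, truth in key.items())
    return correct / len(key)

answer_key    = {"q1": "A", "q2": "C", "q3": "B", "q4": "D"}  # hypothetical items
model_answers = {"q1": "A", "q2": "C", "q3": "B", "q4": "A"}  # hypothetical responses
human_baseline = 0.70  # assumed average human score on the same items

model_score = accuracy(model_answers, answer_key)
print(f"model: {model_score:.2f}  human baseline: {human_baseline:.2f}")
print("machine ahead" if model_score > human_baseline else "humans ahead")
```

Because the measurement is objective and repeatable, rerunning it on each new model generation shows exactly when the machine's score crosses the human one, which is the crossing point the interview refers to.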
Companies are actively competing in this arena, the tests are becoming increasingly challenging, and the machines ever more capable—closing in on human-level performance. It is a trend we can no longer ignore.
Where do things currently stand?
Major tech companies have already outlined a sort of "roadmap" to reach artificial general intelligence. OpenAI, the developer of GPT, envisions five phases:

1. Chatbots: conversational AI systems;
2. Reasoners: systems that solve problems at a human level;
3. Agents: systems that can act autonomously on a user's behalf;
4. Innovators: systems that can aid invention and discovery;
5. Organizations: systems that can do the work of an entire organization.

We are currently in the third stage: that of the "agents."
What are the implications of AI already being ‘superhuman’ in certain fields?
We must acknowledge that in some domains we’ve already been surpassed, and in others, we soon could be. This should be the basis for our discourse.
Beyond specialised cases of machines overtaking humans (such as in chess), generalist machines like GPT-4 or Claude 3.7 achieve superior results in various areas when subjected to rigorous testing.
For instance, in recent assessments in mathematics and programming, they often outperform the average human, and at times even surpass highly skilled individuals. In certain cases, they exceed the level of top specialists; an advantage of at least 10% over the general population is now well established. They can pass university exams, make medical diagnoses, and translate fluently into around 200 languages. In many areas, they match or exceed our abilities.
The practical consequences are already visible. The translation sector, for example, was among the first to feel the impact: until a few years ago, a translator specialising in rare languages (such as Lithuanian or Portuguese) could earn a good income. Today, however, any combination of 200 languages is handled effortlessly and at no cost by machines.
Radiologists may be next—machines already interpret some scans more accurately than humans—as might taxi drivers, journalists, teachers, and even doctors.
This is a world evolving at high speed, and we must be prepared. We also need to make AI more environmentally sustainable, with reduced energy costs and greater compliance with regulations.
In future, we’ll also need to move beyond the current dependency on text: at present, machines primarily read the web in textual form, but eventually, they’ll need to generate their own data from the real world—via robots or cars—and at that point, we will no longer be able to “see” everything they learn. Research is far from over.
When it comes to work, the sense of being overtaken that translators feel today—realising that machines now do their jobs more efficiently—may soon be shared by radiologists, software engineers, physicians, educators, and journalists, across various contexts and applications.
What role remains for human beings?
This interests me both philosophically and practically. Philosophically, this is the moment to pause and reflect: we must identify those things machines will never be able to do. I jokingly—but not entirely—refer to this as “the residue”: what remains once the machine has taken everything it can. That residue is what is uniquely human. And it works both ways: there are things machines can do that we cannot—this too is a kind of “residue.”
We will discover these territories as they gradually become more inaccessible, because the machine will push further and further until it crosses a frontier and enters a world incomprehensible to us. Yet, on our side, there are also things we can do that machines never will. Let us identify them now, because this is where our identity, our jobs, and our competitive edge lie. I’m not suggesting that answering this humanistic question will be easy—but we must address it now. Let us not make the mistake of brushing it aside by trotting out the tired narrative that we are “unbeatable” for some vague, unexplained reason. If we know the reason, let’s explain it—because the data are telling us a rather different story.
Are we living through a new industrial revolution?
There have been moments in history akin to this one, when a major technological innovation, such as the steam engine, brought profound change. It is true that the advent of the steam engine was disruptive, but people eventually adapted. One might expect the same to happen now.
However, let’s remember the steam engine had far-reaching consequences: it kick-started a certain kind of industry, brought labour into the cities, shifted populations from rural areas to urban centres, and created a proletariat that had not previously existed—eventually giving rise to socialism and revolutions. A technical innovation altered the course of history, and I believe it could happen again.
This is the kind of innovation that can transform the world. Bear in mind that “superhuman” may sound dramatic, but it merely refers to what surpasses the limits of human beings. It is not supernatural, nor magical: a superhuman machine is simply one that has capabilities greater than mine.
In the context of intelligence, that would mean a machine more intelligent than a human being—and we must approach this with the calm detachment of scientists: how it is measured, how it is detected, how it is built, how it is controlled, and how it is contained.
That, I believe, is the mission for researchers in 2025.