Author: Dr Filip Biały | filip.bialy@faculty.opit.com
For a long time now, hypes and bubbles have been part of the techno-economic cycles of capitalism. They occur whenever there is a product, service, or technology from which many people – both big investors and regular users – hope to benefit, whether in purely financial terms or through its promised life-transforming potential. The hype over AI is just another instance of such a collective preoccupation with a particular technology that promises a brave new world. The sheer scale of the investments – and, recently, their circular nature – combined with the fact that AI has not yet delivered on many of the technologists’ prophecies, makes people cautious. The Bank of England recently warned that the AI market may soon undergo a “sharp correction”. In simple terms, the worry is that the unstoppable overvaluation of technology companies will meet the immovable fact that the resources and infrastructure needed to sustain AI investment are simply not available.
Certainly, there are signs that material conditions might be insufficient for markets to maintain their current trajectory. At the same time, several factors could not only sustain but even increase investment in AI.
First, there is fierce competition between technology companies over AI. That competition translates directly into the number of chips purchased and data centres built, which in turn has become the most visible marker of progress for venture capitalists, who may otherwise be ignorant of the technology. Paradoxically, this makes companies less willing to research more cost- and power-efficient models, as such work might reduce the need for infrastructural expansion and thus confuse investors.
Second, AI has become part of geopolitics, with the governments of the United States and China both supporting their domestic AI companies in the hope of using the technology as a means of achieving global domination – not just in economic but also in military terms. AI has been transforming modern warfare, with the wars in Ukraine and in Gaza serving as testing grounds for AI-powered solutions produced by the very same people who are responsible for hyping up AI in other domains.
Third, AI has become a focal point of the global economic system. It is precisely in this sense that it has become “the new oil”: it builds vast fortunes for technology companies and their chief executives, just as actual oil did during the first Gilded Age in the late nineteenth century and for most of the twentieth century. Unless some new “big thing” arrives to replace AI, it will retain its function as a source of profit, especially for those who have already made big fortunes out of it.
Fourth, we should not underestimate the determination of contemporary “robber barons”, such as Elon Musk or Sam Altman, who are motivated not by economic reasons alone. They appear genuinely to subscribe to the tenets of a Promethean, transhumanist philosophy, which perceives the creation of artificial general intelligence (AGI) as the ultimate solution to human problems and as a step towards humanity’s expansion into space.
I would not claim that these reasons guarantee the endless continuation of AI investment, as every hype cycle eventually ends. Instead, I would read the current wave of scepticism over AI as a healthy sign that we are starting to understand the technology’s actual possibilities and limitations.
A recent, widely publicised MIT report claims that the vast majority of generative AI pilot programmes fail to translate into improved delivery of services. The report shows that, while the technology itself may not be to blame, the way in which it is deployed leads to a high rate of failure. One major reason is that companies invest resources in highly visible use cases, while less spectacular but potentially more transformative implementations remain underfunded. An AI enthusiast would argue that this is a necessary learning curve for business leaders and technologists alike, one leading towards an AI landscape in which a proper understanding of AI’s potential produces significant improvements in business processes. In this context, the overblown role of generative AI may diminish, as it is often the now “old-fashioned” discriminative AI models that hold the key to real transformation.
It is important to stress that many AI experts – such as Yann LeCun – do not believe that large language models (LLMs) will lead to AGI, citing hard limitations such as the scarcity of new, high-quality training data and the fact that LLMs do not possess an understanding of the physical world. The latter seems to be the truly hard limitation, and it is the failure to address it that may lead to the bursting of the AI bubble.
Hypes and bubbles occur to a large degree in the sphere of our imagination. An academic discourse analyst would say that they are social constructs with very real consequences. This also means that the way we describe our predicament may influence how decisions about AI investment are made. That was the case when DeepSeek’s R1 model was characterised as a “Sputnik moment”, triggering a momentary crash in Nvidia’s share price.
Thus, we should remain cautious in pronouncing an imminent burst of the AI bubble (unless, for some reason, we hope for one). I would rather see a more mature understanding of AI emerge, one that hopefully will not require the bubble to burst. This does not mean, however, that we will stop being subjected to AI hype from technologists and their followers. But we may eventually become better at telling the difference between their claims and reality.
Press Office:
Test Associates Ltd | hello@testassociates.co.uk