IS THE AI INVESTMENT BOOM SUSTAINABLE? On Dotcoms, Tulips, Bubbles & Bots
- Feb 12
Originally published in late 2023, in our launch issue of The STOCKtake, this analysis remains a relevant framework for 2026 as AI investment continues to track several classic bubble indicators. While the scale has grown, the underlying risks involving disconnected assets and human over-exuberance remain central to current debate.
I have mixed feelings about AI (you can get a sense of SLB's current views in our forthcoming revised White Paper Thought Leadership in the Age of AI) – but my feelings are not quite the issue here. What's been noticeable in AI investment is how few voices have cautioned against it – not from some sort of ‘moral panic’ premised on a fear of what next-gen AI (AGI specifically) may be capable of, but on concerns about the wisdom of hitching one's investment decisions too heavily to AI.
Of course there are some ethical concerns with AI – right here, right now – that have nothing to do with some future nightmarish sci-fi scenario (although the fact that AI leaders themselves urge regulation suggests there may in fact be something to fear) but which are worth mentioning: issues around copyright when AI ‘trains’ on anything one puts out there; the problem of the personal right to one’s own image; political issues with ‘fake news’; and mass layoffs and writers’ strikes (since resolved). These are all very real and current problems. But here I’m interested rather in the pragmatic concern that AI investment may be not merely ‘frothy’ but actively bubblicious – a view aired by relatively few, but including Fund Manager Peter Fitzgerald at Aviva, who, interviewed in TrustNet back in 2023, reported being wary of AI investment because ‘it reminds me too much of the tech bubble’, whilst Forbes at the same time suggested that ‘the good vibes are drying out’.
Goldman Sachs at the time quite definitively rebuffed such concerns on various grounds – including that AI had already proven its benefits in a way that dot-com-era tech had not – calling it closer to a ‘revolution’ than a bubble. For sure, plenty of AI companies will collapse, but that is the nature of the beast. And this belief that AI fundamentally differs from the tech bubble remains the mainstream view. There are also a few truly lone voices who don’t believe in bubbles at all – to wit, the economist David DeRosa, who attempts to debunk the bubble ‘myth’ in his 2021 tour de force of ‘ranty’ yet well-researched scepticism, Bursting the Bubble (CFA Institute Research Foundation), in which he fiercely refutes almost all famous bubbles as either never having happened at all (Tulip Mania) or as not technically bubbles (the dot-com boom and even the 2008 housing crisis), in a broader attempt to save the Efficient Markets Hypothesis from the claws of Behavioural Finance.
Nonetheless, whilst his book is certainly food for thought and whilst, as a writer rather than an economist or Fund Manager, I’m wary of disagreeing with Goldman Sachs (for whom I generally have respect) and indeed with the prevalent view, I can’t help but feel some caution here, at least if ‘classic’ bubble writings are to be believed. All bubbles (if they exist) tend to have a number of key features in common, namely:
- Availability heuristic (basically, it’s ‘everywhere’ and on people’s minds – AI now certainly is);
- A significant new and/or disruptive technology (tick!);
- International ‘contagion’ – multiple countries would be negatively affected (DeRosa argues that this is more ‘simultaneousness’ than ‘contagion’; regardless, an AI meltdown would affect multiple countries at the same time, which could presumably spread economic problems through economies that are in any case intertwined);
- Non-regressive prediction (basically, things happening that have never happened before, e.g. the US nationwide fall in house prices. By definition, we have no way of knowing what might happen that has never happened before – especially where the ‘black box’ of AI is concerned; recall the Pandemic);
- Belief perseverance and confirmation bias (seeing only what you want to see – there is certainly a kind of ‘fandom’ in AI, as there is with Bitcoin);
- Over-availability of credit, over-leverage and wide use of derivatives (DeRosa strongly refutes these, for example pointing out that options were not in fact deployed in Tulip Mania, and that credit availability does not always lead to a bubble – although it might well exacerbate one).
Troublingly, almost all of these criteria are indeed met with AI. In addition, I would be troubled by the broad way in which ‘AI investment’ is being lumped together – B2B AI and B2C AI are surely wildly different ‘products’. Consumers may ultimately reject elements of AI in a highly emotive and non-rational fashion if the general news narrative on AI is seen to threaten their livelihoods (indeed ChatGPT has already seen wavering levels of interest among the consumer market). The economic effects of this – or at least of narratives of this sort – might ultimately prove disastrous in any case since, as DeRosa suggests, putative bubbles are little more than the stock markets predicting economic disaster. If this is so then, ironically, his no-bubbles theory actually makes a problem sound more likely, not less…and a bubble by any other name surely still bursts all the same.
Whilst it’s hard to push back against the mainstream of expertise, the rare voices suggesting caution may well be worth noting. With ‘belief perseverance and confirmation bias’ a prime feature of a bubble, however, such warnings seem unlikely to be heeded.
