Of AI bubbles and snake oil salesmen

Note: I originally wanted to call this article “Of AI hype cycles and snake-oil salesmen” but found out that the (Gartner) hype cycle is a branded, very specific five-stage graphical representation of the adoption of a technology. I’m not sure why it’s a cycle if it’s not cyclical (and Wikipedia agrees), so let’s talk about bubbles.

We’ve had a few AI bubbles in the past 75-odd years. A few years into AI becoming a real field, Herb Simon (one of the pioneers of AI and, until Geoff Hinton, the only person to win both a Nobel Prize and a Turing Award) said, “Technologically, as I have argued earlier, machines will be capable, within twenty years, of doing any work that a man can do.” He made that claim in 1960, but he also added that computers were too expensive to replace humans because it cost $10,000 to rent a computer, and who would spend that much to replace cheap labor? We know how that worked out. The 1950s-1970s were an initial golden age of AI. It’s during this time that neural networks first became popular, until Marvin Minsky and Seymour Papert showed that these networks had some major drawbacks. Cue another 20 years of AI work on symbolic reasoning. By the 80s, expert systems were going to replace doctors, or at least assist them (where have we heard that before?). The late 80s and 90s were another long AI winter. By this time, the drawbacks of the original perceptrons had been solved using hidden layers, the sigmoid activation function, and backpropagation, and neural networks were once again poised to change the world. There was just the small matter of getting these networks to learn anything useful. Cue Nvidia, the GPU revolution, and, voila, deep learning. If you look carefully, there is a nugget of wisdom here - when AI progress stops, add more layers (and more activation functions). It worked in the 80s with the addition of the hidden layer and the sigmoid function. It worked in the 2010s with deep learning and Adam. Who’s to say it won’t work again? I’m sure there is a meme in here somewhere with “MOAR LAYERS” but I’ll leave that for someone more adept. Anyway, so here we are, in 2025, in the middle of an AI boom. People are looking at the amount of money AI companies are raising and asking a reasonable question - are we in another bubble?
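
To make the perceptron story concrete, here is a minimal sketch - mine, not anything from the original papers, written in Python/NumPy with arbitrary hyperparameters. A single linear unit can never represent XOR (roughly the kind of limitation Minsky and Papert pointed out), but one hidden layer of sigmoid units trained with backpropagation learns it easily.

```python
# Toy illustration: XOR is not linearly separable, so a single-layer perceptron
# can't learn it, but one hidden layer + sigmoid units + backpropagation can.
# All names and hyperparameters below are my own choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 sigmoid units, one sigmoid output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)   # hidden activations
    p = sigmoid(h @ W2 + b2)   # predicted probability that the answer is 1

    # Backward pass (cross-entropy loss, so the output error is simply p - y).
    d_out = p - y
    d_hid = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(p.ravel(), 3))  # approaches [0, 1, 1, 0]: XOR, learned
```

Drop the hidden layer and no amount of training gets you past three of the four cases - which is, in one toy example, why the field needed the 80s fixes before it could need the GPUs.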

But what specifically characterizes a bubble? Unsurprisingly, there isn’t an established consensus about bubbles, and economists disagree on whether bubbles are even real. One natural way to think about a bubble is as a rapid escalation of market value that is disconnected from underlying fundamentals. But, hear me out, what if it’s the rapid de-escalation that matters? That is, if a bubble doesn’t burst, is it even a bubble? So the real question is whether this bubble, hype cycle, whatever you want to call it, is going to grow, plateau, or burst. And, continuing this line of thought, does it really matter if this bubble bursts? Because we’ve had previous AI bubbles burst and everything turned out fine. Sure, if you ask the connectionists from the 70s and 80s, they’ll probably tell you that the first bubble and the subsequent AI winter (which led to the funding of symbolic systems) set neural networks back by a couple of decades. But, realistically, neural networks didn’t take off until GPUs made deep learning practical, and that was only going to happen because GPUs were built for gaming. The worst that happened was that investments in AI research were wasted and then funding got diverted to other areas, some of which didn’t pan out, which is what usually happens with investments in research anyway. Then there was the whole cohort of students who worked in AI whose careers never took off or who had to refocus on something else. Again, in the larger sense of affecting the US/world economy, this bubble bursting didn’t really matter. All of this was true for the second AI bubble burst as well.

But, maybe, past AI bubbles aren’t the right historically relevant event. Maybe the 2000 dot-com bust makes more sense. The similarities are there - a young, technologically nascent industry with a lot of promise to change the world, leading to investors throwing money at it. We know how the dot-com boom ended. Eventually, an external trigger (9/11) combined with the realization that this new technology didn’t seem to be doing much led to a precipitous withdrawal of capital and the destruction of a lot of market value. But the people most affected by all of this were the investors and the people who worked in the industry. The stock market was down for a few years and then clawed its way back up. The industry roared back to life, and most anyone who worked in tech in 2000 would have found the next two decades an amazing time to work in tech. The non-tech people who were really affected were those on the cusp of retirement who watched their portfolio values slide. Perhaps the lasting legacy of the dot-com bust is that the techniques employed to save the economy might have led to the 2008 recession, but I’m not an economist, so I can only speculate. As an aside, I had a front-row seat to all of this. In late 2001 I was just entering my second year as a grad student. I watched many of my colleagues (and even some professors) leave to join startups. Each one had 7 or 8 offers in hand. By 2002, a few had come back to grad school, and the classes graduating that year had far fewer offers.

We could look a little closer in time to the crypto hype (it could be a bubble, I guess, but I’m not sure it has burst yet), but even there the industry has had its ups and downs without affecting most of the country. So, once again, does it matter if this AI bubble bursts? If you are a VC throwing gobsmacking amounts of money at anything with the words AI, generative, LLM, agents, and/or machine learning in the name, then yeah, it matters. It also probably matters to the institutional investors and/or pension funds who are handing their money over to these VCs to invest. If you are a student or engineer focusing your career on AI, a burst will probably cost you a few years of your life pivoting to something else. All of these are things that have happened before. The only difference is scale - much more money is invested in AI now, and there are way more engineers and researchers in the field than in previous AI bubbles, but probably in line with what we saw with the computer industry in 2000. Save a little extra for retirement and you should be fine this time around too, right?

Unfortunately, there are worse scenarios. With previous AI bubbles and the dot-com bust, the technology itself wasn’t in widespread use, at least not enough to affect regular people going about their lives. That’s no longer true - obviously for technology in general, but also true for AI. And I don’t say that because OpenAI spent $14 million on a Super Bowl ad or because your grandma uses ChatGPT to speak teen lingo. We just have to look at our barometers for tech adoption - porn and fraud. In both cases we are seeing the effects of AI technology being weaponized for profit - from sites that create porn from a single uploaded image to audio and video deepfakes that result in millions of dollars stolen. This market penetration is a vindication of the technology - we’re reaching the adoption part of the cycle, and all those engineers who are pivoting to AI are probably making the right move. But it’s not so good in many other ways. We now have billions of dollars of investor money supporting technology whose primary and most effective use is defrauding people. We’ve seen this over and over again, from crypto rug pulls to NFTs to now AGI. Which makes me think that the technology hype cycle needs an update for the modern era - two extra stages: a sort of proto-adoption phase where the primary users of the technology are fraudsters and the porn industry, and a second trough of disillusionment waiting around the corner, because fraud and porn have a high tolerance for edge cases and side effects that normal adopters don’t.

But the real thing that makes this coming second trough of disillusionment worse is that people are going to adopt AI, give it control of things that affect people’s lives (people who have nothing to do with AI), and it’s going to end up causing chaos. This chaos is not guaranteed, but it’s a real possibility. We are already seeing some of it, whether it is misinformation campaigns, chatbot conversations that end in suicide, or photos whose subjects’ skin is lightened. We are starting to anthropomorphize this incredibly effective “next token prediction” engine and think that it is thoughtful and caring and rational. But it’s just cleverly rewording what is already in its data. And there is credible reason to believe that state actors are trying to flood the internet with propaganda articles precisely to get them hoovered up for training LLMs.

LLMs - what are they really good for?

The problem with next token prediction as the basis for reasoning is that you can only reflect what is in your data. But LLMs can use it to write incredibly polished prose, and that polish gives whatever the LLM produces a sort of gravitas. So when experts claim that AGI is around the corner, people believe them. But LLMs aren’t really reasoning machines; they are actually automation machines. Look at all the extra, boring work you do all day - the semi-repetitive steps that require less thinking and more just doing. LLMs can automate all of that away. How about that museum-quality code that no one has touched in 5 years? You could probably get a better handle on what it’s doing with an LLM’s help. Not enough to rewrite it in 2 months, but enough to at least understand what it’s doing. That’s going to disrupt things. But it isn’t as sexy or as disruptive (though, ironically, perhaps more actually valuable) as saying AGI is coming. Hence the snake oil - why sell products that make your engineers more productive (that’s so 2010s) when you can sell products that purport to eliminate engineers altogether? Somewhere out there is a company that might just become the Theranos of AI: a technology that works on the face of it but doesn’t do what it promised investors it would. AI, but as in Actually Indians, not Artificial Intelligence.
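
If “next token prediction” sounds abstract, here is a deliberately tiny sketch of the idea - mine, in Python, using a bigram counter rather than a transformer, so it bears roughly the same relation to a real LLM that a paper airplane does to a 747. It counts which word tends to follow which in its training text, then generates by sampling the next word from those counts. The loop is the same shape as what an LLM runs, and, crucially, it can only recombine what was already in its data.

```python
# Toy "next token prediction": count which word follows which in the training
# text, then generate by repeatedly sampling a likely next word. Real LLMs use
# transformers over subword tokens, but the generation loop is the same idea.
import random
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next token and the next token predicts nothing "
    "new it just rewords what is already in the data"
).split()

# Bigram counts: for each word, how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(start, length=10):
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # nothing in the data ever followed this word
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]  # sample the next token
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Every word it emits came from the training text; add a few billion parameters
# of polish and you get something that sounds thoughtful while still only
# rewording what it has seen.
```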

Other things

OpenAI released a new image model (but it still can’t draw a clock face set to a specified time or an image of someone writing left-handed, among other things), but it’s really good at copying image styles, which has led to completely appropriate outrage. One can probably make the (tongue-in-cheek) case that LLM development is actually communist - from each according to his ability (Studio Ghibli) to each according to his need (one’s desire to create Studio Ghibli-styled memes).

Google released their Gemini 2.0 Flash with native image generation (but it still can’t draw a clock face set to a specified time or an image of someone writing left-handed, among other things).

LLM Usage: Every single one of the 2000 or so words in this article was written by me without the aid of any LLMs. I then passed it through Claude to find any typos or grammatical mistakes.

Disclaimer: The views expressed in this article are my own and do not necessarily represent the views of my employer.

© 2025 Unmesh Kurup

Bluesky GitHub