I'll say it out loud: AI (industry and academia) is going through a rather nasty phase. Yes it's thrilling and inspiring, yes it's nasty.

AI is famous for its winters, but we're now deep in the hottest summer ever, and I know there are many of us enthusiasts out there just sweating away in the heat.

The problems include (but aren't limited to):

1. The absurd pace

In 2016, Andrej Karpathy built ArXiv Sanity Preserver, a tool to help surface the critical papers being published in ML. He must have found the field noisy even then. By 2018, arXiv was receiving 3,000 AI papers every month. Now it's 2023 and nobody's counting any more. Karpathy himself bemoans that it's 'just a little bit out of control.'

It was once possible to stay on top of all the major ML breakthroughs without spending all day reading papers (I was probably pre-puberty at the time). As the field gets ever broader, we can be pickier about what we read, but doing so doesn't reduce the feeling of being totally submerged. All AI discoveries are exciting and we want to know about them, but there's simply not enough time. The stack of papers on my desk grows and grows, and requires pruning.

This is what FOMO feels like, of course. And this particular FOMO is exacerbated by the sense of ignorance you get when you just don't have time to understand one algorithm before you have to start looking at the next. From an academic's perspective, this must be even more stressful: publish now or have your research diminished by a competing paper in two months' time?

Perhaps all blossoming fields are like this. Had better publishing infrastructure and incentives existed during the Golden Age of Aviation, maybe we would see hundreds of thousands of papers and breakthroughs from that time too.

Still, for many people to feel happy in their line of work, it's important that they feel they have achieved, or can achieve, some level of mastery. I don't know if I have enough hours in the day to get there any more.

Antidote: Curate your taste so you don't get burnt out trying to read everything. The forward-forward paper by Geoffrey Hinton is probably worth reading; the 670th paper on metrics for machine translation probably is not.

2. The noise is so loud

For me, part of the joy of AI just two years ago was that my family didn't know much about it. I like working on things which are a bit mysterious — perhaps because I use that as an indicator of future importance. It used to be that AI was reserved for geniuses and therefore I could be a genius if I knew about it while others didn't. It was harder to get imposter syndrome back then.

But then ChatGPT arrived, bringing way too many people along to the party. Everyone and their mother has a story about AI now (inevitably one involving poor prompt engineering) and they all have a take they want to share, because everyone has to have a take.

VC interest is also here, splashing about. For example, at least half of the current YC batch are AI-focused. That so many diverse startups are getting funded may tell us that investors don't have a clear guess as to which companies will succeed. In other words, nobody knows yet which markets will benefit the most from ML, and no major winner has emerged.

Worst of all, though, are the influencers. By pumping out inane templated gunk on a daily basis, they grow the hype and make the FOMO even worse. Many of them seem to have learned engagement-optimising tricks from their crypto days. So you end up reading the threads and making an even scarier observation: this 'AI guy' doesn't seem to have as much technical knowledge as I do, and yet he seems to be doing really well.

Antidote: Stop using the 'For You' tab on Twitter so you only see content from people you actually find interesting. And notice that everyone else being noisy about AI is a signal that they're likely just as confused as you are.

3. The bitter lesson

For as long as increasing computation remains the most effective way of pushing the state of the art in ML, progress in the field will feel a bit tacky and unsatisfying. The 'stack more layers' approach seems to brute-force intelligence in a way that totally ignores the more elegant, psychological dimensions of AI research.

What kind of science is this, where blindly throwing more resources at the problem helps to solve it? What an ugly way of making progress! I would have hoped that the best rewards would go to those with the best ideas, not the biggest wallets; instead, the 'GPU-poor' get left behind.

On one hand, this is a complaint about the aesthetics of ML research. But it has a practical side too. Poorly funded academics can hardly compete with the large labs any more, and you need only compare the FLOPS of a T4 to those of 8xH100s to see why. Also, many AI startups are entirely dependent on the API outputs of larger companies, leaving them tethered and unable to truly innovate.
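To make the gap concrete, here's a back-of-the-envelope comparison. This is a minimal sketch: the FP16 tensor-core figures are approximate public spec-sheet numbers, and real-world throughput varies with precision, sparsity, and interconnect.

```python
# Approximate dense FP16 tensor-core throughput, per public spec sheets.
T4_TFLOPS = 65        # a single NVIDIA T4 (a typical academic budget)
H100_TFLOPS = 989     # a single NVIDIA H100 SXM

node_tflops = 8 * H100_TFLOPS       # one 8xH100 node (a typical lab budget)
ratio = node_tflops / T4_TFLOPS

print(f"8xH100 ~= {node_tflops} TFLOPS vs T4 ~= {T4_TFLOPS} TFLOPS")
print(f"That's roughly a {ratio:.0f}x gap in raw compute")  # ~122x
```

And that's just one node; the largest labs chain together many of them.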

4. Engineers wielding massive power

5. Threat of impending death