The bigger-is-better approach to AI is running out of road
If AI is to keep getting better, it will have to do more with less
When it comes to “large language models” (LLMs) such as GPT—which powers ChatGPT, a popular chatbot made by OpenAI, an American research lab—the clue is in the name. Modern AI systems are powered by vast artificial neural networks, bits of software modelled, very loosely, on biological brains. GPT-3, an LLM released in 2020, was a behemoth. It had 175bn “parameters”, as the simulated connections between its artificial neurons are called. It was trained by having thousands of GPUs (specialised chips that excel at AI work) crunch through hundreds of billions of words of text over the course of several weeks. All that is thought to have cost at least $4.6m.
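To make the jargon concrete, here is a minimal sketch in plain Python of how such parameters are counted. The layer sizes are hypothetical, picked only for illustration; they bear no relation to GPT-3's actual transformer architecture, and the point is simply that every connection carries a weight and every neuron a bias, and those are what get tallied into figures like 175bn.

```python
# Toy feed-forward network, to illustrate what "parameters" are.
# Layer sizes below are made up for the example; a real LLM such as
# GPT-3 is a transformer with 175bn parameters, vastly larger.
layer_sizes = [512, 1024, 1024, 256]  # neurons per layer (hypothetical)

total_params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between adjacent layers
    biases = n_out           # one bias per neuron in the receiving layer
    total_params += weights + biases

print(f"{total_params:,} parameters")  # prints 1,837,312 for these toy sizes
```

Even this toy network has nearly 2m parameters; scaling the same idea up to 175bn is what demands thousands of GPUs running for weeks.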
This article appeared in the Science & technology section of the print edition under the headline “Time for a diet”