Science & technology | Anything that can’t continue, won’t

The bigger-is-better approach to AI is running out of road

If AI is to keep getting better, it will have to do more with less


When it comes to “large language models” (LLMs) such as GPT—which powers ChatGPT, a popular chatbot made by OpenAI, an American research lab—the clue is in the name. Modern AI systems are powered by vast artificial neural networks, bits of software modelled, very loosely, on biological brains. GPT-3, an LLM released in 2020, was a behemoth. It had 175bn “parameters”, as the simulated connections between its artificial neurons are called. It was trained by having thousands of GPUs (specialised chips that excel at AI work) crunch through hundreds of billions of words of text over the course of several weeks. All that computing is thought to have cost at least $4.6m.
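
A rough sense of where a figure like $4.6m comes from can be had with a back-of-envelope calculation, sketched below in Python. The rule of thumb that training takes about six floating-point operations per parameter per token, the 300bn-token corpus size, the per-GPU throughput and the price per GPU-hour are all assumptions made for illustration; only the 175bn-parameter count comes from the text above.

```python
# Back-of-envelope estimate of GPT-3-scale training cost.
# Assumptions (not from the article): ~300bn training tokens, an effective
# throughput of ~30 teraFLOPS per GPU after utilisation losses, and a price
# of ~$1.50 per GPU-hour. Only the 175bn-parameter figure is from the text.

params = 175e9                 # model parameters (from the article)
tokens = 300e9                 # training tokens (assumed order of magnitude)
flops = 6 * params * tokens    # rule of thumb: ~6 FLOPs per parameter per token

gpu_throughput = 30e12         # effective FLOPs per second per GPU (assumed)
gpu_hours = flops / (gpu_throughput * 3600)

price_per_gpu_hour = 1.50      # dollars per GPU-hour (assumed)
cost = gpu_hours * price_per_gpu_hour

print(f"Training compute: {flops:.2e} FLOPs")
print(f"GPU-hours needed: {gpu_hours:,.0f}")
print(f"Estimated cost:   ${cost:,.0f}")
```

Under those assumptions the estimate works out to roughly 3m GPU-hours and about $4.4m, close to the $4.6m figure; spread across a few thousand GPUs, that is on the order of a month of work, consistent with the several weeks of training the article describes.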


This article appeared in the Science & technology section of the print edition of June 24th 2023, under the headline “Time for a diet”
