Science & technology | AI alignment

How to train your large language model

A new technique is speeding up the process


It is no secret that building a large language model (LLM) requires vast amounts of data. In conventional training, an LLM is fed mountains of text, and encouraged to guess each word before it appears. With each prediction, the LLM makes small adjustments to improve its chances of guessing right. The end result is something that has a certain statistical “understanding” of what is proper language and what isn’t.
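The guess-then-adjust loop described above can be sketched in miniature. The example below is a hypothetical toy, not any real LLM's training code: a bigram predictor over a ten-word corpus that guesses the next word, then nudges its scores towards whatever word actually appeared, loosely mimicking the small adjustments the article describes. All names (`scores`, `predict`, `LEARNING_RATE`) are invented for illustration.

```python
# Toy sketch of next-word training (illustrative only; real LLMs use
# neural networks and gradient descent over billions of parameters).
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# scores[w][v] is the model's current preference for v following w
scores = defaultdict(lambda: defaultdict(float))
LEARNING_RATE = 0.1

def predict(word):
    """Return the model's best guess for the word after `word`."""
    following = scores[word]
    return max(following, key=following.get) if following else None

for _ in range(5):  # several passes over the text
    for prev, nxt in zip(corpus, corpus[1:]):
        guess = predict(prev)
        if guess != nxt:
            # small adjustment: reward the word that actually appeared,
            # and penalise the model's wrong guess
            scores[prev][nxt] += LEARNING_RATE
            if guess is not None:
                scores[prev][guess] -= LEARNING_RATE

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

After a few passes the model's scores reflect the statistics of its tiny corpus, which is the sense in which the article says an LLM acquires a statistical "understanding" of language.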

This article appeared in the Science & technology section of the print edition under the headline “AI boot camp”

From the March 16th 2024 edition

