Science & technology | Silicon dreamin’

AI models make stuff up. How can hallucinations be controlled?

It is hard to do so without also limiting models’ power

Illustration: a laptop with a confused face repeated in a colourful spiral (Shira Inbar)

It is an increasingly familiar experience. A request for help to a large language model (LLM) such as OpenAI’s ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.


This article appeared in the Science & technology section of the print edition under the headline “Silicon dreamin’”

