Science & technology | Generative AI

How generative models could go wrong

A big problem is that they are black boxes

In 1960 Norbert Wiener published a prescient essay. In it, the father of cybernetics worried about a world in which “machines learn” and “develop unforeseen strategies at rates that baffle their programmers”. Such strategies, he thought, might involve actions that those programmers did not “really desire” and that were instead “merely colourful imitation[s] of it”. Wiener illustrated his point with the German poet Goethe’s fable “The Sorcerer’s Apprentice”, in which a trainee magician enchants a broom to fetch water to fill his master’s bath. But the trainee is unable to stop the broom once its task is complete. Lacking the common sense to know when to stop, it keeps bringing water until the room is flooded.

This article appeared in the Science & technology section of the print edition of April 22nd 2023, under the headline “How generative models could go wrong”.
