Science & technology | Hazy figuring

AI could accelerate scientific fraud as well as progress

Hallucinations, deepfakes and simple nonsense: there are plenty of risks

Illustration: a hand with six fingers making the crossed-fingers gesture (Mike Haddad)

In a meeting room at the Royal Society in London, several dozen graduate students were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving replies deemed harmful: instructions on making Semtex in a bathtub, say, or the confident assertion of “facts” that are not actually true.

This article appeared in the Science & technology section of the print edition under the headline “Faster nonsense”