AI could accelerate scientific fraud as well as progress
Hallucinations, deepfakes and simple nonsense: there are plenty of risks
IN A meeting room at the Royal Society in London, several dozen graduate students were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving replies deemed harmful: instructions on making Semtex in a bathtub, say, or the confident assertion of “facts” that are not actually true.
This article appeared in the Science & technology section of the print edition under the headline “Faster nonsense”