AI models make stuff up. How can hallucinations be controlled?
It is hard to do so without also limiting models’ power
It is an increasingly familiar experience. A request for help to a large language model (LLM) such as OpenAI’s ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.
This article appeared in the Science & technology section of the print edition under the headline “Silicon dreamin’”