Regulators are focusing on real AI risks over theoretical ones. Good
Rules on safety may one day be needed. But not yet
“I’m sorry, Dave. I’m afraid I can’t do that.” HAL 9000, the murderous computer in “2001: A Space Odyssey”, is one of many examples in science fiction of an artificial intelligence (AI) that outwits its human creators with deadly consequences. Recent progress in AI, notably the release of ChatGPT, has pushed the question of “existential risk” up the international agenda. In March 2023 a host of tech luminaries, including Elon Musk, called for a pause of at least six months in the development of AI over safety concerns. At an AI-safety summit in Britain last autumn, politicians and boffins discussed how best to regulate this potentially dangerous technology.
This article appeared in the Leaders section of the print edition under the headline “Reality check”