Demographic skews in training data create algorithmic errors
Women and people of colour are under-represented and depicted stereotypically
ALGORITHMIC BIAS is often described as a thorny technical problem. Machine-learning models can pick up on almost any pattern in their training data, including patterns that reflect discrimination. Their designers can explicitly prevent such tools from consuming certain types of information, such as race or sex. Nonetheless, the use of correlated variables, like someone’s address, can still cause models to perpetuate disadvantage.
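To see why simply withholding a protected attribute is not enough, consider a minimal sketch in Python. The data are synthetic and the variable names (a made-up "postcode" feature standing in for an address) are illustrative assumptions, not anyone's real pipeline; it uses numpy and scikit-learn. A model trained without any group label still reproduces a historical disparity, because the postcode is correlated with group membership.

```python
# Minimal sketch of proxy discrimination: the model never sees the protected
# attribute, yet a correlated variable (a synthetic "postcode" feature)
# lets it reproduce the disparity baked into historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0 or 1), never shown to the model.
group = rng.integers(0, 2, size=n)

# Postcode correlates strongly with group membership (residential segregation).
postcode = group + rng.normal(0, 0.3, size=n)

# A genuinely relevant feature, independent of group.
skill = rng.normal(0, 1, size=n)

# Historical outcomes penalise group 1 regardless of skill.
label = (skill + 1.0 - 1.5 * group + rng.normal(0, 0.5, size=n) > 0).astype(int)

# Train only on the "neutral" features: postcode and skill.
X = np.column_stack([postcode, skill])
model = LogisticRegression().fit(X, label)

# Average predicted probability of a positive outcome, by group.
pred = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted score = {pred[group == g].mean():.2f}")
# The gap between groups persists even though 'group' was excluded.
```

Running the sketch shows a clear gap in average predicted scores between the two groups, even though the group label was never a feature: the postcode carries the information instead.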
This article appeared in the Graphic detail section of the print edition under the headline “Bias in, bias out”