Game Changer

The Digital Activist Taking Human Prejudice Out of Our Machines

Joy Buolamwini founded the Algorithmic Justice League to make people aware of the bias embedded in our networks.
Illustration: Sam Kerr

When Joy Buolamwini was 9 years old, she saw a TV documentary about Kismet, the MIT-built social robot that could interact face-to-face. To the young would-be scientist, the technology was magic. She was mesmerized and resolved to understand it.

But in 2010, while an undergraduate at the Georgia Institute of Technology, Buolamwini hit an algorithmic obstacle. “For a social robot to socialize with a human, it has to be able to detect that human’s face,” she says. The robot she was experimenting with for class could detect her roommate’s light-skinned face, but not Buolamwini’s. The next year, at a lab in Hong Kong, it happened again. “I thought to myself, You know, I assumed this issue would’ve been solved by now,” she says.

Fast-forward six more years, past degrees from MIT and the University of Oxford, which she attended on a Rhodes scholarship, and a Fulbright scholarship in Zambia. Buolamwini is now the founder of the Algorithmic Justice League (AJL), an initiative at the MIT Media Lab to make people aware of the subtle biases lurking in apparently neutral computerized processes. Today, most artificial intelligence systems need to be trained on large data sets, so if your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize. Programmers often share their work in code libraries, meaning a biased algorithm can easily ping from Atlanta to Hong Kong.
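To see how that skew plays out, here is a rough sketch with invented data (synthetic points and scikit-learn, not any system Buolamwini studied): a classifier trained on a sample that is 90 percent one group learns that group's patterns and stumbles on the underrepresented one.

```python
# Hypothetical sketch: a model trained on a demographically skewed dataset
# scores well on the majority group and noticeably worse on the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_axis):
    # Stand-ins for faces: 2-D points whose "detectable" label depends on a
    # different feature axis for each group.
    X = rng.normal(size=(n, 2))
    y = (X[:, signal_axis] > 0).astype(int)
    return X, y

# Skewed training set: 9,000 samples from group A, only 1,000 from group B.
Xa, ya = make_group(9000, signal_axis=0)
Xb, yb = make_group(1000, signal_axis=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out samples reveal the accuracy gap between the two groups.
for name, axis in [("group A (majority)", 0), ("group B (minority)", 1)]:
    X_test, y_test = make_group(2000, signal_axis=axis)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Run it and the majority group scores near-perfectly while the minority group hovers far lower, not because the algorithm is different but because the data it learned from was.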

Buolamwini puts a much-needed human face on the problem of machine bias, says Solon Barocas, who runs Fairness, Accountability, and Transparency in Machine Learning, the academic world’s preeminent conference on the topic. “This field can be very theoretical, but [Buolamwini] really drives home what it might be like to experience this kind of bias. Companies that are in the business of using these techniques can expect that bias will happen unless they give these issues full attention.”

AJL is in the early stages of developing tools to help coders check for bias in facial recognition, medical diagnostics, big-data analysis, and other systems. A model assessing creditworthiness, for instance, might rate a mortgage applicant based in part on her ZIP code—an objective data point that nevertheless reflects entrenched discrimination. Buolamwini’s November 2016 TEDx Talk has racked up more than 695,000 views since TED posted it in March, and in January she beat out 7,300 applicants to win a Search for Hidden Figures professional grant, created last year by 21st Century Fox (which produced the movie Hidden Figures), PepsiCo Inc., and the New York Academy of Sciences. Her ambitions remain large. “We’re using facial analysis as an exemplar to show how we can include more inclusive training data in the first place,” she says. “And those benchmarks can be broadly applied.”
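AJL’s tools aren’t public in this form, but a rough illustration of the kind of audit she describes might look like the following: invented mortgage decisions, grouped by a protected attribute, with approval and false-denial rates reported per group so that proxy effects like ZIP code surface in the numbers.

```python
# Hypothetical audit sketch (invented data, not AJL's actual tooling):
# compare a model's outcomes across groups to flag disparities for review.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy applicant pool: ground-truth qualification plus a group label that a
# "neutral" feature like ZIP code might stand in for.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000, p=[0.7, 0.3]),
    "qualified": rng.integers(0, 2, size=1000),
})

# Simulate a model that wrongly denies about 30% of qualified group-B applicants.
noise = rng.random(len(df))
df["approved"] = np.where(
    df["group"].eq("B") & df["qualified"].eq(1) & (noise < 0.3),
    0,
    df["qualified"],
)

# Per-group approval rate and false-denial rate; a large gap flags the model.
for name, g in df.groupby("group"):
    qualified = g["qualified"] == 1
    false_denials = (qualified & (g["approved"] == 0)).sum() / qualified.sum()
    print(name,
          "approval rate:", round(g["approved"].mean(), 3),
          "false-denial rate:", round(false_denials, 3))
```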
