AI’s Next Big Thing Is ‘Fake’ Data
More tech firms are using synthetic images to train their AI to be fairer. Get ready for a world awash in artificial identities.
Real but limited. Photographer: Smith Collection/Gado/Archive Photos
Last week Microsoft Corp. said it would stop selling software that guesses a person’s mood by looking at their face. The reason: It could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft’s decision to retire the feature entirely is one way of dealing with the problem.
But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased.
