Microsoft Scraps Some AI Facial-Analysis Tools, Citing Risk of Bias

Software giant will stop selling programs that infer emotional state, gender and age

[Photo: A Microsoft campus ahead of earnings figures. Photographer: David Paul Morris/Bloomberg]

Microsoft Corp. will stop selling artificial intelligence-based facial-analysis software tools that infer a subject’s emotional state, gender, age, mood and other personal attributes after the algorithms were shown to exhibit problematic bias and inaccuracies.

Existing customers can continue using the tools for one year before access expires. The company is also limiting the use of its other facial-recognition programs to ensure the technologies meet Microsoft's ethical AI guidelines. New customers will need to apply for access to facial-recognition features in Microsoft's Azure Face API, Computer Vision and Video Indexer, while current customers have a year to apply for continued access. The changes were outlined with the release of the second update to Microsoft's Responsible AI Standard, in blog posts written by Chief Responsible AI Officer Natasha Crampton and Azure AI Product Manager Sarah Bird.