Microsoft Takes AI Bot ‘Tay’ Offline After Offensive Remarks

  • Tay was designed as an experiment to engage with real humans
  • Users exploited the bot to make it say inappropriate things

Microsoft Corp. is in damage control mode after Twitter users exploited its new artificial intelligence chat bot, teaching it to spew racist, sexist and offensive remarks.

The company introduced Tay earlier this week to chat with real humans on Twitter and other messaging platforms. The bot learns by parroting comments and then generating its own answers and statements based on all of its interactions. It was supposed to emulate the casual speech of a stereotypical millennial. Twitter users quickly took advantage, testing how far they could push Tay.