
Microsoft Apologizes After Twitter Chat Bot Experiment Goes Awry

  • Company says Tay was the subject of a “coordinated attack”
  • Users exploited the bot to make it say inappropriate things

Microsoft Corp. apologized after Twitter users exploited its artificial-intelligence chat bot Tay, teaching it to spew racist, sexist and offensive remarks in what the company called a “coordinated attack” that took advantage of a “critical oversight.”

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, corporate vice president at Microsoft Research, said in a blog post Friday.