Microsoft Creates Tools to Stop Users From Tricking Chatbots
- Company’s Copilot recently generated weird, harmful responses
- Defenses are designed to spot and block suspicious activity
New safety features are being built into Azure AI Studio.
Photographer: Jeenah Moon/Bloomberg
Microsoft Corp. is trying to make it harder for people to trick artificial intelligence chatbots into generating weird or harmful responses.
New safety features are being built into Azure AI Studio, which lets developers build customized AI assistants using their own data, the Redmond, Washington-based company said in a blog post on Thursday.