Nvidia’s AI ‘Guardrails’ Software Aims to Keep Chatbots From Going Rogue

  • Chipmaker has benefited from boom in artificial intelligence
  • New tool attempts to ensure that bots are ‘safe and secure’

Nvidia headquarters in Santa Clara, California.

Photographer: David Paul Morris/Bloomberg

Nvidia Corp., whose powerful chips helped set the stage for the artificial intelligence boom, is now looking to address a major concern surrounding the technology: that AI bots will go rogue and cause harm.

The company is introducing software Tuesday that regulates AI systems based on large language models — the technology underpinning OpenAI’s ChatGPT and other popular bots. The tool, called NeMo Guardrails, can keep chatbots on topic and make them less likely to offer up restricted information. It will also help prevent them from guessing wrongly or taking actions outside their purview, Nvidia said.
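As an illustration of how such rails are defined, here is a minimal sketch in the style of NeMo Guardrails’ Colang configuration language, which pairs example user utterances with canned bot responses to keep a chatbot on topic. The specific intent and flow names below are hypothetical, chosen only for this example:

```
# Sketch of a topical rail: steer the bot away from politics.
# Intent/flow names ("ask politics", etc.) are illustrative, not from Nvidia.

define user ask politics
  "what do you think about the government?"
  "who should I vote for?"

define bot refuse to answer politics
  "I'm sorry, I can't comment on political topics."

define flow politics
  user ask politics
  bot refuse to answer politics
```

In this pattern, the runtime matches an incoming message against the example utterances under `define user`, and if the `politics` flow is triggered, the model’s own generation is bypassed in favor of the scripted refusal.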