AI Chatbots Not Ready for Election Prime Time, Study Shows

  • Leading chatbots often gave inaccurate answers to election questions
  • Tech companies, startups are working to establish safeguards

The study found that just over half of the answers given by all of the models were inaccurate and 40% were harmful.


In a year when more than 50 countries are holding national elections, a new study shows the risks posed by the rise of artificial intelligence chatbots in disseminating false, misleading or harmful information to voters.

The AI Democracy Projects, which brought together more than 40 experts — including US state and local election officials, journalists (among them one from Bloomberg News) and AI researchers — built a software portal to query five major AI large language models: OpenAI’s GPT-4, Alphabet Inc.’s Gemini, Anthropic’s Claude, Meta Platforms Inc.’s Llama 2 and Mistral AI’s Mixtral. The group developed questions that voters might ask about election-related topics and rated 130 responses for bias, inaccuracy, incompleteness and harm.