Viral ChatGPT Spurs Concerns About Propaganda and Hacking Risks
- Hackers may develop phishing emails and their own AI models
- Defenders say chatbot also likely to help them fend off hacks
Ever since OpenAI’s viral chatbot was unveiled late last year, detractors have lined up to flag potential misuse of ChatGPT by email scammers, bots, stalkers and hackers.
The latest warning is particularly eye-catching: It comes from OpenAI itself. Two of its policy researchers were among the six authors of a new report that investigates the threat of AI-enabled influence operations. (One of them has since left OpenAI.)