Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or may contain flaws that hackers can exploit, security researchers say.
There are few ways to know in advance if a particular AI model — a program made up of algorithms that can do such things as generate text, images and predictions — is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence Inc., a machine learning security company that lists the US Defense Department as a client.