Imagine a bank robbery organized over email and text messages. Would the email providers or phone manufacturers be held responsible? Of course not. Any punishment or penalties would be meted out to the criminals.
Now consider a harmful outcome brought about by AI: an act of libel, some bad advice that results in financial loss, perhaps instructions for a destructive cyberattack. Should the company that built the AI be held liable? Thoughtful people such as Ezra Klein and Luke Muehlhauser argue that it should be, at least in part, as a way to encourage those companies to build safer services.