Parmy Olson, Columnist

Amazon’s AI Coding Revealed a Dirty Little Secret

The shopping behemoth is the latest firm to fall prey to a software security problem that’s flying under the radar.

Using AI to generate software code can leave companies vulnerable to hacking, as Amazon recently revealed.

Photographer: Michael Nagle/Bloomberg

Coders who use artificial intelligence to help them write software are facing a growing problem, and Amazon.com Inc. is the latest company to fall victim. A hacker was recently able to infiltrate a plugin for an Amazon generative AI assistant after obtaining stolen credentials and making unauthorized changes, including secretly instructing it to delete files from the computers it was used on. The incident points to a gaping hole in the security practices of AI coding tools that has gone largely unnoticed in the race to capitalize on the technology.

One of the most popular uses of AI today is in programming, where developers start writing lines of code and an automated tool fills in the rest. Coders can save hours of debugging and Googling for solutions. Startups Replit, Lovable and Figma have reached valuations of $1.2 billion, $1.8 billion and $12.5 billion respectively, according to market intelligence firm PitchBook, by selling tools designed to generate code, and those tools are often built on pre-existing models such as OpenAI’s ChatGPT or Anthropic’s Claude. Programmers and even lay people can take that a step further, typing natural-language commands into AI tools and letting them write nearly all the code from scratch, a phenomenon known as “vibe coding” that has stoked excitement about a new generation of apps built quickly, and from the ground up, with AI.