Why Google Wants to Sell Its Robots: Reality Is Hard


It’s been a week of extremes for Google’s artificial intelligence efforts, as the company luxuriates in the afterglow of winning a high-profile Go match against one of the world’s top players, while it privately tries to sell one of its most visible robotics efforts.

Google’s decision to try to shed its Boston Dynamics robotics group highlights a fundamental research problem: software is far easier to develop and test than hardware. That’s especially true when dealing with artificial intelligence and robotics.

Today’s industrial robots tend to be dumb machines, operating on pre-programmed routines, and are housed in metal cages to stop people walking into their zone of movement and potentially getting harmed. With Boston Dynamics, Google was working on machines that could break out of the rigid confines of the factory and perform a broader range of tasks. That requires dealing with a range of unsolved problems, requiring fundamental research.

The challenges were apparent in an internal meeting held by Google’s robotics leaders in November. According to meeting minutes seen by Bloomberg, executives discussed the viability of AI techniques like teaching robots to do physical tasks, and how the Boston Dynamics group needed to collaborate more with other Google teams. They planned to grapple with larger questions as well: the division’s leader, Jonathan Rosenberg, said the company needed "to have a debate on hydraulics." Google declined to comment. At Google, as in the rest of the industry, there is much excitement about the potential for smart machines -- but still a lot of questions about how, exactly, to build them.

On Feb. 23, Boston Dynamics published a video showing off how its robots could stalk, run, walk and stack boxes. Tens of millions of people viewed it, exhilarated by the prospect of what artificial intelligence could accomplish.

But Boston Dynamics’s creations were not quite as advanced as people assumed. The main problem the company had solved was getting its machines to move in a realistic manner, said a person familiar with the company’s technology, but full autonomy is far away. Marc Raibert, the founder of Boston Dynamics, said as much in an interview with IEEE Spectrum in February, when he acknowledged that in the videos, a human steered the robot via radio during its outside strolls. Indoors, though the robot could stack boxes autonomously, someone had to set it up and tell it to start, he said.

A robot can’t decide to go for a walk on its own, said Rodney Brooks, an artificial intelligence pioneer and founder of Rethink Robotics. "It doesn’t have the intent a dog has." (Rethink makes factory robots that don’t need cages, and can detect big changes in their work environment. "Is that scientifically hard? No. People in labs would have done that 20 years ago," said Brooks. "But it’s gotta work 100 percent of the time.")

Giving a machine intention is a difficult challenge. Software programmers can simulate the problem they’re trying to solve on computers, and progress doesn’t depend on physical movement -- it’s about how fast a computer can simulate those movements.

Google’s DeepMind AI software played hundreds of thousands of rounds of the board game Go in a matter of months. It would take a lot longer to test drive robots taking hundreds of thousands of walks in the woods.

To develop robots, you have two options. You can simulate an environment and a robot in software, hoping the results are accurate enough to load into a machine and watch it walk. Or you can skip the simulation and tinker directly on a robot, learning from the real world -- but that’s awfully slow.
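The gap between the two loops can be caricatured in a few lines of code. This is a back-of-the-envelope sketch with made-up rates -- the step counts and speeds are illustrative assumptions, not figures from Google or any real lab:

```python
# Hypothetical comparison of the two development loops: a physics
# simulator can step a virtual robot far faster than real time, while
# a physical robot moves at exactly real time. Both rates are guesses.

SIM_STEPS_PER_SECOND = 10_000   # assumed simulator speed
REAL_STEPS_PER_SECOND = 1       # hardware runs at real time

def trials_per_day(steps_per_second, steps_per_trial=1_000):
    """How many complete walking trials fit into 24 hours."""
    seconds_per_day = 24 * 60 * 60
    return (steps_per_second * seconds_per_day) // steps_per_trial

sim_trials = trials_per_day(SIM_STEPS_PER_SECOND)
real_trials = trials_per_day(REAL_STEPS_PER_SECOND)
print(f"simulation: {sim_trials:,} trials/day; hardware: {real_trials} trials/day")
```

Under these toy numbers the simulator runs ten thousand walking trials for every one the hardware manages -- which is the whole appeal of simulation, provided the simulated physics matches reality closely enough to transfer.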

Google faces this problem with its self-driving cars, and it tests them both ways. It has real cars drive a few thousand miles a week on real roads, and at the same time it simulates millions of miles a week driven by virtual cars on virtual roads. The idea is that the simulator can test out different scenarios to see how the cars react, and the real world can give Google data -- and problems -- that virtual cars don’t encounter. One time, a car confronted a man in a wheelchair chasing a turkey with a broom. This was not something Google had simulated.

The problem with robots is that they tend to be mechanically more complex than cars. Instead of wheels, you have legs -- and arms, necks, knee joints, and fingers. Simulating all of that accurately can be extremely difficult, but testing out all the different ways you can move the machine in physical reality takes years.

"Rosie the robot, you can’t have it knock over your furniture a hundred thousand times to learn," said Gary Marcus, chief executive officer of a startup AI company called Geometric Intelligence.

Sergey Levine, a researcher at Google, recently worked on a project to tackle this problem. The company programmed 14 robotic arms to spend 3,000 hours learning to pick up different items, teaching each other as they went. The project was a success, but it took months, and it used robot arms rather than an entire body.

"In order to make AI work in the real world and handle all the diversity and complexity of realistic environments, we will need to think about how to get robots to learn continuously and for a long time, perhaps in cooperation with other robots," said Levine. That’s probably the only way to get robots that can handle the randomness of everyday tasks.
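The core idea -- many robots pooling their trial-and-error into one shared body of experience -- can be sketched in miniature. Everything below is a toy stand-in: the real project used camera images and neural networks, while this sketch reduces a grasp to a single made-up "angle" parameter:

```python
# Toy sketch of collective learning: 14 simulated arms attempt grasps,
# every attempt lands in one shared pool, and any arm can then estimate
# a good grasp from everyone's data instead of only its own.
import random

random.seed(0)  # fixed seed so the toy run is repeatable

shared_experience = []  # (grasp_angle, succeeded) pairs from all arms

def attempt_grasp():
    """One grasp attempt; in this toy world, angles near 90 degrees work."""
    angle = random.uniform(0.0, 180.0)
    succeeded = abs(angle - 90.0) < 30.0
    shared_experience.append((angle, succeeded))

# 14 arms, 100 attempts each, all writing to the same pool -- a single
# arm would need 1,400 of its own attempts to gather the same data.
for _trial in range(100):
    for _arm in range(14):
        attempt_grasp()

# Each arm can now exploit the pooled successes, not just its own.
good_angles = [a for a, ok in shared_experience if ok]
best_guess = sum(good_angles) / len(good_angles)
print(f"{len(shared_experience)} attempts pooled, best angle near {best_guess:.0f}")
```

The pooled estimate converges on the "right" angle with a fraction of the wall-clock time any single arm would need -- the same parallelism argument Levine makes for cooperating robots, minus the actual learning machinery.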

Boston Dynamics’s robots need technology that doesn’t exist yet. The software to control them and give them autonomy is still a research problem being worked on by universities around the world. This is likely why Google thought it would take a decade to develop Boston Dynamics’s technology into a commercial product.

Possible acquirers include the Toyota Research Institute, a division of Toyota Motor Corp., and Amazon.com Inc., which makes robots for its fulfillment centers, according to a person familiar with Google’s plans. Toyota declined to comment, and Amazon didn’t respond to requests for comment.

It’s rare to see a company try to build a product that requires such fundamental research in a number of areas, said John Schulman, a researcher with AI group OpenAI. "Having a humanoid robot that goes around and does interesting things in the real world, like maybe cleans up your house, that’s just way beyond the current state of the science."
