This Preschool Is for Robots
On the seventh floor of Berkeley’s technology research hall, a bright blue and yellow plastic ray gun sits on a long table, along with wooden spoons, model planes, and a set of square and round pegs. There’s also a stack of those ultra-large Lego blocks kids go nuts for. The toys are all for a spoiled toddler named Brett, who also happens to be a robot.
Brett, a white and gray humanoid with movable arms, looks like a cross between a toaster and a George Foreman grill. The name is an acronym for the Berkeley Robot for the Elimination of Tedious Tasks, part of an ambitious project at the University of California, Berkeley, to develop artificial intelligence that lets machines learn the way humans do. Brett’s mind is still somewhere between the infant and toddler stage, but the machine is picking things up at an astonishing pace.
As part of one experiment, the researchers instruct the robot to twist a cap onto a bottle. Moving with the jerky motions of a determined toddler on a sugar rush, Brett makes a series of lunges trying to jam the cap onto the top of the bottle. After a few failed attempts, the robot pauses. “It’s thinking,” says Sergey Levine, a postdoctoral researcher at the lab. Suddenly, Brett lunges with the cap in its hand but misses the top, placing the cap against the outer lip of the bottle. The determined robot proceeds to screw the cap on anyway, which fills the room with the sound of grating plastic. After a few seconds, it admits defeat. Fortunately, Brett lacks the sensitive temperament of a toddler and, undeterred, tries again.
The robot’s childlike manner isn’t a quirk of its design; it’s intentional. Unlike most industrial robots, which are programmed to complete specific tasks, the Brett project teaches robots to learn using methods based partly on the ways young children discover the world. They repeatedly try to solve problems and adjust their behavior each time to get closer to the goal. Pieter Abbeel, who runs the robotics group at Berkeley, says his research has been partially inspired by watching child psychology tapes, which demonstrate how young children constantly adjust their approaches when solving tasks. “This reinforced my belief that learning is the way to go for robotic control,” Abbeel says.
What makes Brett’s brain tick is a combination of two technologies that have each become fundamental to the AI field: deep learning and reinforcement learning. Deep learning helps the robot perceive the world and its mechanical limbs using a technology called a neural network. Reinforcement learning trains the robot to improve its approach to tasks through repeated attempts. Both techniques have been used for many years; the former powers Google and other companies’ image and speech recognition systems, and the latter is used in many factory robots. While combinations of the two have been tried in software before, the two areas have never been fused so tightly into a single robot, according to AI researchers familiar with the Berkeley project. “That’s been the holy grail of robotics,” says Carlos Guestrin, the chief executive officer at AI startup Dato and a professor of machine learning at the University of Washington.
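The trial-and-error half of that combination can be illustrated with a toy example. The sketch below is not Berkeley’s code and vastly simplifies what Brett does; it shows only the core reinforcement-learning idea, using a REINFORCE-style update on a two-action bandit problem (the payoff probabilities and learning rate are invented for illustration). Actions that happen to earn a reward are made more likely on the next try, which is the same adjust-after-each-attempt loop the article describes.

```python
import math
import random

def softmax(prefs):
    """Turn action preferences into a probability distribution."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0]   # learnable policy parameters, one per action
    payoff = [0.2, 0.8]  # hidden reward probabilities (illustrative)
    for _ in range(steps):
        probs = softmax(prefs)
        # Try an action at random, weighted by current policy.
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < payoff[action] else 0.0
        # Policy-gradient update: nudge preferences so that
        # rewarded actions become more probable next time.
        for a in range(2):
            grad = (1.0 if a == action else 0.0) - probs[a]
            prefs[a] += lr * reward * grad
    return softmax(prefs)

probs = train()
```

After enough attempts, the policy strongly favors the action that pays off more often, without ever being told which one that is. In a system like Brett’s, a deep neural network plays the role of the preference table, mapping camera images and joint positions to motor commands, but the learning signal works the same way.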
After years of AI and robotics research, Berkeley aims to devise a system with the intelligence and flexibility of Rosie from The Jetsons. The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems—and a roomful of toys—to a robot. Since then, the team has published a series of papers that outline a software approach to let any robot learn new tasks faster than traditional industrial machines while being able to develop the sorts of broad know-how for solving problems that we associate with people. These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.
But it will take time for the robots to learn and for the researchers to refine their curriculum. “An academic demonstration is very different from what you want to deploy in a real product, where you don't have six Ph.D. students standing around for the demo,” says Rodney Brooks, the chief technology officer at Rethink Robotics. Many AI researchers, however, say giving robots brains based on deep learning is a necessary next step, and it’s expected to transform industrial machines in the same way it enabled incredible strides in computer-vision applications, such as photo apps from Google and others that can recognize faces and buildings, says Andrej Karpathy, a computer science Ph.D. student at Stanford University and a former intern on Google’s AI teams. When faced with uncertainty, today’s factory robots tend to shut down, which is not always the safest way to respond to a problem, says Avner Goren, a general manager at Texas Instruments. Abbeel says robots need to be able to cope with failure better. To demonstrate, a researcher grabs the robot’s hands and pushes them away from the objects it’s trying to interact with. Like a determined infant eyeing a cracker, the robot pauses for a moment and then starts to move its arms again toward the object.
Berkeley’s researchers aren't the only ones creating kid robots. Last year, Google DeepMind, the search giant’s AI group, developed software that learns to master Atari games without instructions. David Silver, a Google DeepMind researcher, told an audience at a conference this year that the company has started using this learning software to control physical robots. “You want to have a single algorithm that we can just drop in there, like a baby,” Silver said at another conference in June. “You put the baby down in front of some new task, and eventually it kind of plays around with it, figures out how to deal with the toy, and maybe by the end of the day, it’s figured out how to deal with a new piece of its environment that it’s never encountered.” In the past few years, Google has acquired at least seven robotics companies, including two spinoffs of Willow Garage, which made the PR2 robot on which Brett lives. Google spokesman Jason Freidenfelds declined to elaborate on the company’s robotics arm.
The transition to adaptable, learning robots is happening across the industry. Japan’s Fanuc, which makes robots that can assemble products, weld metal, paint walls, and package goods, said on Aug. 20 that it had acquired a stake in the startup Preferred Networks to inject AI into its robots. The startup is also developing software for Toyota Motor and Panasonic, says Justin Clayton, the director of business at Preferred Networks. In November, Swiss robot maker ABB invested in Vicarious, another AI startup. The companies are co-developing a smart robot, says Scott Phoenix, a founder of Vicarious. “ABB could be making way more robots if only someone could make software that works like the human mind,” he says.
Part of the reason the robotics industry is so interested in the type of AI under development at the Berkeley lab is that, unlike most emerging technology, it already works really well, says Abbeel. “Everybody who tries something seems to get things to work beyond what they expected,” he says. “Usually it’s the other way around.” The work has piqued the interest of executives at Dyson, Fujitsu, Siemens, Toyota, and several startups, who have visited the lab, Abbeel says.
Because Brett’s AI is based in part on the inner workings of the human brain, it can lead to unanticipated results. Over the summer, the Berkeley researchers installed a memory system into the robot. Soon they noticed Brett was inclining its arms slightly to the left or right when told to place an object in one of two containers. The researchers, perplexed, studied the robot's movements closely. Eventually, they realized it was doing the robo-equivalent of counting on its fingers. It developed the habit as a shortcut for doing simple addition and subtraction, using its limbs as memory aids instead of relying on the custom software in its head. “It’s almost uncanny,” says Levine.
After several hours of training, Brett is acing his tests. The robot learns how to use a toy hammer made of wood to hook a nail on a box. The researchers, satisfied with Brett’s progress, try to fool it by slowly moving the box away. The robot has never seen this particular object move before, but it still manages to track the motion, match the velocity, and hook the nail with confidence. Abbeel says that’s because the robot has learned to solve its tasks using the same types of trial-and-error approaches that pro sports players, such as Roger Federer, use to perfect their game. The Swiss tennis champion’s serves are streamlined and elegant, unlike those of an amateur player, whose technique is riddled with “quirky motions that happen before they hit the ball.” Federer has practiced enough to know which parts of the pre-serve motion are pointless and which are key to a good hit. Brett is designed not just to figure out how to complete a task but also to master it like a professional athlete, Abbeel says.
Toward the end of the day at Berkeley’s robot preschool, Brett begins to handle the water bottle and cap with increasing confidence. The robot periodically changes its approach, adjusting how it holds the bottle, which angle the other arm approaches from, and how it twists the cap on. “Every five steps or so, it thinks about what it has experienced and updates its behavior,” says Levine. Finally, Brett’s arm whirrs, swoops toward the bottle, and in one smooth motion, sets the cap on evenly, screws it on tightly, and then stops. Ace.
Update, 2:10 p.m., Sept. 3: The story has been updated to include additional details about Google's acquisition of Willow Garage.
(Correction: The original version of this story misstated the name of the Google spokesman. He is Jason Freidenfelds, not Jason Friedenfelds.)