Send in the Cyborg Marines

Early experiments in soldier-machine symbiosis aren't quite Robocop.

Beta testing?

Source: US Navy via Getty Images

The Pentagon's F-35 Joint Strike Fighter jet is the most expensive piece of military hardware in history. Is it also a centaur?

That's the riddle posed by an intriguing and perplexing interview given over the weekend by Robert O. Work, the deputy secretary of defense, at the Reagan National Defense Forum in California. Work was attempting to describe the cutting-edge technology at the heart of what the Pentagon calls its "third offset," a rethinking of deterrence strategy against China and Russia, which have rapidly closed the technology gap enjoyed by the U.S. military since the 1970s.

But what Work was envisioning has implications well beyond warfare: partnership between man and machine to the point where they function as one. “Human-machine collaboration is allowing a machine to help humans make better decisions faster,” Work told Thom Shanker of the New York Times.

This is not to be confused, he pointed out, with artificial intelligence. "There is an AI bias right now generally in the community," Work said. "Automated systems use algorithms based on old data. This assumes we are up against a thinking adversary that is changing strategies all the time. And we will use machines to help our decision makers make better decisions."

Work compared the initiative to the game of "centaur chess," popularized by world champion Garry Kasparov in the 1990s, in which grandmasters team up with supercomputers and take on competing pairs. In this case, however, the supercomputers fire precision-guided bombs.

"The F-35 is not a fighter plane," Work explained. "It is a flying sensor/computer that sucks in an enormous amount of data, correlates it, analyzes it, and displays it to the pilot on his helmet." Work acknowledged that the plane doesn't turn as well or fly as fast as some of its predecessors, but insisted that is all part of the plan. "We are absolutely confident that the F-35 will be a war-winner," he said. "That is because it is using the machine to make the human make better decisions."

Well, that's one way to justify hundreds of billions in cost overruns. But is there anything more to it?

Work's futurism is certainly sincere, but the baby steps taken so far toward man-machine symbiosis have been wobbly. Spot, a high-tech beast of burden built by Boston Dynamics using Pentagon funds, was field-tested by the Marines last summer after years of development. Don't look for it in combat any time soon. Even at the test stage, the robot was controlled not by the encouraging whistles of the men and women in uniform but by a trailing technician with a PC.

Progress has been equally fitful in the Army's attempt to integrate MQ-1C Gray Eagle drones into units flying Apache attack helicopters in Afghanistan. The idea is that the helicopter crew will eventually control the unmanned planes, making use of their surveillance capabilities and firepower. But the experiment was hampered by communications incompatibilities -- the drones were usually controlled remotely from the ground, not from the cockpit -- and was called off after it was decided the Gray Eagles were in greater need elsewhere.

Frustrations aside, there is much to be said for emphasizing human-machine collaboration as opposed to trying to replace flesh-and-blood fighters. Erik Brynjolfsson and Andrew McAfee, the MIT professors widely seen as the great proselytizers for the next "machine age," would be the first to point out that machines still lack many of the most vital strengths of the human warrior: small-motor dexterity, creativity and social interaction within a team.

Work isn't letting such mundane concerns hold him back: "I'm telling you right now, 10 years from now if the first person through a breach isn't a friggin' robot, shame on us."

And shame on us, too, if we don't consider the full implications. For those worried that Work's military Robocop will go all Terminator on its creators, the idea that the machines won't be fully autonomous should be (slightly) reassuring. Let's face it, artificially intelligent killing machines are a terrifying prospect, something it didn't take a sanctimonious letter from Stephen Hawking and Elon Musk to point out. War is always going to require moral and ethical choices, and at least in Work's future there will be a human intelligence, and conscience, making the final call.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

    To contact the author of this story:
    Tobin Harshaw at tharshaw@bloomberg.net

    To contact the editor responsible for this story:
    Jonathan Landman at jlandman4@bloomberg.net
