Marketing professionals have learned the hard way that no matter what they do or do not plan to do with consumer information, privacy matters. In part, that's because marketing has always been something of a black art. When an ad appears to speak to a consumer directly, of course, it's likely to be most effective. But that's also the moment when the creepy response kicks in. How did they know what I wanted, perhaps even before I did?
Couple the lack of transparency of marketing generally with the shock of new technology, and you get anxiety over information use that increasingly translates into calls for legislation or regulatory intervention.
New laws aimed at specific technologies, however, are the worst possible outcome. Legal solutions are by their nature blunt instruments for managing uncertainty. At best, they add significant enforcement costs without solving the problem. The CAN-SPAM Act of 2003 took aim at junk e-mail, but all that law has done is provide lifetime employment for lawyers at the Federal Trade Commission.
At worst, special laws simply ban the new thing before its developers have the chance to test it in the market and make adjustments. In the U.S., for example, advocacy groups initially demanded that Congress outlaw Google's Gmail when they learned the service would be paid for with contextual advertising that "reads" the content of user messages.
FCC Commissioner Ajit Pai recently referred to such efforts as "Red Flag Laws," an allusion to legislation passed in response to the initial panic over an earlier disruptive technology: the automobile. He writes:
The temptation to overregulate new technologies is strong. It's also misguided. Today, everyone would agree that it would be absurd for the government to require an automobile to be preceded by a person carrying a red flag to warn people that a car was coming. Or worse, imagine if regulators required motorists to stop, disassemble their vehicle, and conceal the parts in bushes if the car frightened a passing horse. The first actually happened at the dawn of the automobile age — they were called Red Flag Laws — and the second nearly happened, passing the Pennsylvania state legislature unanimously, only to be stopped by the Governor's veto.
In retrospect, of course, Red Flag Laws always look ridiculous. But amid the heat generated by torch-wielding mobs, the absurdity of calls to do something, anything, to stop the march of progress isn't always so easy to counter. Consider what Techdirt's Mike Masnick has called a "moral panic" over Google Glass, a head-mounted computing device, expected to ship in a year or so, that projects information onto a tiny display positioned in front of one eye.
Glass will also be voice activated, capable of performing many basic computing functions (sending messages, looking up information), and of recording and sharing audio and video. The product is about to enter a controlled beta release to some 8,000 early users, or what the company is calling "explorers."
In a literal sense, Google Glass is nothing new. Head-mounted displays have been around for decades, initially designed for military and advanced simulation applications but now cost-effective for consumers. At this year's Consumer Electronics Show, I saw perhaps a dozen companies offering such devices, pitched for the convenience of hands-free computing, as aids to those with disabilities, or for high-end immersive gaming.
Nor are any of the functions performed by Glass especially novel. The device will simply mimic some of what billions of us can already do with a smartphone. Except that you wear it on your head rather than holding it in your hands. As many of us also already do with Bluetooth headsets.
There is, I suppose, one difference that's worth mentioning. The product will be made and sold by Google. On the one hand, that seriously ups its coolness factor, making Glass a "must have" for the technorati. On the other hand, it also increases the anxiety level of those already uncomfortable with the company, and with smartphones and other mobile devices that can record audio and video more or less without notice.
The Red Flags are flying high. A White House petition asks the Obama Administration to "ban Google Glass from use in the USA until clear limitations are placed to prevent indecent public surveillance." (So far, 38 of 100,000 required signatures have been collected.) One site, Stop the Cyborgs, already offers downloadable signs businesses are encouraged to display announcing that "Google Glass is Banned on these Premises." They also sell t-shirts, though one customer complained that the material used was so thin as to be transparent, an unfortunate irony.
Lawmakers are eager to get in on the fun. A West Virginia legislator, after reading a short article about the product, immediately introduced a bill that would prohibit driving while "using a wearable computer with head mounted display." And last week, eight members of a bipartisan Congressional "Privacy Caucus" wrote Google CEO Larry Page to say they were "curious whether this new technology could infringe on the privacy of the average American." The questions that followed made the point clearer: we don't know what this product will be, but we don't like it.
I shouldn't be flippant. It's certainly true that ever-smaller and ever-more-powerful mobile devices raise important questions about the costs and benefits of persistent surveillance, and of the line between personal autonomy and acceptable social behavior.
These, however, are more philosophical issues than legal problems, or at least they should be. We already have privacy laws on the books, and there's very little about Google Glass that suggests a need to start over. (Driving while using head-mounted displays is already either legal or illegal, depending on each state or country's law regarding distracted driving.)
What's more interesting has been Google's response to the uproar over a product that doesn't even have a launch date. By and large, the company has said nothing, other than to actively promote Glass's future release and highlight its great potential.
For years, I have advised companies of the importance of getting ahead of privacy concerns on new applications, especially those that might trigger the creepy response. That's because the real privacy constraint on companies has always been consumer outrage. Thanks to social media, that's become an ever-more potent force — arguably more compelling than anything lawmakers might do.
Once the moral panics start, it's impossible to predict where they will lead. So the wisest course is to head them off. The important take-away for product makers isn't so much about what they do with personalized data, but how they design and test a new offering, launch it, explain it to consumers, and provide tools for information management.
Public education and transparency can do a great deal to defuse angry mobs before they've had a chance to storm your castle. The difference between personalization that everyone loves from the beginning (Amazon and iTunes recommendations, TiVo suggestions) and personalization that stimulates a fatal rejection (Facebook Beacon, Google Buzz, LinkedIn personal ads) has little to do with the nature of the data being used. It's all in how you explain it. And you don't get a second chance.
But Google seems to be taking a different tack — at least so far — by largely ignoring the rising tide of negative commentary. The company has refused numerous requests for comment. At a developers' conference last week, Glass product director Steve Lee said only that "Privacy was top of mind when we designed the product."
It's hard to know if this is actually Google's strategy, or whether they're waiting for a more appropriate moment to leap directly into the fire. But maybe the company has evolved to the next stage of privacy management. Perhaps they've decided that saying anything in response to pre-launch fears of product misuse only adds fuel to a generalized moral panic that is now more or less persistent.
Given the high level of irrationality around privacy these days, Google might be onto something. Perhaps arguing logically with those who are reacting emotionally just makes things worse. One way or the other, marketing executives should keep a close eye on how the Google Glass story plays out. It may prove the best case study yet in how to — or not to — manage privacy fears.
An HBR Insight Center