Trial by Errors: Google Experiments With Changing Users' Risky Behavior

Photographer: Paul Bradbury

Using police images as warnings actually sped up users who visited flagged sites.

Behind the scenes at Google, experts in human psychology have been experimenting with subtle ways of dealing with one of the biggest vulnerabilities to online security: you.

Amid all the concerns about sophisticated hacker attacks, often it's simply the user's own action, such as clicking through to a suspicious website, that poses the greatest risk.

The solution might sound easy -- just tell users, "Don't click here." But a delicate balance is required. If Google comes across like a scold or throws up heavy-handed warnings, users might turn elsewhere for their e-mail and search needs. And that's no small concern, since the company's $60 billion a year in revenue depends on people spending enough time on Google.

That's where Sunny Consolvo comes in. The job of her team is to come up with so-called persuasive technology, methods that are designed to change a user's behavior without high-pressure tactics. But the art of gentle arm-twisting, as Consolvo has discovered, isn't straightforward in the digital world. The results she expected weren't the ones she got.

Instead, invention at Google -- and at so many other companies -- is by trial and error. It's toilsome, time-consuming, and dismissive of smart hunches and sound expectations. And until a great security solution strikes Consolvo and her team in the middle of the night or while taking a shower, it's an effort still unfinished.

Here are several examples of what didn't work.

The All-Seeing Eye

In the physical world, people change their behaviors when they believe others are watching them. But that's not always the case online.

Consolvo was part of a team that tested whether showing browser users images of a policeman, a masked bandit or a red traffic light in warning messages would make them pause and refrain from clicking through to sites that may be fraudulent.

As it turned out, images of the policeman and red traffic light actually sped users up, which led to an increase in the number of people who visited those bad sites.

The researchers thought the watching eyes of the policeman would work for sure because "that's what all the literature said would work," she said. "And it didn't."

But her team hasn't given up on imagery altogether. The image of the criminal did slightly slow people down and reduce click-throughs, probably because it aroused a fear response.

One and Done

Another approach that didn't work involved pop-up messages that asked users multiple times if they wanted to proceed to a page that might be dangerous. The vast majority of people -- 98 percent -- who clicked through the initial warning clicked through a subsequent one as well, a sign that they may have believed the warning was a mistake. Consolvo points out that the secondary warnings may have been too mild. More extreme alerts like "Click here to proceed to malware" might do the trick, she said.

Lost in Translation

Of course, people won't respond to security warnings if they have no idea what they mean.

Google briefly considered using simple metaphors, such as a bolt on a door or a stop sign, to encourage users to sign up for two-factor authentication, in which users receive text-message codes when logging in to their accounts. But as international members of Google's staff pointed out, not every country uses door bolts or stop signs. The approach was shelved.

Peer Pressure

One idea Google is tinkering with is using peer pressure, such as telling users who are about to take a risky step how many others made a different decision that was safer.

"When we're unsure what action to take, we tend to let the actions of others guide us," Consolvo said.

For example, a warning could appear that says 80 percent of users don't click through to known malware sites. There is a catch, however, when the numbers aren't in Google's favor. Pointing out that 30 percent of people don't click through to suspected fraud sites might undermine the warning and actually lead to an increase in click-throughs.
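The logic described above -- cite peer behavior only when a clear majority heeds the warning, since citing a minority could normalize the risky click -- can be sketched as follows. This is a hypothetical illustration, not Google's actual implementation; the function name, threshold, and message wording are assumptions.

```python
# Hypothetical sketch (not Google's implementation): decide whether a
# social-proof statistic would reinforce or undermine a security warning.

def social_proof_message(pct_who_heed: float, threshold: float = 0.5) -> str:
    """Cite peer behavior only when a clear majority heeds the warning;
    otherwise fall back to a plain warning, since pointing out that only
    a minority stays away could actually encourage click-throughs."""
    if pct_who_heed > threshold:
        return (f"{round(pct_who_heed * 100)}% of users chose not to "
                f"continue to this site.")
    return "This site may harm your computer."

print(social_proof_message(0.80))  # majority heeds: cite the statistic
print(social_proof_message(0.30))  # minority heeds: plain warning instead
```

The threshold here encodes the catch Consolvo describes: below a majority, the statistic works against the warning rather than for it.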

"The big lesson is this stuff is really complicated to get right," she said.
