It often seems as though Facebook’s main purpose is to remind us continually of how much we have chosen to share with the world about our online behavior—whether we realize it or not. The latest lesson along those lines comes from the social network’s new “graph search,” which sounds at first like a fairly boring feature of interest only to marketers. Like much of what Facebook does, however, it is also a warning sign: If you were counting on certain things about yourself staying not so much private as obscure or hidden from view, those days are effectively over.
For an example of what this means in practice, look no further than a new Tumblr blog started by London-based programmer Tom Scott, entitled “Actual Facebook Graph Searches.” This also sounds somewhat dry and academic, until you take a closer look at some of the things that Facebook makes it trivially easy to search for—things like “Islamic men interested in men who live in Tehran, Iran” (where homosexuality is a crime punishable by death) or “family members of people who live in China and like Falun Gong,” the latter being a religious group whose members are routinely persecuted.
Many of these searches may be figments of Tom Scott’s overactive imagination (at least for now), but the fact is that Facebook’s graph search makes them relatively easy to conduct. Dave Morin, a former Facebook executive who left to start the mobile social network Path, has pointed out that the company has had this kind of interest-graph-powered search for some time, and that this kind of targeting based on “likes,” interests, friends, and followed pages has been available in a different form for advertisers. But it is far more robust and far more public now.
In a FAQ on the blog, Scott says he’s not trying to make any deep arguments about privacy, and his Tumblr blog is subtitled: “Don’t worry, we’ll all be used to this in a few weeks’ time.” But it’s still worth thinking about the implications of Facebook’s graph search, especially given the fact that many people don’t seem to appreciate the nuances of the network’s privacy settings—something that Facebook doesn’t really make simple or easy to figure out. Even Chief Executive Mark Zuckerberg’s sister Randi was recently blindsided by them, so what chance do the rest of us have?
In a recent piece in the Atlantic, philosophy professor Evan Selinger and his co-author, Woodrow Hartzog, argued that in many ways it’s not really helpful to talk about privacy, which is a vague concept in a world of real-time information, “frictionless sharing,” and “data exhaust” (the information we give off as we move around the Internet, often without realizing it). Instead, they argue that what we are really losing is the protection of obscurity—in the sense that information that was technically public before but difficult to find provided a form of privacy through obscurity:
“Obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion’s share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.”
As Selinger notes, the recent publication of a map of New York State’s registered gun owners made much the same point as Scott’s examples of disturbing Facebook graph searches. The information about who has a gun permit was public by default in New York when the map was published (although a new law has been proposed that would make it private), but it was difficult to collect and so hardly anyone bothered. Other kinds of information are also technically public—government databases and so on—but difficult or impossible to extract useful data from.
Of course, the same thing was the case with much of the world’s information before Google came along, and we had to learn to adjust to the fact that “the Internet never forgets”—that the information you posted online years ago (or information that was posted about you by others) without really thinking of the consequences can come back to haunt you. But Facebook has taken that to a whole new level of intimacy, because much of what occurs there seems so ephemeral: a “like,” a follow, a click—things you might not even remember doing.
While they may seem ephemeral, each of these can be as permanent as anything else on the Internet, and just as public, unless you can master the intricacies of what Facebook lets you hide and what it doesn’t (and as Scott notes in his FAQ, you probably shouldn’t rely on that, anyway). As Megan Garber of the Atlantic points out in a post, the social network is essentially constructing a virtual version of you from all those signals—a version that is categorized by all your activities and interests, some of which may be harmless and some of which may not.
It may seem absurd that someone might say they “like” racism or that anyone would actually search for that behavior and make use of it somehow. But if we have learned anything from the era of big data, it is that if the information is available—as dirty or questionable as it might be—someone is going to make use of it, and not always in the way you would like them to.