Meet the Machines That Know What's Funny
“I’d like to buy a new boomerang please. Also, can you tell me how to throw the old one away?”
Never mind whether you think that joke is funny. Do you think your best friend would like it?
You might think you know the answer; after all, people like each other partly because they make each other laugh. At the very least, you might be confident that a mere machine, equipped with data about how other people react to jokes, couldn’t do better than you in answering the question of what your best friend will find funny.
If so, think again.
Researchers led by Mike Yeomans of Harvard University have found that an automated recommender system -- essentially an algorithm based on a lot of data -- does a lot better than human beings (strangers, friends, family or spouses) at guessing what any individual person will find funny. (The team includes two economists, a behavioral scientist and a computer scientist.) Their paper has massive implications for other domains, including medicine, investment choices, regulation, welfare policy, and even the criminal justice system.
In one of the researchers’ studies, they recruited 61 pairs of people visiting a museum in Chicago. Every pair had come to the museum together, and most were friends or family members. Participants were seated at computer terminals, where they could not see each other; each read the same 12 jokes, presented in random order, and was asked to rate their funniness.
Then they switched computer terminals, and read a random sample of their partner’s actual ratings for four of the jokes. At that point, they were asked to predict their partner’s ratings for the remaining eight. All participants both rated jokes on their own and made forecasts about their partners’ likely ratings. People turned out to be pretty good at predicting their partner’s evaluations.
But they didn’t do nearly as well as a machine recommender system, based on ratings from 454 people (including those in this study and previous ones with the same jokes). The recommender system used an algorithm to model the relationship between people’s ratings of any random sample of four jokes and their likely ratings of the remaining jokes.
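The column doesn’t spell out the team’s exact algorithm, but the general idea -- predict a person’s unseen ratings from the ratings of similar people -- is the core of collaborative filtering. Here is a minimal illustrative sketch, with made-up data standing in for the 454 raters: given a new person’s ratings of four jokes, it finds the most similar past raters and averages their ratings of the remaining eight. The function names and parameters are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical stand-in for the historical data: 454 past raters x 12 jokes,
# each joke scored from 1 to 10. (The real study used actual human ratings.)
rng = np.random.default_rng(0)
past_ratings = rng.integers(1, 11, size=(454, 12)).astype(float)

def predict_ratings(observed, observed_idx, ratings, k=20):
    """Predict a person's ratings of unseen jokes from the k most similar past raters.

    observed: the new person's ratings of a few jokes
    observed_idx: column indices of those jokes
    """
    # Distance between the new person and every past rater, using only the observed jokes.
    dists = np.linalg.norm(ratings[:, observed_idx] - observed, axis=1)
    neighbors = np.argsort(dists)[:k]
    hidden_idx = [j for j in range(ratings.shape[1]) if j not in observed_idx]
    # Average the nearest neighbors' ratings of the remaining jokes.
    return hidden_idx, ratings[neighbors][:, hidden_idx].mean(axis=0)

# Example: a museum visitor rates four of the twelve jokes; predict the other eight.
observed_idx = [0, 3, 5, 9]
hidden_idx, predictions = predict_ratings(
    np.array([8.0, 2.0, 6.0, 7.0]), observed_idx, past_ratings
)
```

The point of the sketch is the structure of the task, not the particular similarity measure: with enough past raters, even a simple nearest-neighbor average can pick up taste patterns that no individual friend has access to.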
Yeomans and his colleagues conducted several other contests between human and machine joke recommendations, with thousands of people, and the machines were always more accurate than human beings. Their evidence strongly suggests that if you want to know what your spouse will find funny, you would likely do best to consult an algorithm.
What makes this startling and important is that what people will find funny seems highly subjective -- a matter of taste, not readily quantified. (After Senator Ted Cruz won the Maine primary, Donald Trump joked, “He should do well in Maine because it’s very close to Canada.” I laughed; my friends didn’t.) If an algorithm can outperform human beings in that unlikely context, machine learning could have benefits in many more settings than we think.
Consider judicial decisions about whether to release prisoners on bail. Judges must weigh the risk not just of flight but also of crime (including violent crime) -- but if they deny people bail, they might impose devastating consequences on innocent human beings. In assessing risks, we might want to trust our judges, who have a great deal of experience and also access to detailed information about particular prisoners. We might think that no algorithm could possibly do as well.
But here again, we would be wrong. A research team led by Cornell University’s Jon Kleinberg, who also coauthored the research on jokes, has found that a bail algorithm, based on a large dataset, outperforms judges by far (measured by the likelihood that those released will commit crimes). The team’s work remains in progress, but a preliminary report suggests that judges are releasing a lot of dangerous people -- and that if judges used the algorithm, they could cut by 20 percent the number of crimes committed by those out on bail (without decreasing the number of releases).
A lot of research points to similar conclusions. Joint replacements are painful (and a waste of money) for patients who die soon after the operation, so people in that position should avoid them; an algorithm can do an excellent job of predicting who is likely to fall into that category. Similarly, if the goal is to identify the credit-worthiness of mortgage applicants, to target health inspections of restaurants, or to see which young people would most benefit from educational interventions, machine learning could have big benefits.
The mounting research raises a legitimate question: What shouldn’t algorithms do? One answer would insist that human beings need to retain the discretion to specify the goals of any particular policy. But when it comes to finding the best way to reach those goals, machines are getting better every day.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.