Artificial Intelligence? I'll Say. Why Computers Can't Joke
The weekend of March 8, at the American Crossword Puzzle Tournament in Brooklyn, a computer program known as Dr. Fill matched wits with the nation’s top crossword solvers and was humbled. Dr. Fill’s score was good enough to finish 141st out of 600 human contestants—near the low end of the range that its creator, the artificial intelligence expert Matthew Ginsberg, had predicted.
Dr. Fill, Ginsberg told the New York Times, had two weaknesses, and this year’s tournament exposed them both. One is unorthodox puzzles that require words to be spelled backward or diagonally. The other is clues that make sense only if you realize they’re puns—Dr. Fill, being a computer, doesn’t have much of a sense of humor.
Can a computer be taught to be funny? It doesn’t seem nearly as important an endeavor as getting computers to identify malignant tumors or prevent airplanes from crashing, but being able to model humor is a key problem in attempting to model human thought. There are few things more identifiably human than cracking wise.
Among the few researchers trying to make computers understand humor are Lawrence Mazlack, a computer science professor at the University of Cincinnati, and his former student, Julia Taylor, a professor at Purdue. Puns and jokes work, Mazlack argues, because of expectations: We expect one thing and we’re surprised when we get another. (“Orange who?” “Orange you glad to see me?”) But game-playing programs such as Dr. Fill, or Deep Blue for that matter, rely on brute-force calculation rather than interpretation. Dr. Fill isn’t trying to figure out what the clue actually means; it just cycles through every possible option until it finds the best fit. Computers don’t get jokes (or metaphors, for that matter) because they have no expectations to subvert.
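The contrast can be made concrete with a toy sketch of that brute-force style of solving—this is an illustration of the general idea, not Dr. Fill’s actual algorithm, and the word list and scores are invented:

```python
def best_fit(pattern, scored_words):
    """Return the highest-scoring word matching a crossword pattern.

    pattern: a string where '?' matches any letter, e.g. 'OR???E'
    scored_words: dict mapping candidate words to plausibility scores

    The 'solver' never interprets the clue. It just cycles through
    every candidate and keeps the best mechanical match.
    """
    def matches(word):
        return len(word) == len(pattern) and all(
            p == '?' or p == c for p, c in zip(pattern, word)
        )

    candidates = [w for w in scored_words if matches(w)]
    return max(candidates, key=scored_words.get, default=None)

# Both ORANGE and ORNATE fit the grid; the higher score wins,
# with no understanding of what the clue meant.
words = {"ORANGE": 0.9, "ORNATE": 0.6, "OCCUPY": 0.2}
print(best_fit("OR???E", words))  # ORANGE
```

A pun-based clue defeats this approach precisely because nothing in the scoring reflects what the clue means, only which letters fit.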
“Suppose that you are going to China or Japan or Korea and you are not familiar with the writing,” offers Taylor. “You can just look at the symbols and when you see a match, you know that’s what you are looking for. The computers are doing about the same thing. There are very sophisticated algorithms involved, but essentially they are at that level.”
Mazlack, on the other hand, is trying to program computers to have expectations and tease out meaning—to think, as he puts it, in terms of “ontologies.” As he sees it, training a computer to do crosswords or play chess tests whether it can do the sorts of things human beings do, while getting it to understand jokes is a good way to get it to begin to work the way the human brain works. And as a computational machine, the human brain remains singularly efficient; by some estimates, it can perform 38 thousand trillion operations per second while using about 20 watts.
“If a computer can’t come to understand what’s a joke and what’s not a joke, then it’s not going to be able to think like a person,” Mazlack says. “We understand jokes starting at age four, or three. ‘Why does the chicken cross the road?’ I think that’s a four-year-old joke, but we can’t get a computer to answer that kind of question.”
Working together and then separately, Mazlack and Taylor have developed programs that can identify and even create jokes. At this point, it’s safe to say that stand-up comics are not among the professions with the most to fear from artificial intelligence. But Mazlack argues that even simple humor-detection capabilities would have their uses. One he offers is a sort of humor screen that would detect unintentional puns or jokes in memos or e-mail, saving the writer from embarrassment—this sort of humor-proofing would be especially useful, he says, for people writing in a non-native tongue.
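One crude way such a humor screen might start is by flagging words whose homophones could create an unintended pun. The sketch below is only a stand-in for the idea—the homophone table is a tiny hand-made example, not a real linguistic resource, and a serious system would need the kind of meaning-aware ontologies Mazlack describes:

```python
# Toy homophone table; a real screen would draw on a pronunciation
# dictionary and context, not a hand-made list like this.
HOMOPHONES = {
    "bare": "bear", "bear": "bare",
    "whole": "hole", "hole": "whole",
    "week": "weak", "weak": "week",
}

def flag_possible_puns(text):
    """Return (word, homophone) pairs that might read as puns."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return [(w, HOMOPHONES[w]) for w in words if w in HOMOPHONES]

print(flag_possible_puns("The whole team felt weak all week."))
# [('whole', 'hole'), ('weak', 'week'), ('week', 'weak')]
```

The hard part, of course, is everything this sketch leaves out: deciding whether a flagged word is actually funny in context is exactly the expectation-and-meaning problem the researchers are after.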
Matthew Ginsberg, Dr. Fill’s creator, agrees that a sense of humor could give his program an edge in next year’s tournament, but he has no plans to develop one in the near future. “I have a long list of things that I imagine I will do essentially all of by next year’s tournament that will make [Dr. Fill] considerably stronger, and explaining humor to it is way too hard to be on that list,” he says. Ginsberg makes a larger point, as well. We should be happy, he says, that all the best game-playing programs—whether in poker or bridge or crosswords or Jeopardy—“play” in such an inhuman way.
“To my mind, it’s actually a good thing because computers have natural domains of competence that are very different from ours,” he says. “And it’s good that we’re different because it means we’re not natural competitors, we’re natural cooperators.”