If Search Engines Played Jeopardy, Which One Would Win?

The recent victory of IBM’s Watson computer against human competitors in an exhibition round of Jeopardy got computer scientist Stephen Wolfram thinking about how regular search engines might fare in such a match-up. So he took 200,000 known Jeopardy clues and ran them through six search engines (Google, Bing, Ask, Blekko, Wikipedia Search, and Yandex). He excluded known Jeopardy sites from the results, and he didn’t test his own Wolfram Alpha because it is not designed for those kinds of queries.

The search engines did fairly well, though how well depends on how you measure success. Google did slightly better than the rest, with Bing and Ask close behind. On average, Google got the correct answer somewhere on its first results page 69 percent of the time, versus 68 percent for Ask and 63 percent for Bing. Google got the right answer in the title or snippet of the very top result 66 percent of the time, versus 65 percent for Bing (Ask dropped to 51 percent).
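
To make the methodology concrete, here is a minimal sketch in Python of the two measurements described above: whether the answer appears anywhere on the first results page, and whether it appears in the title or snippet of the top hit. The `search` callable and the `Result` type are hypothetical stand-ins, since each engine exposes its own API, and the excluded-sites list is likewise illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Result:
    """One search hit: hypothetical shape, assumed for this sketch."""
    title: str
    snippet: str
    url: str

def score_clue(clue: str,
               answer: str,
               search: Callable[[str], List[Result]],
               excluded: Tuple[str, ...] = ("j-archive.com",)
               ) -> Tuple[bool, bool]:
    """Return (answer on first page, answer in top result's title/snippet).

    `search` stands in for a real search-engine API and is assumed to
    return the first results page as a ranked list of Result objects.
    Known Jeopardy sites (here just an illustrative example) are
    filtered out, as Wolfram did.
    """
    answer = answer.lower()
    results = [r for r in search(clue)
               if not any(site in r.url for site in excluded)]
    if not results:
        return False, False
    on_first_page = any(
        answer in (r.title + " " + r.snippet).lower() for r in results)
    top = results[0]
    in_top_result = answer in (top.title + " " + top.snippet).lower()
    return on_first_page, in_top_result
```

Averaging those two booleans over the 200,000 clues yields exactly the kind of percentages reported above, one pair per engine.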

In comparison, most human players answer about 60 percent of Jeopardy clues correctly, while the top player of all time, Ken Jennings, answered 79 percent correctly. So it is conceivable that a system built on regular search engines could beat most humans. But Wolfram cautions:

Of course, the approach here isn’t really solving the complete Jeopardy problem: it’s only giving pages on which the answer should appear, not giving specific actual answers. One can try various simple strategies for going further. Like getting the answer from the title of the first hit—which with the top search engines actually does succeed about 20% of the time.
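
That last strategy is easy to picture in the same hypothetical harness used above; the suffix-stripping at the end is an illustrative guess at normalization, not something Wolfram describes.

```python
def guess_from_top_title(clue: str,
                         search: Callable[[str], List[Result]]) -> str:
    """Naive strategy Wolfram mentions: return the title of the first
    hit as the answer. Succeeds roughly 20% of the time per his figures.
    """
    results = search(clue)
    if not results:
        return None
    title = results[0].title
    # Titles often end in " - SiteName" or " | SiteName"; drop that part
    # (an assumed cleanup step, not part of Wolfram's description).
    for sep in (" - ", " | "):
        title = title.split(sep)[0]
    return title.strip()
```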

Answering Jeopardy clues correctly and consistently is a hard problem for computers to solve because of all the variations and nuances of human language. Yet “just using a plain old search engine gets surprisingly far,” concludes Wolfram.