The Watson internet search engine is a destructive revolution in the concept of internet search. Once people can get true, up-to-date knowledge in the online world, they will no longer accept any advertisement. Online advertisement and marketing will be destroyed by the Watson internet search engine: internet users will know true, updated knowledge about anything in the world, and they will not look at any advertisement. The same revolution holds for political campaigns. Nobody will look at what is advertised by politicians campaigning to be elected, so our politics will be freed from money. There will be no advertisement or marketing anywhere in the online world. The Watson search engine will kill the money concept totally, and this will be effective in the outside world too: nobody will believe anything other than the updated, true knowledge on the Watson search engine. Because the Watson search engine will give fast, very concise answers in a report window, nobody will want to look through an endless list of websites. Websites will become useless, and the web-building concept will become meaningless and be killed too. Because of this, search engines like Google, Yahoo, and Bing, and the developers who make money from web building and marketing, will fight against the Watson search engine revolution. Big businesses that manipulate political life with huge money will also fight against a Watson revolution. But internet users who seek only true, last-minute updated knowledge will benefit from this Watson revolution. The Watson internet will transform our world, free it from slavery and corruption, and lift mankind into a nexgen civilization. Our world will be free of wars, diseases, hunger, poverty, crises, dictators, corruption, and crimes, and of the dangers of cosmic collision and radiation too. That is why I am seeking partners to make a Watson search engine.
But I know that partners will want to know my personal history, which is why I want to make a documentary about my past and present life, compiling previous television films and press coverage of my fight against persecution and human rights abuses by the military regime and the Ministry of Foreign Affairs of Turkey. I am looking for funding for this documentary on the basis of supporting the causes of democracy, human rights, and nexgen civilization. I have put some information about Watson below. Always visit my blogsite for more and updated info at:
Kurzweil: Why IBM’s Jeopardy Victory Matters
The victory of the Watson Supercomputer over two Jeopardy! champions is one small step for IBM, one giant leap for computerkind.
Watson Takes on Jeopardy
If you watch Watson’s performance, it appears to be at least as good as the best Jeopardy! players at understanding the nature of the question (or I should say the answer, since Jeopardy! presents the answer and asks for the question, which I always thought was a little tedious). Watson is able to then combine this ability to understand the level of language in a Jeopardy! query with a computer’s innate ability to accurately master a vast corpus of knowledge. I’ve always felt that once a computer masters a human’s level of pattern recognition and language understanding, it would inherently be far superior to a human because of this combination.
We don’t know yet whether Watson will win this particular tournament, but it won the preliminary round and the point has been made, regardless of the outcome. There were chess machines before Deep Blue that just missed defeating the world chess champion, but they kept getting better and passing the threshold of defeating the best human was inevitable. The same is true now with Jeopardy!
Yes, there are limitations to Jeopardy! Like all games, it has a particular structure and does not probe all human capabilities, even within understanding language. However, it is going to be more difficult to seriously argue that there are human tasks that computers will never achieve. Jeopardy! does involve understanding complexities of humor, puns, metaphors and other subtleties. Computers are also advancing on myriad other fronts, from driverless cars (Google’s cars have driven 140,000 miles through California cities and towns without human intervention) to the diagnosis of disease.
Already commentators are beginning to point out the limitations of Jeopardy!—for example, that the short length of the queries limits their complexity. For those who would like to minimize Watson’s abilities, I’ll add the following. When human contestant Ken Jennings selects the “Chicks dig me” category, he makes a joke that is outside the formal game by saying “I’ve never said this on TV, ‘chicks dig me.’” Later on, Watson says, “Let’s finish Chicks Dig Me.” That’s also pretty funny and the audience laughs, but it is clear that Watson is clueless as to the joke it has inadvertently made.
However, Watson was never asked to make commentaries, humorous or otherwise, about the proceedings. It is clearly capable of dealing with a certain level of humor within the queries. If Watson were suitably programmed, I believe that it could make appropriate and humorous comments also about the situation it is in.
Watson runs on 90 servers, although it does not go out to the Internet. When will this capability be available on your PC? It was only five years between Deep Blue in 1997, which was a specialized supercomputer, and Deep Fritz in 2002, which ran on eight personal computers, and did about as well. This reduction in the size and cost of a machine that could play world-champion level chess was due both to the ongoing exponential growth of computer hardware and to improved pattern recognition software for performing the key move-countermove tree-pruning decision task. Computer price-performance is now doubling in less than a year, so 90 servers would become the equivalent of one in about seven years. Since a server is more expensive than a typical personal computer, we could consider the gap to be about 10 years.
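The arithmetic behind that estimate is easy to check. This is a minimal sketch, assuming price-performance doubles exactly once per year (Kurzweil says "less than a year", so this is the conservative end):

```python
import math

servers = 90

# How many doublings of price-performance until the work of 90 servers
# fits in one machine of the same cost? log2(90) ≈ 6.49 doublings.
doublings_needed = math.log2(servers)
print(round(doublings_needed, 2))  # 6.49

# At one doubling per year, that is about 7 years.
years = math.ceil(doublings_needed)
print(years)  # 7
```

At faster-than-yearly doubling the gap shrinks below 7 years, which is why Kurzweil rounds the server-versus-PC comparison to about 10 years.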
But the trend is definitely moving towards cloud computing, in which supercomputer capability will be available in bursts to anyone, and Watson-like capability will be available to the average user much sooner. I do expect the type of natural language processing we see in Watson to show up in search engines and other knowledge retrieval systems over the next five years.
Watson: supercharged search engine or prototype robot overlord?
February 17, 2011 by Ben Goertzel
My initial reaction to reading about IBM’s “Watson” supercomputer and software was a big fat ho-hum. “OK,” I figured, “a program that plays “Jeopardy!” may be impressive to Joe Blow in the street, but I’m an AI guru so I know pretty much exactly what kind of specialized trickery they’re using under the hood. It’s not really a high-level mind, just a fancy database lookup system.”
But while that cynical view is certainly technically accurate, I have to admit that when I actually watched Watson play “Jeopardy!” on TV — and beat its human opponents — I felt some real excitement … and even some pride for the field of AI. Sure, Watson is far from a human-level AI, and doesn’t have much general intelligence. But even so, it was pretty bloody cool to see it up there on stage, defeating humans in a battle of wits created purely by humans for humans — playing by the human rules and winning.
I found that Watson’s occasional really dumb mistakes made it seem almost human. If the performance had been perfect, there would have been no drama — but as it was, there was a bit of a charge in watching the computer come back from temporary defeats induced by the limitations of its AI. Even more so because I’m wholly confident that, 10 years from now, Watson’s descendants will be capable of doing the same thing without any stupid mistakes.
And in spite of its imperfections, by the end of its three day competition against human “Jeopardy!” champs Ken Jennings and Brad Rutter, Watson had earned a total of $77,147, compared to $24,000 for Jennings and $21,600 for Rutter. When Jennings graciously conceded defeat — after briefly giving Watson a run for its money a few minutes earlier — he quoted the line “And I, for one, welcome our new robot overlords.”
In the final analysis, Watson didn’t seem human at all — its IBM overlords didn’t think to program it to sound excited or to celebrate its victory. While the audience cheered Watson, the champ itself remained impassive, precisely as befitting a specialized question-answering system without any emotion module.
What does Watson Mean for AI?
But who is this impassive champion, really? A mere supercharged search engine, or a prototype robot overlord?
A lot closer to the former, for sure. Watson 2.0, if there is one, may make fewer dumb mistakes — but it’s not going to march out of the “Jeopardy!” TV studio and start taking over human jobs, winning Nobel Prizes, building femtofactories and spawning Singularities.
But even so, the technologies underlying Watson are likely to be part of the story when human-level and superhuman AGI robots finally do emerge.
“Jeopardy!” doesn’t have the iconic status of Chess or Go, but in some ways it cuts closer to the heart of human intelligence, focusing as it does on the ability to answer commonsense questions posed in human language. But still, succeeding at “Jeopardy!” requires a fairly narrow sort of natural language understanding — and understanding this is critical to understanding what Watson really is.
Watson is a triumph of the branch of AI called “natural language processing” (NLP), which combines statistical analysis of text and speech with hand-crafted linguistic rules to make judgments based on the syntactic and semantic structures implicit in language. Watson is not an intelligent autonomous agent like a human being, which reads information, incorporates it into its holistic world-view, and understands each piece of information in the context of its own self, its goals, and the world. Rather, it’s an NLP-based search system — a purpose-specific system that matches the syntactic and semantic structures in a question with comparable structures found in a database of documents, and in this way tries to find answers to the questions in those documents.
Looking at some concrete “Jeopardy!” questions may help make the matter clearer; here are some random examples I picked from an online archive:
- This -ology, part of sociology, uses the theory of differential association (i.e., hanging around with a bad crowd)
- “Whinese” is a language they use on long car trips
- The motto of this 1904-1914 engineering project was “The land divided, the world united”
- Built at a cost of more than $200 million, it stretches from Victoria, B.C. to St. John’s, Newfoundland
- Jay Leno on July 8, 2010: The “nominations were announced today… there’s no ‘me’ in” this award
(Answers: criminology, children, the Panama Canal, the Trans-Canada Highway, the Emmy Awards.)
It’s worth taking a moment to think about these in the context of NLP-based search technology.
Question 1 stumped human “Jeopardy!” contestants on the show, but I’d expect it to be easier for an NLP based search system, which can look for the phrase “differential association” together with the morpheme “ology.”
Question 2 is going to be harder for an NLP based search system than for a human … but maybe not as hard as one might think, since the top Google hit for “whine ‘long car trip’ “ is a page titled Entertain Kids on Car Trip, and the subsequent hits are similar. The incidence of “kids” and “children” in the search results seems high. So the challenge here is to recognize that “whinese” is a neologism and apply a stemming heuristic to isolate “whine.”
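A crude version of that stemming heuristic is easy to sketch. The vocabulary and suffix list below are invented stand-ins; the point is just that an out-of-vocabulary word like "whinese" can be reduced to a known stem by stripping candidate suffixes:

```python
KNOWN_WORDS = {"whine", "child", "car", "trip"}  # stand-in vocabulary
SUFFIXES = ["ese", "ish", "ness", "ing"]         # stand-in suffix list

def guess_stem(word):
    """Strip candidate suffixes from an unknown word until a known stem appears."""
    if word in KNOWN_WORDS:
        return word
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            stem = word[: -len(suffix)]
            if stem in KNOWN_WORDS:
                return stem
            # handle stems whose final 'e' was absorbed, e.g. "whin" + "ese"
            if stem + "e" in KNOWN_WORDS:
                return stem + "e"
    return None

print(guess_stem("whinese"))  # whine
```

Real systems use statistically trained stemmers rather than a hand-written list, but the shape of the trick is the same.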
Questions 3 and 4 are probably easier for an NLP based search system with a large knowledge base than for a human, as they contain some very specific search terms.
Question 5 is one that would be approached in a totally different way by an NLP based search system than by a human. A human would probably use the phonological similarity between “me” and “Emmy” (at least that’s how I answered the question). The AI can simply search the key phrases, e.g. “Jay Leno July 8, 2010 award”, and then many pages about the Emmys come up.
Now of course a human “Jeopardy!” contestant is not allowed to use a Web search engine while playing the show — this would be cheating! If this were allowed, it would constitute a very different kind of game show. The particular humans who do well at “Jeopardy!” are those with the capability to read a lot of text containing facts and remember the key data without needing to look it up again. However, an AI like Watson has a superhuman capability to ingest text from the Web or elsewhere and store it internally in a modified representation, without any chance of error or forgetting — just like you can copy a file from one computer to another without any mistakes, unless there’s an unusual hardware error like a file corruption.
So Watson can grab a load of “Jeopardy!”-relevant Web pages or similar documents in advance and store the key parts precisely in its memory, to use as the basis for question answering. And it can then do a rough (though somewhat more sophisticated) equivalent of searching in its memory for “whine ‘long car trip’ “ or “Jay Leno July 8, 2010 award” and finding the multiple results, and then statistically analyzing these multiple results to find the answer.
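The "statistically analyzing these multiple results" step can be sketched as simple candidate counting. The snippets below are invented stand-ins for retrieved documents; the crude idea is that the capitalized phrase occurring most often across results is a plausible answer:

```python
from collections import Counter
import re

# Invented snippets standing in for documents retrieved for a clue.
snippets = [
    "Ships transit the Panama Canal between two oceans.",
    "Construction of the Panama Canal began in 1904.",
    "Its motto was the land divided, the world united.",
]

def top_candidate(snippets):
    """Count capitalized multi-word phrases across snippets; return the most common."""
    counts = Counter()
    for text in snippets:
        for match in re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+", text):
            counts[match] += 1
    return counts.most_common(1)[0][0] if counts else None

print(top_candidate(snippets))  # Panama Canal
```

Watson's actual aggregation is far more sophisticated (it scores candidates with many evidence sources), but frequency across retrieved text is the root intuition.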
A human, by contrast, answers many of these questions based on much more abstract representations, rather than by consulting an internal index of precise words and phrases.
Knowing this, you can understand how Watson made the stupid mistakes it did — like its howler, on the second night, of thinking Toronto was a US city. In that instance, the Final “Jeopardy!” category was “U.S. Cities,” and the clue was: “Its largest airport is named for a World War II hero, its second largest for a World War II battle.” Watson produced the odd response: “What is Toronto??????” — the question marks indicating that it had very low confidence in the response.
How could it choose Toronto in a category named “U.S. Cities”? Because its statistical analysis of prior “Jeopardy!” games told it that the category name was sometimes misleading — and because there is in fact a small US city named “Toronto.”
Of course, any human intelligent enough to succeed at “Jeopardy!” at all would have the common sense to know that if a city has a “largest airport,” it’s not going to be a small town like Toronto, Ohio. But Watson doesn’t work by common sense; it works by brute-force lookup against a large knowledge repository.
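Watson's willingness to override its own category can be pictured as weighted evidence combination. Every number and weight below is invented for illustration; the point is only that a strong textual match can outweigh a soft category prior, and that the winning score can still be low (hence the question marks):

```python
def combine(match_score, category_fit, w_match=0.8, w_category=0.2):
    """Blend evidence scores; the category name is weighted weakly because
    prior games showed it is sometimes misleading."""
    return w_match * match_score + w_category * category_fit

# Invented scores: "Toronto" matches the airport clue text well but fits
# the "U.S. Cities" category poorly; a rival U.S. city does the reverse.
toronto = combine(match_score=0.6, category_fit=0.1)
chicago = combine(match_score=0.3, category_fit=0.9)
print(round(toronto, 2), round(chicago, 2))  # 0.5 0.42
```

With these toy weights Toronto wins despite the category mismatch, yet its absolute confidence stays low, mirroring Watson's hedged "What is Toronto??????".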
Both the Watson strategy and the human strategy are valid ways of playing “Jeopardy!” But, the human strategy involves skills that are fairly generalizable to many other sorts of learning (for instance, learning to achieve diverse goals in the physical world), whereas the Watson strategy involves skills that are only extremely useful for domains where the answers to one’s questions already lie in knowledge bases someone else has produced.
The difference is as significant as that between Deep Blue’s approach to chess, and Garry Kasparov’s approach. Deep Blue and Watson are specialized and brittle; Kasparov, Jennings and Rutter are flexible, adaptive agents. If you change the rules of chess a bit (say, tweaking it to be Fischer random chess), Deep Blue has got to be reprogrammed a bit, but Kasparov can adapt. If you change the scope of “Jeopardy!” to include different categories of questions, Watson would need to be retrained and retuned on different data sources, but Jennings and Rutter could adapt.
And general intelligence in everyday human environments — or in contexts like doing novel science or engineering — is largely about adaptation, about creative improvisation in the face of the fundamentally unknown, not just about performing effectively within clearly-demarcated sets of rules.
Wolfram on Watson
Stephen Wolfram, the inventor of Mathematica and Wolfram|Alpha, wrote a very clear and explanatory blog post on Watson recently, containing an elegant diagram contrasting Watson with his own Wolfram|Alpha system.
In his article he also gives some interesting statistics on search engines and “Jeopardy!,” showing that a considerable majority of the time, major search engines contain the answers to the “Jeopardy!” questions in the first few pages. Of course, this doesn’t make it trivial to extract the answers from these pages, but it nicely complements the qualitative analysis I gave above where I looked at 5 random “Jeopardy!” questions, and helps give a sense of what’s really going on here.
Neither Watson nor Alpha uses the sort of abstraction and creativity that the human mind does, when approaching a game like “Jeopardy!” Both systems use pre-existing knowledge bases filled with precise pre-formulated answers to the questions they encounter. The main difference between these two systems, as Wolfram observes, is that Watson answers questions by matching them against a large database of text containing questions and answers in various phrasings and contexts, whereas Alpha deals with knowledge that has been imported into it in structured, non-textual form, coming from various databases, or explicitly entered by humans.
Kurzweil on Watson
Ray Kurzweil has written glowingly of Watson as an important technology milestone:
Indeed no human can do what a search engine does, but computers have still not shown an ability to deal with the subtlety and complexity of language. Humans, on the other hand, have been unique in our ability to think in a hierarchical fashion, to understand the elaborate nested structures in language, to put symbols together to form an idea, and then to use a symbol for that idea in yet another such structure. This is what sets humans apart.
That is, until now. Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.
I understand where Kurzweil is coming from, but nevertheless, this is a fair bit stronger statement than I’d make. As an AI researcher myself I’m quite aware of all the subtlety that goes into “thinking in a hierarchical fashion,” “forming ideas,” and so forth. What Watson does is simply to match question text against large masses of possible answer text — and this is very different from what an AI system will need to do to display human-level general intelligence. Human intelligence has to do with the synergetic combination of many things, including linguistic intelligence but also formal non-linguistic abstraction, non-linguistic learning of habits and procedures, visual and other sensory imagination, creativity of new ideas only indirectly related to anything heard or read before, etc. An architecture like Watson barely scratches the surface!
But Ray Kurzweil knows all this about the subtlety and complexity of human general intelligence, and the limited nature of the “Jeopardy!” domain — so why does Watson excite him so much?
Although Watson is “just” an NLP-based search system, it’s still not a trivial construct. Watson doesn’t just compare query text to potential-answer text, it does some simple generalization and inference, so that it represents and matches text in a somewhat abstracted symbolic form. The technology for this sort of process has been around a long time and is widely used in academic AI projects and even a few commercial products — but, the Watson team seems to have done the detail work to get the extraction and comparison of semantic relations from certain kinds of text working extremely well. I can quite clearly envision how to make a Watson-type system based on the NLP and reasoning software currently working inside our OpenCog AI system — and I can also tell you that this would require a heck of a lot of work, and a fair bit of R&D creativity along the way.
Kurzweil is a master technology trendspotter, and he’s good at identifying which current developments are most indicative of future trends. The technologies underlying Watson aren’t new, and don’t constitute much direct progress toward the grand goals of the AI field. What they do indicate, however, is that the technology for extracting simple symbolic information from certain sorts of text, using a combination of statistics and rules, can currently be refined into something highly functional like Watson, within a reasonably bounded domain.
Granted, it took an IBM team four years to perfect this, and granted, “Jeopardy!” is a very narrow slice of life — but still, Watson does show that semantic information extraction technology has reached a certain level of maturity. And while Watson’s use of natural language understanding and symbol manipulation technology is extremely narrowly focused, the next similar project may be less so.
Today “Jeopardy!,” Tomorrow the World?
Am I as excited about Watson as Ray Kurzweil’s article suggests? In spite of the excitement I felt at watching Watson’s performance — no, not really. Watson is a fantastic technical achievement, and should also be a publicity milestone roughly comparable to Deep Blue’s chess victory over Kasparov. But question answering doesn’t require human-like general intelligence — unless getting the answers involves improvising in a conceptual space not immediately implied by the available information … which is of course not the case with the “Jeopardy!” questions.
Ray’s response does contain some important lessons, such as the value of paying attention to the maturity levels of technologies, and what the capabilities of existing applications imply about this, even if the applications themselves aren’t so interesting or have obvious limitations. But it’s important to remember the difference between the “Jeopardy!” challenge and other challenges that would be more reminiscent of human-level general intelligence, such as
- Holding a wide-ranging English conversation with an intelligent human for an hour or two
- Passing the third grade, via controlling a robot body attending a regular third grade class
- Getting an online university degree, via interacting with the e-learning software (including social interactions with the other students and teachers) just as a human would do
- Creating a new scientific project and publication, in a self-directed way from start to finish
What these other challenges have in common is that they require intelligent response to a host of situations that are unpredictable in their particulars — so they require adaptation and creative improvisation, to a degree that highly regimented AI architectures like Deep Blue or Watson will never be able to touch.
Some AI researchers believe that this sort of artificial general intelligence will eventually come out of incremental improvements to “narrow AI” systems like Deep Blue, Watson and so forth. Many of us, on the other hand, suspect that Artificial General Intelligence (AGI) is a vastly different animal (and if you want to get a dose of the latter perspective, show up at the AGI-11 conference on Google’s campus in Mountain View this August). In this AGI-focused view, technologies like those used in Watson may ultimately be part of a powerful AGI architecture, but only when harnessed within a framework specifically oriented toward autonomous, adaptive, integrative learning.
But … well … even so, it was pretty damn funky watching an audience full of normal-looking, non-AI-geek people sitting there cheering for the victorious intellectual accomplishments of a computer!
Reprinted with permission from H+ Magazine
Jeopardy, IBM, and Wolfram|Alpha
January 26, 2011
About a month before Wolfram|Alpha launched, I was on the phone with a group from IBM, talking about our vision for computable knowledge in Wolfram|Alpha. A few weeks later, the group announced that they were going to use what they had done in natural language processing to try to make a system to compete on Jeopardy.
I thought it was a brilliant way to showcase their work—and IBM’s capabilities in general. And now, a year and a half later, IBM has built an impressive level of anticipation for their upcoming Jeopardy television event. Whatever happens (and IBM’s system certainly should be able to win), one thing is clear: what IBM is doing will have an important effect in changing peoples’ expectations for how they might be able to interact with computers.
When Wolfram|Alpha was launched, people at first kept on referring to it as a “new search engine”—because basically keyword search was the only model they had for how they might find information on a large scale. But IBM’s project gives a terrific example of another model: question answering. And when people internalize this model, they’ll be coming a lot closer to realizing what’s possible with what we’re building in Wolfram|Alpha.
So what really is the relation between Wolfram|Alpha and the IBM Jeopardy project?
IBM’s basic approach has a long history, with a lineage in the field of information retrieval that is in many ways shared with search engines. The essential idea is to start with textual documents, and then to build a system to statistically match questions that are asked to answers that are represented in the documents. (The first step is to search for textual matches to a question—using thesaurus-like and other linguistic transformations. The harder work is then to take the list of potential answers, use a diversity of different methods to score them, and finally combine these scores to choose a top answer.)
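Wolfram's parenthetical describes a three-stage pipeline: retrieve candidate answers, score each with several different methods, then combine the scores to pick a winner. A toy version, with all candidates, scorers, and weights invented for illustration:

```python
# Toy candidates for a clue; in a real system these come from text retrieval.
candidates = ["Panama Canal", "Suez Canal", "Erie Canal"]

# Two invented scoring methods, each returning a score in [0, 1].
def score_keyword_overlap(c):
    return {"Panama Canal": 0.9, "Suez Canal": 0.5, "Erie Canal": 0.2}[c]

def score_date_match(c):
    return {"Panama Canal": 0.8, "Suez Canal": 0.1, "Erie Canal": 0.1}[c]

# (scorer, weight) pairs; real systems learn these weights from past games.
scorers = [(score_keyword_overlap, 0.6), (score_date_match, 0.4)]

def best_answer(candidates, scorers):
    """Combine several scoring methods into one weighted score per candidate."""
    totals = {c: sum(w * f(c) for f, w in scorers) for c in candidates}
    return max(totals, key=totals.get)

print(best_answer(candidates, scorers))  # Panama Canal
```

Watson's real version uses hundreds of scorers and machine-learned combination weights, but the skeleton is this same retrieve-score-combine loop.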
Early versions of this approach go back nearly 50 years, to the first phase of artificial intelligence research. And incremental progress has been made—notably as tracked for the past 20 years in the annual TREC (Text Retrieval Conference) question answering competition. IBM’s Jeopardy system is very much in this tradition—though with more sophisticated systems engineering, and with special features aimed at the particular (complex) task of competing on Jeopardy.
Wolfram|Alpha is a completely different kind of thing—something much more radical, based on a quite different paradigm. The key point is that Wolfram|Alpha is not dealing with documents, or anything derived from them. Instead, it is dealing directly with raw, precise, computable knowledge. And what’s inside it is not statistical representations of text, but actual representations of knowledge.
The input to Wolfram|Alpha can be a question in natural language. But what Wolfram|Alpha does is to convert this natural language into a precise computable internal form. And then it takes this form, and uses its computable knowledge to compute an answer to the question.
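The pipeline Wolfram describes, natural language in, a precise internal form, then computation over curated knowledge, can be sketched at toy scale. The grammar, the internal form, and the tiny knowledge base below are all invented for illustration and bear no relation to Wolfram|Alpha's actual internals:

```python
import re

# Invented knowledge base of computable facts (property, entity) -> value.
KNOWLEDGE = {
    ("population", "france"): 65_000_000,
    ("population", "canada"): 38_000_000,
}

def parse(question):
    """Convert a narrow class of questions into a precise internal form."""
    m = re.match(r"what is the (\w+) of (\w+)\??", question.lower())
    return (m.group(1), m.group(2)) if m else None

def answer(question):
    """Parse to the internal form, then compute/look up over the knowledge base."""
    query = parse(question)
    return KNOWLEDGE.get(query) if query else None

print(answer("What is the population of France?"))  # 65000000
```

The contrast with the IBM approach is visible even at this scale: the answer comes from a structured fact, not from matching the question against any document text.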
There’s a lot of technology and new ideas that are required to make this work. And I must say that when I started out developing Wolfram|Alpha I wasn’t at all sure it was going to be possible. But after years of hard work—and some breakthroughs—I’m happy to say it’s turned out really well. And Wolfram|Alpha is now successfully answering millions of questions on the web and elsewhere about a huge variety of different topics every day.
And in a sense Wolfram|Alpha fully understands every answer it gives. It’s not somehow serving up pieces of statistical matches to documents it was fed. It’s actually computing its answers, based on knowledge that it has. And most of the answers it computes are completely new: they’ve never been computed or written down before.
In IBM’s approach, the main part of the work goes into tuning the statistical matching procedures that are used—together in the case of Jeopardy with adding a collection of special rules to handle particular situations that come up.
In Wolfram|Alpha most of the work is just adding computable knowledge to the system. Curating data, hooking up real-time feeds, injecting domain-specific expertise, implementing computational algorithms—and building up our kind of generalized grammar that captures the natural language used for queries.
In developing Wolfram|Alpha, we’ve been steadily building out different areas of knowledge, concentrating first on ones that address fairly short questions that people ask, and that are important in practice. We’re almost exactly at the opposite end of things from what’s needed in Jeopardy—and from the direct path that IBM has taken to that goal. There’s no doubt that in time Wolfram|Alpha will be able to do things like the Jeopardy task—though in an utterly different way from the IBM system—but that’s not what it’s built for today.
(It’s an interesting metric that Wolfram|Alpha currently knows about three quarters of the entities that arise in Jeopardy questions—which I don’t consider too shabby, given that this is pretty far from anything we’ve actually set up Wolfram|Alpha to do.)
In the last couple of weeks, though, I’ve gotten curious about what’s actually involved in doing the Jeopardy task. Forget Wolfram|Alpha entirely for a moment. What’s the most obvious way to try doing Jeopardy?
What about just using a plain old search engine? And just feeding Jeopardy clues into it, and seeing what documents get matched. Well, just for fun, we tried that. We sampled randomly from the 200,000 or so Jeopardy clues that have been aired. Then we took each clue and fed it as input (without quotes) to a search engine. Then we looked at the search engine result page, and (a) saw how frequently the correct Jeopardy answer appeared somewhere in the titles or text snippets on the page, and (b) saw how frequently it appeared in the top document returned by the search engine. (More details are given in this Mathematica notebook. Obviously we excluded sites that are specifically about Jeopardy!)
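The bookkeeping for that experiment is straightforward to sketch. The `web_search` function here is a hypothetical stand-in for a real search engine API, and the sample clue/answer pair is invented; the logic just measures how often the known answer appears anywhere in the result snippets, and in the top hit:

```python
def web_search(clue):
    """Hypothetical stand-in for a real search engine; returns result snippets."""
    return [
        "The Panama Canal, built 1904-1914, divided the land and united the world.",
        "History of large canal engineering projects around the globe.",
    ]

# Invented (clue, known answer) pairs; Wolfram sampled real aired clues.
clues = [("Motto: the land divided, the world united", "panama canal")]

in_any, in_top = 0, 0
for clue, answer in clues:
    snippets = web_search(clue)
    if any(answer in s.lower() for s in snippets):
        in_any += 1  # answer appears somewhere on the result page
    if snippets and answer in snippets[0].lower():
        in_top += 1  # answer appears in the top result

print(in_any / len(clues), in_top / len(clues))  # 1.0 1.0
```

Run over a large random sample of real clues against a real engine, these two ratios are exactly the statistics Wolfram reports on in the next paragraphs.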
If nothing else, this gives us pretty interesting information about the modern search engine landscape. In particular, it shows us that the more mature search systems are getting to be remarkably similar in their raw performance—so that other aspects of user experience (like Wolfram|Alpha integration!) are likely to become progressively more important.
But in terms of Jeopardy, what we see is that just using a plain old search engine gets surprisingly far. Of course, the approach here isn’t really solving the complete Jeopardy problem: it’s only giving pages on which the answer should appear, not giving specific actual answers. One can try various simple strategies for going further. Like getting the answer from the title of the first hit—which with the top search engines actually does succeed about 20% of the time.
But ultimately it’s clear that one’s going to have to do more work to actually compete on Jeopardy—which is what IBM has done.
So what’s the broader significance of the Jeopardy project? It’s yet another example of how something that seems like artificial intelligence can be achieved with a system that’s in a sense “just doing computation” (and as such, it can be viewed as yet another piece of evidence for the general Principle of Computational Equivalence that’s emerged from my work in science).
But at a more practical level, it’s related to an activity that has been central to IBM’s business throughout its history: handling internal data of corporations and other organizations.
There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data—with such potential applications as mining medical documents or patents, or doing ediscovery in litigation. It’s only rather recently that even search engine methods have become widely used for these kinds of tasks—and with its Jeopardy project approach IBM joins a spectrum of companies trying to go further using natural-language-processing methods.
When it comes to structured corporate data, the Jeopardy project approach is not what’s relevant. And instead here there’s a large industry based on traditional business intelligence and data mining methods—that in effect allow one to investigate structured data in structured ways.
And it’s in this area that there’s a particularly obvious breakthrough made possible by the technology of Wolfram|Alpha: being able for the first time to automatically investigate structured data in completely free-form unstructured ways. One asks a question in natural language, and a custom version of Wolfram|Alpha built from particular corporate data can use its computational knowledge and algorithms to compute an answer based on the data—and in fact generate a whole report about the answer.
So what kind of synergy could there be between Wolfram|Alpha and IBM’s Jeopardy approach? It didn’t happen this time around, but if there’s a Watson 2.0, it should be set up to be able to call the Wolfram|Alpha API. IBM apparently already uses a certain amount of structured data and rules in, for example, scoring candidate answers. But what we’ve found is that even just in natural language processing, there’s much more that can be done if one has access to deep broad computational knowledge at every stage. And when it comes to actually answering many kinds of questions, one needs the kind of ability that Wolfram|Alpha has to compute things.
On the other side, in terms of data in Wolfram|Alpha, we mostly concentrate on definitive structured sources. But sometimes there’s no choice but to try to extract structured data from unstructured textual sources. In our experience, this is always an unreliable process (achieving at most perhaps 80% correctness)—and so far we mostly use it only to “prime the pump” for later expert curation. But perhaps with something like IBM’s Jeopardy approach it’ll be possible to get a good supply of probabilistic candidate data answers—that can themselves be used as fodder for the whole Wolfram|Alpha computational knowledge engine system.
It’ll be interesting to see what the future holds for all of this. But for now, I shall simply look forward to IBM’s appearance on Jeopardy.
IBM has had a long and distinguished history of important R&D—something a disappointingly small number of companies can say today. I have had some good friends at IBM Research (sadly, not all still alive), and IBM as a company has much to be admired. It’s great to see IBM putting on such an impressive show, in an area that’s so close to my own longstanding interests.
Good luck on Jeopardy! I’ll be rooting for you, Watson.