COMPUTERS AND AI

Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer’s ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting:

“Siri and Google’s voice searches may be able to understand canned sentences like ‘What movies are showing near me at seven o’clock?,’ but what about questions—’Can an alligator run the hundred-metre hurdles?’—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won ‘Jeopardy!,’ likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. … Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. … To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game …”

One comment on “COMPUTERS AND AI”

  1. nannus says:

    The problem with AI is that it is searching for a fixed structure of cognition. No such structure exists: the core of intelligence is creativity, that is, the ability to develop beyond the scope of any formal theory (or algorithm) describing cognition. It is possible for computer programs to have that ability (see http://creativiticphilosophy.wordpress.com/2013/02/06/an-e%EF%AC%80ective-procedure-for-computing-uncomputable-functions/).
