The most commonly referenced definition of Artificial Intelligence (AI) is probably the Turing Test, which sidesteps tricky questions like ‘what is intelligence?’ or ‘what does it mean to think?’ and replaces them with a simpler test: can an anonymous agent be recognized as a fellow human (and, therefore, as intelligent)?
A key aspect of this is, of course, that the interrogator is communicating with an equal in terms of the mode, pace and structure of the dialogue. (It is possible that a computer could succeed at this deception yet reveal itself by being implausibly smarter than a human, but that is a question we can enjoy a little later.)
As we have since learned, creating AIs is extremely hard and requires a very large share of the smartest people emerging from a broad spectrum of disciplines, including software engineering, robotics, psychology, cognitive science and linguistics.
Another enabler is industrial context, which brings the pragmatics of real-world problems, along with scale and funding that are often unavailable to academics; even where academic funding exists, it rarely benefits from the stark focus that industry imposes and that is required to make progress.
However, we currently have the following:
- Search engines that don’t understand language, and which attempt to mediate between people (searches written by people, matched against documents written by people),
- The best and the brightest going to work for document-oriented web companies.
I can’t help but wonder where the AI project would be today if web search (as it is currently envisioned) hadn’t gobbled up so much bandwidth.