Abstract
To pass the Turing Test, by definition a machine would have to be able to carry on a natural language dialogue, and know enough not to make a fool of itself while doing so. But – and this is something that is almost never discussed explicitly – for it to pass for human, it would also have to exhibit dozens of different kinds
of incorrect yet predictable reasoning – what we might call translogical reasoning. Is it desirable to build such foibles into our programs? In short, we need to unravel several issues that are often tangled up together: How could we get a machine to
pass the Turing Test? What should we get the machine to do (or not do)? What have we done so far with the Cyc common sense knowledge base and inference system? We describe the most serious technical hurdles we have faced in building Cyc to date, how each was overcome, and what it would take to close the remaining Turing Test gap.