From: "donald tees"
Newsgroups: comp.ai.neural-nets,comp.lang.apl,comp.lang.awk,comp.lang.beta,comp.lang.cobol,comp.lang.dylan,comp.lang.forth
Subject: Re: Einstein's Riddle
Date: Wed, 14 Mar 2001 06:47:26 -0500
Organization: IGS - Information Gateway Services
Message-ID: <98nlos$d2n$1@news.igs.net>
References: <3AACB567.A59B8497@Azonic.co.nz> <3AACE6CF.7F05484D@ieee.org>
 <0W8r6.178$fo5.14165@news.get2net.dk> <3AAD60F3.120F284A@ieee.org>
 <3AAE371A.2F9F596F@brazee.net> <98m43a$fe2$1@localhost.localdomain>
 <3AAEAD1A.BCDE11DB@ix.netcom.com> <98mugg$2mj$1@news.igs.net>
 <3AAF13CA.C7EA3113@ix.netcom.com>

I think you have a weird idea of the Turing test.  It says nothing about
mistakes, or about speed of calculations ... that is you deciding how you
would try to implement a Turing test.  The Turing test states that if you
cannot tell the difference after an extended conversation, then there is no
difference.  There is a lot more subtlety to that than speed of speech.

"J Thomas" wrote in message news:3AAF13CA.C7EA3113@ix.netcom.com...
> donald tees wrote:
> > "J Thomas" wrote in message
> > > aph@redhat.invalid wrote:
> > > > In comp.lang.forth Howard Brazee wrote:
>
> > > > : But the trouble with defining whether or not we have AI is that
> > > > : there is no solid arrival point.
>
> > > > Sure there is: the Turing Test [1].  That's why it was invented.
>
> > > But the Turing Test only checks whether the program can imitate the
> > > particular forms of stupidity common to human beings.  It doesn't
> > > work as an intelligence test.
>
> > Sure it does.
>
> If you're doing the Turing Test, and you ask what is
> 1355693147 * 25190678237
> and you get a quick correct answer, you can conclude that it probably
> isn't human.
>
> One thing needed to pass the Turing Test is to make the kind of logic
> mistakes that humans make.
>
> The Turing Test isn't about a program that's good at finding solutions
> to problems, or a program that's good at redefining problems to make
> them easier to solve.  The Turing Test is about a program that's good at
> imitating stupid humans.
>
> > If you take that line, then the only logical endpoint
> > is that there is no such thing as intelligence (which may be true).
>
> I doubt that there's a unitary intelligence.  Different brains are good
> at solving different problems.  We won't be ready to understand the
> intelligence of oak trees until we get a feel for what problems they
> have to solve.  If understanding the problems of oak trees and their
> solutions turns out not to be in our repertoire then we may never notice
> their intelligence.
>
> The point of AI shouldn't be to imitate humans.  We already have a lot
> of humans who're good at doing that.
> My thought is that since at
> present computers have as their sole ecological niche to serve humans,
> useful AI would involve predicting what humans will want well enough to
> be ready to give it to them when they first need it, before they think
> to ask for it.
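
For what it's worth, the quick-exact-answer point in the quoted message is
easy to make concrete.  The sketch below is mine, not anything posted in
this thread: it shows how trivially a machine produces the exact product
mentioned above, and how an interrogator might use response time as the
"tell".  Python is chosen only for its built-in arbitrary-precision
integers; the helper name quick_exact_answer and the two-second threshold
are purely illustrative assumptions.

import time

def quick_exact_answer(reply, seconds_taken, a=1355693147, b=25190678237):
    """True if the reply is the exact product and arrived far faster
    than a person could plausibly work it out unaided.
    The 2-second threshold is an arbitrary illustrative choice."""
    return reply.strip() == str(a * b) and seconds_taken < 2.0

# The machine side of the exchange: exact answer, effectively no delay.
start = time.perf_counter()
answer = str(1355693147 * 25190678237)    # '34150829854182941839'
elapsed = time.perf_counter() - start

print(answer, "computed in %.6f seconds" % elapsed)
print("probably not human" if quick_exact_answer(answer, elapsed)
      else "could be human")

Of course, as the quoted message itself points out, a program actually
trying to pass would deliberately delay and fumble the answer; a check
like this only catches the honest machine.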