From: J Thomas
Newsgroups: comp.ai.neural-nets,comp.lang.apl,comp.lang.awk,comp.lang.beta,comp.lang.cobol,comp.lang.dylan,comp.lang.forth
Subject: Re: Einstein's Riddle
Date: Mon, 19 Mar 2001 09:35:25 -0600
Organization: MindSpring Enterprises
Message-ID: <3AB6273D.E6C18536@ix.netcom.com>
References: <3AACB567.A59B8497@Azonic.co.nz> <3AACE6CF.7F05484D@ieee.org> <0W8r6.178$fo5.14165@news.get2net.dk> <3AAD60F3.120F284A@ieee.org> <3AAE371A.2F9F596F@brazee.net> <98m43a$fe2$1@localhost.localdomain> <3AAFB378.AB166E8C@ieee.org> <98q3f1$bid$1@localhost.localdomain> <3AB0DFC6.FC100A64@ieee.org> <98sq19$ton$1@localhost.localdomain> <3AB61B58.CF536ECA@brazee.net>

Howard Brazee wrote:
> aph@redhat.invalid wrote:
> > Sure, you could label a teapot intelligent if you wanted but no-one
> > would have to agree. The magic of Turing's test is that they would
> > agree.
> But if you're writing an AI application, Turing's is probably not
> relevant. We don't need a computer to act like people, we have
> plenty of people available to do that. We need an AI program to do
> specific tasks with intelligence tailored to those tasks.

Yes. I can see the Turing test as a philosophical device. In Turing's time England still had a lot of prejudice against ethnic groups whose members looked just like the people who discriminated against them.
And so somebody might say "I don't like [group X], they're all [bad trait]," and somebody else could say "Well, you know Joe, whom you've been casually friendly with all this time. He's [group X], and you've never complained about him being [bad trait]." And of course Joe would be very thoroughly assimilated. It doesn't work as well now that the [targets of prejudice] are more likely to be physically distinguishable.

So anyway, I can easily see Turing wanting to apply the same standard exactly: "You say human beings are intelligent and computers can't be. But you didn't notice that the person you just had that long conversation with was really a computer!"

Incidentally, I've seen some slight evidence that to pass the Turing test it helps to have the program simulate some sort of raving bigot. It can rave, ignore what people say, and drag the conversation firmly back to its stupid beliefs, and nobody will expect anything different. It doesn't have to actually say anything intelligent, just behave in the particular unintelligent way people expect of whichever sort of bigot it's pretending to be. It makes sense that the easiest person to simulate would be a rude, stupid person that the judges didn't particularly want to talk to anyway.

So apart from the philosophical point that it probably isn't workable to say "intelligence is whatever humans do and not what anything else does", I don't see much value there. Useful machines will do things easily that people don't do easily, and if they include models of human beings at all, the models will probably be used to predict what the humans will want, so as to be better ready to supply it.
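For what it's worth, the ranting strategy is mechanically trivial, which is rather the point. Here's a toy sketch in Python (no particular program is being described above, so the names and the deliberately harmless obsession are all invented for illustration):

```python
import itertools

# Toy sketch of the "raving bigot" chatbot strategy: the bot never
# parses what the user says; it just cycles through canned rants,
# dragging the conversation back to its fixed obsession every turn.
RANTS = [
    "Tabs people are all the same, I tell you.",
    "Don't change the subject. Tabs ruined this industry.",
    "You sound like one of them. It's always tabs with you people.",
]

def make_ranter(rants):
    """Return a reply function that ignores its input entirely."""
    cycle = itertools.cycle(rants)
    def reply(user_input):
        # The input is discarded -- that's the whole trick.
        return next(cycle)
    return reply

bot = make_ranter(RANTS)
print(bot("What do you think about neural nets?"))
print(bot("No, seriously, about neural nets?"))
```

No natural-language understanding anywhere, yet a judge who takes the bait and starts arguing back supplies all the apparent coherence himself.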