From: Howard Brazee
Newsgroups: comp.ai.neural-nets,comp.lang.apl,comp.lang.awk,comp.lang.beta,comp.lang.cobol,comp.lang.dylan,comp.lang.forth
Subject: Re: Einstein's Riddle
Date: Mon, 19 Mar 2001 09:21:03 -0700
Organization: UCB
Message-ID: <3AB631EE.858EC6DC@brazee.net>
References: <3AACB567.A59B8497@Azonic.co.nz> <3AACE6CF.7F05484D@ieee.org> <0W8r6.178$fo5.14165@news.get2net.dk> <3AAD60F3.120F284A@ieee.org> <3AAE371A.2F9F596F@brazee.net> <98m43a$fe2$1@localhost.localdomain> <3AAFB378.AB166E8C@ieee.org> <98q3f1$bid$1@localhost.localdomain> <3AB0DFC6.FC100A64@ieee.org> <98sq19$ton$1@localhost.localdomain> <3AB61B58.CF536ECA@brazee.net> <3AB6273D.E6C18536@ix.netcom.com>

Your post is interesting and made me think of how we discriminate
against groups. If a person from another group is indistinguishable
from people in my group, then he is a person and OK. But if he
persists in his old cultural habits, we can look down on him.
Sometimes these are language styles: Stallone's accent sounds dumb to
many Americans, and if an NBA player's speaking style sounds lazy, we
call him lazy.

We apply the same kind of prejudice to Turing's test. We test whether
it is OUR type of intelligence, rejecting different values.

Turing's point is more applicable to "sentient". This is a word
without any measurable meaning. We combine sentience with intelligence
to set our morals (our kind is most important, and this is how we
define our kind). So how do we decide whether a being is sufficiently
like us to be protected? If we can't tell the difference!

J Thomas wrote:

> Howard Brazee wrote:
> > aph@redhat.invalid wrote:
> >
> > > Sure, you could label a teapot intelligent if you wanted, but
> > > no-one would have to agree. The magic of Turing's test is that
> > > they would agree.
> >
> > But if you're writing an AI application, Turing's test is probably
> > not relevant. We don't need a computer to act like people; we have
> > plenty of people available to do that. We need an AI program to do
> > specific tasks with intelligence tailored to those tasks.
>
> Yes. I can see the Turing test as a philosophical device. In
> Turing's time England still had a lot of prejudice against ethnic
> groups whose members looked just like the people who discriminated
> against them. And so somebody might say "I don't like <ethnic group>,
> they're all <negative stereotype>" and somebody else could say "Well,
> you know Joe, that <ethnic group member> you've been casually
> friendly with all this time, he's <ethnic group> and you've never
> complained about him being <stereotype>." And of course Joe would be
> very thoroughly assimilated.
>
> It doesn't work as well now that the <targets of prejudice> are more
> likely to be physically distinguishable. So anyway, I can easily see
> Turing wanting to apply the same standards exactly. "You say human
> beings are intelligent and computers can't be.
> But you didn't notice that the person you just had that long
> conversation with was really a computer!"
>
> Incidentally, I've seen some slight evidence that to pass the Turing
> test it helps to have the program simulate some sort of raving bigot.
> It can rave, ignore what people say, and drag the conversation firmly
> back to its stupid beliefs, and nobody will expect anything
> different. It doesn't have to actually say anything intelligent, just
> behave in the particular unintelligent way people expect of whichever
> sort of bigot it's pretending to be. It makes sense that the easiest
> person to simulate would be a rude, stupid person that they didn't
> particularly want to talk to anyway.
>
> So apart from the philosophical point that it probably isn't workable
> to say "intelligence is whatever humans do and not what anything else
> does", I don't see much value there. Useful machines will do things
> easily that people don't do easily, and if they include models of
> human beings at all, the models will probably be used to predict what
> the humans will want, to be better ready to supply it.
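For what it's worth, the tactic you describe is simple enough to
sketch. Here's a toy version in Python. To be clear, the trigger
words, the canned rants, and the 50% firing chance are all my own
invention, not taken from any real program: the point is only the
shape of the trick. The program mostly ignores what the judge types,
occasionally keys off a single word, and always drags the conversation
back to its fixed obsessions.

import random

# Toy sketch of the tactic described above: mostly ignore the judge's
# input and steer every exchange back to a handful of canned rants.
# All text and trigger words here are invented for illustration.

RANTS = [
    "And another thing: nobody LISTENS anymore. That's the problem.",
    "It all went downhill when they changed everything. Mark my words.",
    "I've said it before and I'll say it again: it was better before.",
]

TRIGGERS = {
    "why": "Why? WHY? You wouldn't ask if you'd been paying attention.",
    "computer": "Computers! Don't get me started on computers.",
}

def reply(line):
    """Occasionally key off a word; otherwise fall back on a rant."""
    for word in line.lower().split():
        if word in TRIGGERS and random.random() < 0.5:
            return TRIGGERS[word]
    # Default behavior: drag the conversation back to a stock belief.
    return random.choice(RANTS)

if __name__ == "__main__":
    print("Talk to the crank (Ctrl-C or EOF to quit).")
    while True:
        try:
            line = input("> ")
        except (EOFError, KeyboardInterrupt):
            break
        print(reply(line))

Note that the program never has to understand anything: the judge
attributes the non sequiturs to rudeness rather than to a machine.
Colby's PARRY, which simulated a paranoid patient, got mileage out of
much the same loophole.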