Path: news.net.uni-c.dk!howland.erols.net!isdnet!grolier!freenix!sn-xit-01!sn-post-01!supernews.com!corp.supernews.com!not-for-mail
From: Leonard Zettel
Newsgroups: comp.ai.neural-nets,comp.lang.apl,comp.lang.awk,comp.lang.beta,comp.lang.cobol,comp.lang.dylan,comp.lang.forth
Subject: Re: Einstein's Riddle
Date: Mon, 19 Mar 2001 12:36:32 -0500
Organization: Posted via Supernews, http://www.supernews.com
Message-ID: <3AB643A0.805F7ACA@acm.org>
X-Mailer: Mozilla 4.75 [en] (Win95; U)
X-Accept-Language: en,pdf
MIME-Version: 1.0
References: <3AACB567.A59B8497@Azonic.co.nz> <3AACE6CF.7F05484D@ieee.org> <0W8r6.178$fo5.14165@news.get2net.dk> <3AAD60F3.120F284A@ieee.org> <3AAE371A.2F9F596F@brazee.net> <98m43a$fe2$1@localhost.localdomain> <3AAFB378.AB166E8C@ieee.org> <98q3f1$bid$1@localhost.localdomain> <3AB0DFC6.FC100A64@ieee.org> <98sq19$ton$1@localhost.localdomain> <3AB61B58.CF536ECA@brazee.net> <3AB6273D.E6C18536@ix.netcom.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Complaints-To: newsabuse@supernews.com
Lines: 51
Xref: news.net.uni-c.dk comp.ai.neural-nets:67645 comp.lang.apl:29465 comp.lang.awk:17258 comp.lang.beta:12800 comp.lang.cobol:102924 comp.lang.dylan:24234 comp.lang.forth:78693

J Thomas wrote:
>
> Howard Brazee wrote:
> > aph@redhat.invalid wrote:
>
> > > Sure, you could label a teapot intelligent if you wanted but no-one
> > > would have to agree.  The magic of Turing's test is that they would
> > > agree.
>
> > But if you're writing an AI application, Turing's is probably not
> > relevant.  We don't need a computer to act like people, we have
> > plenty of people available to do that.  We need an AI program to do
> > specific tasks with intelligence tailored to those tasks.
>
> Yes.  I can see the Turing test as a philosophical device.

You have to remember the context, and what artificial intelligence
used to mean.

To start with, *you* know you are "there" in a very intimate manner:
"Cogito, ergo sum." -Descartes

Which leads to the question: how do you know the other guy is there?
After all, solipsism is a valid philosophical position.  The
naturalist position is that you can only make inferences from
observed behavior (and other sensory data).

Artificial intelligence used to mean making a computer be somebody -
a self-conscious entity.  As Francis Crick reaffirmed in his latest
(very good) work, we still don't have an operational definition of
consciousness, and that lack makes it very hard to study
scientifically.

So how do you know when you have created an artificial personality
that can claim some kind of equal footing with yourself?  Turing
proposed that you do it the same way you do it with Joe Schlunk - you
converse and make a judgment.  (Done formally, that usually involves
expert reports and a court hearing.)

That may not be a very good answer, but I have yet to see a better
one.  The fur will fly when the evidence gets good enough to convince
some and not others.  Remember, some Spaniards justified their
treatment of Native Americans by claiming they were some kind of
exotic animal, not human beings.  To the credit of the church, most
of the clergy who went along on those expeditions argued otherwise.
   -LenZ-