Talk:Turing test
From Wikipedia, the free encyclopedia
[edit] Coby or Colby
Citation 19, says Coby but the reference says Colby. Anyone know what the correct author name is? —Preceding unsigned comment added by Kevin143 (talk • contribs) 09:26, 25 May 2008 (UTC)
[edit] Topic
We already have Turing Test, so the two articles should be merged. Should it be capitalized? AxelBoldt
Someone wrote:
- So far, no computer has passed the Turing test as such.
But I read somewhere that a museum of computers in Boston conducts an annual Turing test competition, and that they've managed to fool "some of the people some of the time". Anyone know more about this? --Ed Poor
I don't know many useful details here. I do know that my psych professor claims the Turing test has been passed, but is not passable today, because people have become more discerning in their judgements. But maybe this was taking "Turing test" in a more liberal sense, e.g. taking being fooled by ELIZA to mean that ELIZA passed the Turing test. --Ryguasu
- I think that it has to do with greater discernment and exposure to software and to concepts of artificial intelligence since the Turing test requires a human judge. Ember 2199 06:44, 1 August 2006 (UTC)
On the other hand, the intelligence of fellow humans is almost always tested exclusively based on their utterances.
- Anyone else think this is problematic? It seems there are many not-so-verbal ways to "test" intelligence, e.g. does X talk to walls?, can X walk without falling down?, can X pick a lock?, can X learn to play an instrument?, can X create a compelling sketch of a scene?, etc..
--Ryguasu
- Think the difference is between 'test' and '"test"'. I.e. when intelligence is formally tested, the scores are usually based on verbal answers or even multiple-choice ones; but when intelligence is informally assessed by casual observers, they use all kinds of clues.
- -Daniel Cristofani.
I just modified the "History" section slightly. Pretending to be the other gender was a feature of the Imitation Game, not of the Turing Test itself, and Turing's original paper only mentions the five-minute time limit when talking about how often computers might pass the Turing Test in the year 2000.
Ekaterin
[edit] Objections and replies
The "Objections and replies" section seems to be a list of objections to the idea that machines could think, and not objections on whether the test actually answers that question. This is confusing and misleading. Maybe the title should be modified to reflect this fact. The following section, moreover, does seem to discuss possible objections to the test. --NavarroJ 12:04, 11 August 2005 (UTC)
- I commented on this below, to take an example: "One of the most famous objections, it states that computers are incapable of originality." (italics added), but there is no explanation in the article (yet) of how the test demonstrates that humans are original, or if the test is relevant for originality. Ember 2199 06:44, 1 August 2006 (UTC)
- I think this whole section should be taken out. If we need a separate article detailing the issues brought on by artificial intelligence, that's fine, but it doesn't belong here. Jcc1 20:48, 16 March 2007 (UTC)
- This section probably could go in the article on Turing's paper. However, I think these objections should be at least touched on here. I believe his primary motivation in proposing the Turing test was to make it easier for readers to visualize his answers to these objections. His goal was to make it seem plausible that, in the future at least, people will agree that "machines can think." (Forgive me for indenting the previous post). ---- CharlesGillingham 06:56, 24 October 2007 (UTC)
[edit] Heads in the sand
The new Heads in the sand note makes some claims about what Turing said that I've never seen before. I think that either we need a reference, or to take it out. Rick Norwood 12:41, 21 September 2005 (UTC)
Since nobody has stepped forward to support the claims in the "Heads in the sand" paragraph, I'm deleting it. Rick Norwood 14:44, 29 September 2005 (UTC)
Which brings us to the paragraph on "Extra Sensory Perception". Any evidence or support for the idea that Turing believed in ESP? Rick Norwood 14:48, 29 September 2005 (UTC)
Yes, there is a large section on it in his paper describing the Turing test. You should probably read the paper before making too many edits! --Lawrennd 20:45, 30 September 2005 (UTC)
Thanks for the info. My knowledge of Turing comes from secondary sources, which is why I'm careful to post ideas here where more knowledgeable people can comment. Rick Norwood 23:23, 30 September 2005 (UTC)
[edit] Wikipedia and the Turing Test
Computer Scientists, please convert the Wikipedia search box to process natural language. Thanks. - MPD 09:05, 28 December 2005 (UTC)
[edit] Voight Kampff
does anyone have an objection to having a link to Voight-Kampff machine in the see alsos? It is the test from Blade Runner to test for replicants. WookMuff 20:59, 9 March 2006 (UTC)
- I, for one, have no objection. Rick Norwood 21:11, 13 March 2006 (UTC)
- I also have no objection, and I think it would be interesting to include something like a "references in pop culture" type section. If I recall correctly, didn't an episode of the Simpsons spoof Turing or allude to the test? - IstvanWolf 23:20, 10 May 2006 (UTC)
[edit] Expansion
I'd like to suggest expanding the article into the premises of the Turing test. Does anyone know of any rigorous analyses of the premises? I also want to affirm the earlier comment that the criticisms section seems to not really discuss the fundamentals of the Turing test, which is what this article should focus on, for example how judges are chosen, the criteria for judgement, time period, format, breadth/scope of topics. Ember 2199 06:31, 1 August 2006 (UTC)
[edit] Human Computer Interface
Turing was trying to make things easy for the machinists by proposing a "simple teletype interface". Consider other forms of interaction, such as first-person gaming. Can you tell when playing CS:Source online who is a bot and who is a human? (if yes, usually only because the humans are stupid!)
http://en.wikipedia.org/wiki/Computer_game_bot http://www.turtlerockstudios.com/CSBot.html
My home desktop already makes a datacentre-class machine of 2001 vintage look quite tame, yet can support a number of these bots. This year's crop of datacentre machines are a ten-fold advance.
I haven't yet seen a machine "demonstrate learning" (rather than fool someone that it is human). This is usually the diversionary tactic that is deployed to deny the machine has passed the test.
Is there a link to Asimov? Multivac was very like Google... all you need to do is ask the right question.
[edit] Lack of Clarity
I think the descriptions on this page are not clear enough and hence are misleading.
Did Turing really say that the Turing Test is a test to see if a computer can perform human-like conversation? A Turing Test could easily include communications that we would not typically call conversation, such as collaborative creativity. Hence I think the basic description should refer to a test for "intelligence".
The description says that both the computer and the human try to appear human. Unless I'm wrong (comments invited), the point of the game is for both the computer and the human to try to convince the third party that they are the human. This is not quite the same thing, and the present discussion does not make the competitive nature of the game clear.
In the section on the imitation game it is said 'In this game, both the man and the woman aim to convince the guests that they are the other.' This is clearly incorrect: if we had a game where the man tried to convince the observer that he was the woman, and the woman tried to convince the observer that she was the man, then the observer would know who is who!
For the examples where it is claimed that a computer may or may not have passed the Turing test, it would be useful to say whether a proper version of the Turing test has been applied. I have heard (but cannot immediately provide references) that a lot of people believe that the test is for an observer to talk to an agent through the channel, and then say whether the agent was human or computer. This is a much, much easier task, as there is no competitive angle where the human will use more and more of their intelligence to beat the machine. I also cannot see how ELIZA could come anywhere near passing the true form of the Turing test, as any human observer could easily win by simply showing an ability to talk about something different from the simple single topic that ELIZA is capable of.
I haven't written anything in this page, so I do not wish to just dive in and start changing things. But, I think that the points I raise should either be refuted, or that changes to the article should be made. —The preceding unsigned comment was added by 80.176.151.208 (talk) 17:44, 7 January 2007 (UTC).
- You are right. The description is incorrect. I am looking at Turing's paper right now and it says
The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
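The single-topic limitation attributed to ELIZA above can be illustrated with a minimal keyword-matching sketch. The rules and replies here are hypothetical, merely in the style of Weizenbaum's program rather than its actual script; the point is that anything off-script falls through to a stock deflection, which is why a judge who changes the subject can expose the program:

```python
import re

# Hypothetical ELIZA-style rules: a few regex patterns with canned replies.
# This is an illustrative sketch, not Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bmother\b|\bfather\b|\bfamily\b", re.I),
     "Tell me more about your family."),
    (re.compile(r"\bI am (.+)", re.I),
     "How long have you been {0}?"),
]
DEFAULT = "Please go on."  # stock deflection for anything off-script

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            # Substitute captured text into the reply, if the rule captures any.
            return template.format(*m.groups()) if m.groups() else template
    return DEFAULT

print(respond("My mother is kind"))                 # on-script: family reply
print(respond("I am tired"))                        # on-script: reflection
print(respond("What do you think of chess openings?"))  # off-script: deflection
```

A competitive judge only needs to steer the conversation off the scripted topics a few times to see the deflections pile up, which is the point the comment above makes.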
[edit] What if...
My only question is that of the human. What if the human were to act like a chatbot? The Turing study is flawed because it doesn't account for the possibility of human deception. It is well known that humans deceive, for whatever purpose. This is a variable that needs to be taken into consideration. When we want to do a scientific study, we must account for all margins of error, and include every possible variable within the study. Simply having one person and one bot is not sufficient. This is only two groups. You must have a third group: a person that is not aware of bot technology, a human that is aware of bot technology, and the bot. The judge would then need to determine which one was which. Furthermore, I doubt that one judge is sufficient either, for it would also make a difference whether or not the judge was familiar with bot technology. You would need a panel. A majority of this panel would need to be "fooled" into believing they were not speaking to a bot when in fact they were. The Turing study, although a good start, is not accurate, because of what I had mentioned: 1. The knowledge capabilities of the human, 2. The knowledge capabilities of the judge, 3. The variable of human deception not being accounted for, 4. Only two control groups. Most psychologists have already discovered that three control groups are necessary to better evaluate human behavior. The same condition would apply here as well.
- Having a person act like a bot doesn't really make too much sense from where I am standing. If there is a human control, it should be trying to make the tester think it is more human than the bot. We don't need to know if it is possible for a human to behave like a chat bot; they can. As for the other judge groups, I can sort of see your point, BUT you are explicitly asking them to judge whether the entities are human or computer, in effect telling them the technology exists. I am not sure that the fact they didn't know about the technology before would change the results; many internet users are already familiar with bot-like behavior, so it makes sense. The problem I see with this is that it is going to become steadily harder to find anyone not already familiar with the technology. You could use a bunch of old people or people from LDCs, but that would skew the data as well.--Shadowdrak 17:05, 2 June 2007 (UTC)
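The multi-judge panel protocol proposed above can be sketched as a simple majority-vote tally. The function name, the label scheme, and the "majority fooled" threshold are assumptions made for illustration; no standard version of the test specifies them:

```python
from collections import Counter

def bot_passes(verdicts, threshold=0.5):
    """Hypothetical scoring rule for the panel protocol described above.

    verdicts: the label each judge assigned to the BOT ('human' or 'bot').
    The bot "passes" only if more than `threshold` of the judges
    mislabelled it as human.
    """
    counts = Counter(verdicts)
    return counts["human"] / len(verdicts) > threshold

# Example panel of five judges, three of whom were fooled by the bot.
panel = ["human", "bot", "human", "human", "bot"]
print(bot_passes(panel))  # 3/5 > 0.5, so the bot passes under this rule
```

Under a three-group design (naive human, informed human, bot), the same tally would be kept per participant, and the comparison of mislabelling rates across the three groups is what would address points 1-4 in the comment above.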
[edit] In defence of Lady Lovelace
Behind every good computer is a programmer that loves it. The "originality" clause would be a key point of differentiation between human and machine. Ask a person the same question three times and you get three different answers; ask a computer and it is likely there would be one reply, potentially phrased three different ways. By the third question the human would have guessed that what was being asked was contextual, ironic or specious and not specific, and the answer would be returned in kind.
Computers, with humans behind them programming the response patterns, can expect these types of situations. When you then go to the AI level, where computers learn and teach each other, this illogicality (originality) would be re-factored out. At least you would hope so. Stellar 03:12, 12 August 2007 (UTC)
[edit] racist material. must be removed immediately
Additionally, many internet relay chat participants use English as a second or third language, thus making it even more likely that they would assume that an unintelligent comment by the conversational program is simply something they have misunderstood, and are also probably unfamiliar with the technology of "chat bots" and don't recognize the very non-human errors they make. See ELIZA effect.
It foolishly assumes that all non-English speakers are technologically ignorant and do not know what a chat bot is (almost all non-English speakers I have met happen to know what a chatbot is).
I just removed the racist material. Do not revert it to what it was.
- I didn't know about that meaning of the word racist... Thanks for teaching it to us! —Preceding unsigned comment added by 86.218.48.133 (talk) 14:01, 17 October 2007 (UTC)
[edit] Discussion of relevance
Isn't this section a near perfect definition of that most hated thing, original research cluttered with weasel words? No citations, sentence openings taken almost verbatim from the "what not to do" page on weasel words. Just all around poor. I wont edit it out, but it surely needs a rewriting with some citations ? VonBlade 23:10, 10 October 2007 (UTC)
[edit] From Russia with Love!
A Russian online flirting website has a chatbot, which passes the Turing test. They use it to dupe single guys and get financial info out of them for fraud: http://www.news.com/8301-13860_3-9831133-56.html
If we could combine such Russian software with a Japanese humanoid robot body and soup up its looks a bit (big tits, mini skirt, sailor suit, saucer sized eyes, neon hair colour), suddenly all those catgirl animes would become documentaries ... 82.131.210.162 (talk) 08:45, 10 December 2007 (UTC)
- See more here, it managed to fool a well-respected scientist:
- http://drrobertepstein.com/downloads/FROM_RUSSIA_WITH_LOVE-Epstein-Sci_Am_Mind-Oct-Nov2007.pdf —Preceding unsigned comment added by 82.131.210.162 (talk) 11:13, 10 December 2007 (UTC)
[edit] Removed reference to multiplayer games
Computer game bots generally are for playing the game and are not designed for conversation. —Preceding unsigned comment added by Sbenton (talk • contribs) 00:04, 14 March 2008 (UTC)
[edit] Almost perfect!
Excellent work by User:Bilby to make this into a great article. The only problem I see with it now has to do with overall structure and consistency. The older sections need to be brought up to the same standard as the sections by Bilby, and some of the older material needs to be tossed or integrated into the newer sections. "Weaknesses of the test" should probably acknowledge in some way that this material has been partially discussed above. "Predictions and tests" should probably be integrated into the "History" section above in some abbreviated form. "Variations on the test" should be brought up to the same standard set by "Versions of the test", and so on. "Practical applications" (IMHO) should be tossed. This kind of work would bring the article up to FA status in no time.
Also, I wonder if User:Bilby would be interested in improving Computing Machinery and Intelligence? It just needs a page or two. ---- 19:01, 25 April 2008 (UTC)
- Thanks for the kind words. I haven't finished here yet - I was thinking that both strengths and weaknesses warrant further expansion, as you suggest, and that something on whether or not the Turing test constitutes an operational definition of knowledge would be nice. I've been letting it sit for a bit so that other editors could fix my mistakes - which they've been doing. :) I've also recently come across some literature using the Turing test in odd ways outside of AI, so I'm curious as to whether or not it would be applicable. If anyone is interested, the article concerned, Gaming and Simulating EthnoPolitical Conflicts, uses what it describes as a Turing test between actions of people roleplaying and the actions of those involved in the actual events, but I'm not sure where, or even if, it fits in here. - Bilby (talk) 00:27, 26 April 2008 (UTC)
[edit] Restricted vs Unrestricted test
The article appears to fail to define the difference between the "restricted" and the "unrestricted" tests (at least I couldn't find it defined.) WilliamKF (talk) 19:37, 11 June 2008 (UTC)