Talk:Chatterbot
From Wikipedia, the free encyclopedia
Links to open source bots would be nice... seeking now -- User:DennisDaniels
I've got a very short chatterbot program (<30 lines) which I could add to the article. I wrote it myself in 1984 and as far as I'm concerned it's Open Source. -- Derek Ross | Talk 14:00, 2004 Jun 18 (UTC)
- Well, 33 including blank lines -- Derek Ross | Talk 03:35, 22 Jun 2004 (UTC)
SHRDLU was an experiment in natural language understanding, but it hardly qualifies as a chatterbot. The crucial difference is that SHRDLU did know what it was talking about -- or at least "attempted" to. Its purpose, unlike a chatterbot's, wasn't just to convince human operators that a "real person" was on the other end (except indirectly -- but then any human being qualifies as one, too. :-) -- JRM 12:26, 2004 Sep 1 (UTC)
- Agreed. However it's probably worth mentioning it just to point out that it's not a chatbot. -- Derek Ross | Talk 02:57, 2004 Sep 2 (UTC)
- That gets a bit too specific, I think -- perhaps a general reference to natural language processing would be better (of which chatterbots are but a specific (and rather whimsical) instance). -- JRM 11:47, 2004 Sep 2 (UTC)
Source code
The source code, even in QBASIC, is quite obscure — it looks pretty obscure to me, and I used to do the odd bit of programming in the language a few years ago! It's not going to be very helpful for 99% of our readers. I think we should, if not remove it outright, recast this as pseudocode, and, if it would be helpful, provide an external link to source code. — Matt 18:43, 20 Sep 2004 (UTC)
We could certainly recast the program as pseudocode as an aid to comprehension. However one of the reasons for writing it in QBASIC was to give an example chatterbot which would actually run "as is". A pseudocode version might be more useful to experienced programmers but perhaps less so to neophytes or non-programmers since it would be impossible for them to run it. By comparison an interested neophyte or non-programmer can get the current code running by following the simple instructions included in the current article.
An alternative to pseudocode would be to rewrite the program to make it clearer. For all the unusual layout of the program it is actually fairly simply structured, so that would not be difficult to do. -- Derek Ross | Talk 05:15, 2004 Sep 21 (UTC)
- One problem is that not every reader even has a QBASIC interpreter (remember they stopped shipping it with late versions of Windows 98, not to mention non-Windows systems), nor even the knowledge of how to enter and execute such a program. Moreover, it's probably asking too much to expect a general reader to understand BASIC syntax, even if it was laid out correctly. A larger number of readers would have half a chance of reading pseudocode and getting the gist of it. If an adventurous reader wants to execute some code, I think an external link would do the trick. — Matt 09:45, 21 Sep 2004 (UTC)
Those are fair points, but I still don't feel that pseudocode is enough. I contributed the code in answer to the request at the top of this page, so the source code is a response to demand. However if QBASIC is no good perhaps you can suggest a better language. Something like awk or perl, perhaps? -- Derek Ross | Talk 03:23, 2004 Sep 23 (UTC)
- For people who want to try out or implement a chatterbot, I think the set of external links to software for various platforms is sufficient. I'd point out that the request at the top of the page asked for "links to open source bots", not for actual source code to be placed within the article. I think including a simple chatterbot in pseudocode (and the resulting conversation) could be an excellent piece of illustration, but I don't think we should use Wikipedia as a repository for sample source code. — Matt 12:03, 30 Sep 2004 (UTC)
- I'm not sure if this conversation is dead or not, but I figured I'd add my two cents anyway. After running the code, I tried to rewrite it in VBScript (the native scripting language of my IRC client) so I could run it in an IRC channel. Now, I'm no newbie to programming, but I was pretty confused by most of the code -- I ended up mostly converting the syntax and trusting that it would work. It didn't; I figure I must have messed something up somewhere. My point is, it would be nice if the code was commented, or if the variable names were longer than one character... that way, I can learn and understand the code instead of simply being able to run it. Thanks for listening! AquaDoctorBob 14:30, 9 Jan 2005 (UTC)
- Okay, I do have a version which is longer but more conventional in appearance. It should be easier to translate into other languages or dialects. I'll upload it instead. -- Derek Ross | Talk 23:12, 2005 Jan 9 (UTC)
- Adding to the above conversation, I suppose I'm an "ordinary" person with no knowledge of programming code, and the stuff in this article baffles me. It's really not helpful at all to me, and I'd hazard a guess that more readers aren't programmers than are. As mentioned above, later WIN98 releases don't have a QBASIC interpreter, and it seems I fall into that category. I came here expecting an article about chatterbots. Perhaps the code would be better suited to a Wikibook on QBASIC programming? - Vague | Rant 07:06, Jan 18, 2005 (UTC)
- Fair enough. I think that part of the problem for non-programmers is that the article is really just a stub followed by an example of interest to programmers. In order to improve the article we need more informative material to counterbalance the example. I'll look at converting the WikiChat program into something that can be run more easily on modern systems too. That will take a few days to sort out. -- Derek Ross | Talk 15:37, 2005 Jan 18 (UTC)
- I have no knowledge of programming code, and having the full .BAS there did it for me. I had assumed such programs would be huge and complex. To see the code (it's not like I've read it; I've merely noticed how little text it is) and then to see what it can do taught me a lesson. If you ever decide to link it away, be sure to mention in the link text that it consists of only 90 lines. 22:43, 14 October 2005 (UTC)
Well, thanks, Anonymous User. I'm glad that at least one person has found the code informative. -- Derek Ross | Talk 05:43, 15 October 2005 (UTC)
Attention
In an effort to clean up Artificial intelligence, instead of completely removing a paragraph mostly concerning chatterbots, I copied it under "Chatterbots in modern AI". I noticed that it repeats some information already in this article, but there might also be some additions. Unfortunately I cannot spend time on a smooth merger right now. Sorry for the inconvenience. --moxon 09:20, 20 October 2005 (UTC)
Basic source code removed from page (not essential to understanding of subject)
- Aids to understanding are often useful even when they are not essential. The BASIC source code below demonstrates, even to people who don't understand computer programming, that these programs can be quite short and simple. I am surprised that anyone should think that it was intended as a tutorial on programming. It might have some tutorial value as an example of a chatterbot (the topic of the article) but hardly as an example of programming. -- Derek Ross | Talk 16:18, 20 November 2005 (UTC)
WikiChat -- a simple Chatterbot example
In principle a chatterbot can be a very short program. For instance the following program — which should be copied and saved as WikiChat.BAS — implements a chatterbot which will learn phrases in any language by repetition in much the same way that a parrot does.
WikiChat:
DEFINT A-Z
GOSUB Initialise
GOSUB LoadData
GOSUB Converse
GOSUB StoreData
SYSTEM

Initialise:
LET DictionarySize = 1000
DIM Context$(DictionarySize)       'The character sequences that WikiChat has already seen
DIM Alternatives$(DictionarySize)  'The characters that WikiChat may print after recognising a sequence.
LET EmptyRow = 0
LET EndOfResponseCharacter$ = CHR$(180)
LET ContextLength = 6              'A bigger value makes WikiChat more grammatical but slower learning.
LET CurrentContext$ = STRING$(ContextLength, EndOfResponseCharacter$)
LET DictionaryFile$ = "WIKICHAT.MEM"
RANDOMIZE TIMER
RETURN

Converse:
DO
  LINE INPUT "Human: "; Response$
  IF Response$ = "" THEN EXIT DO
  LET Response$ = Response$ + EndOfResponseCharacter$
  GOSUB MemoriseHumanResponse
  LET Response$ = ""
  GOSUB GenerateComputerResponse
  PRINT "Computer: "; Response$
LOOP
RETURN

MemoriseHumanResponse:
DO WHILE Response$ > ""
  LET CurrentCharacter$ = LEFT$(Response$, 1)
  LET Response$ = MID$(Response$, 2)
  GOSUB InsertCharacter
  LET CurrentContext$ = MID$(CurrentContext$, 2) + CurrentCharacter$
LOOP
RETURN

GenerateComputerResponse:
DO
  GOSUB Lookup
  LET CurrentCharacter$ = MID$(Alternatives$(DictionaryIndex), INT(RND * LEN(Alternatives$(DictionaryIndex))) + 1, 1)
  IF CurrentCharacter$ = "" THEN
    EXIT DO
  ELSE
    LET CurrentContext$ = MID$(CurrentContext$, 2) + CurrentCharacter$
    IF CurrentCharacter$ = EndOfResponseCharacter$ THEN
      EXIT DO
    ELSE
      LET Response$ = Response$ + CurrentCharacter$
    END IF
  END IF
LOOP
RETURN

InsertCharacter:
GOSUB Lookup
IF INSTR(Alternatives$(DictionaryIndex), CurrentCharacter$) = 0 THEN
  LET Alternatives$(DictionaryIndex) = Alternatives$(DictionaryIndex) + CurrentCharacter$
END IF
RETURN

Lookup:
LET Context$(EmptyRow) = CurrentContext$
LET DictionaryIndex = 0
DO WHILE CurrentContext$ <> Context$(DictionaryIndex)
  LET DictionaryIndex = DictionaryIndex + 1
LOOP
IF DictionaryIndex = EmptyRow AND DictionaryIndex < DictionarySize THEN
  LET Alternatives$(EmptyRow) = ""
  LET EmptyRow = DictionaryIndex + 1
END IF
RETURN

LoadData:
OPEN DictionaryFile$ FOR APPEND AS #1
CLOSE #1
OPEN DictionaryFile$ FOR INPUT AS #1
DO WHILE EmptyRow < DictionarySize AND NOT EOF(1)
  LINE INPUT #1, Context$(EmptyRow)
  LINE INPUT #1, Alternatives$(EmptyRow)
  LET EmptyRow = EmptyRow + 1
LOOP
CLOSE #1
RETURN

StoreData:
OPEN DictionaryFile$ FOR OUTPUT AS #1
FOR DictionaryIndex = 0 TO EmptyRow - 1
  PRINT #1, Context$(DictionaryIndex)
  PRINT #1, Alternatives$(DictionaryIndex)
NEXT
CLOSE #1
RETURN
Note that to begin with, this chatterbot knows nothing and therefore says nothing. However, with a little simple conversation it will, like a parrot, begin to reply as it starts to find responses appropriate to the immediately preceding sentence.
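Since later Windows releases no longer ship a QBASIC interpreter, here is a rough, unofficial Python sketch of the same idea (my own translation, not Derek's program): it records which characters have been seen to follow each six-character context, and generates replies by replaying what it has learned, character by character, until it reaches an unknown context or the end-of-response marker.

```python
import random

CONTEXT_LENGTH = 6   # as in the QBASIC version: bigger is more grammatical but slower learning
END = chr(180)       # end-of-response marker, CHR$(180) in the original

def memorise(memory, context, text):
    """Record, for each context, the characters seen to follow it."""
    for ch in text + END:
        memory.setdefault(context, set()).add(ch)
        context = (context + ch)[-CONTEXT_LENGTH:]
    return context

def generate(memory, context):
    """Replay remembered characters until an unknown context or END is reached."""
    reply = []
    while True:
        alternatives = memory.get(context)
        if not alternatives:
            break                          # nothing learned for this context yet
        ch = random.choice(sorted(alternatives))
        context = (context + ch)[-CONTEXT_LENGTH:]
        if ch == END:
            break
        reply.append(ch)
    return "".join(reply), context

# Teach it "hello" twice; at first it says nothing, then it echoes.
memory, context = {}, END * CONTEXT_LENGTH
context = memorise(memory, context, "hello")
reply, context = generate(memory, context)   # reply == "" (knows nothing yet)
context = memorise(memory, context, "hello")
reply, context = generate(memory, context)   # reply == "hello"
```

A chat loop in the style of the original Converse subroutine would simply alternate: read a line, call memorise, then print the result of generate. This sketch omits the original's dictionary-size cap and .MEM persistence file.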
WikiChat learning sessions
The following sessions show how to teach WikiChat some simple conversation. Note that sessions are started by running the WikiChat program and ended by pressing the return key without giving a response to the computer. They show WikiChat being taught a basic conversational pattern in English.
(Session 1)
Human: hello there
Computer:
Human: hello
Computer:
Human: how are you
Computer:
Human: fine thanks how are you
Computer: fine thanks how are you
Human: fine thanks
Computer:
Human:

(Session 2)
Human: hello
Computer: how are you
Human: fine thanks how are you
Computer: fine thanks
Human: glad to hear it
Computer:
Human:

(Session 3)
Human: hello
Computer: how are you
Human: fine thanks
Computer: glad to hear it
Human:
The responses will be stored for the next session, so there is no need to reteach them. Note, however, that while this example chatterbot will create "original" sentences after a little teaching, it has a very limited capacity and works best with short sentences.
The sections between the lines above used to form part of the article. They all belong together. The example session makes no sense without the code it refers to. -- Derek Ross | Talk 16:12, 20 November 2005 (UTC)
Title
Shouldn't this be at Chatbot, since that is the most common name? -- Visviva 11:09, 18 November 2006 (UTC)
- I'm on the Robitron e-mail discussion list where Loebner Prize Contest entrants and Loebner himself talk about these things. There, both "chatbot" and "chatterbot" are used, so I don't see one term as being clearly dominant among people who make and use them. (Where do you see "chatbot" as being most common?) I have no strong preference myself, either. "Chat" implies conversation, while "chatter" is both humorous and slightly negative because it implies meaningless talk. (In my opinion most such programs really are meaningless in what they say, so it's a valid criticism.) So, either one works. Even if the lead title changes, both names should be preserved so that they redirect to the same article; how did you do that? --Kris Schnee 19:12, 18 November 2006 (UTC)
- I also vote for chatbot, as it is the first and most commonly used of the names. Comparative use of the phrases on search engines seems to bear this out (for example: http://writerresponsetheory.org/wordpress/2006/01/15/what-is-a-chatbot-er-chatterbot/ ). Also, the phrase chatterbot and its promotion seems to have some underlying connection to Mauldin’s commercial chatbot (er, chatterbot) ventures. 66.82.9.110 00:09, 1 August 2007 (UTC)
- I disagree, Michael Mauldin (founder of Lycos) invented the word "Chatterbot" to describe natural language programs. Chatbot doesn't seem to have a specific origin nor can I find (and this is a very quick Usenet archive search) a mention of the word 'Chatbot' before the use of the word 'Chatterbot'. Perhaps we need a line in the top part of the page like "all too often shortened to Chatbot" 193.128.2.2 09:52, 1 August 2007 (UTC)
- That's exactly my problem with it... the phrase chatterbot is associated with Mauldin's commercial ventures and there seems to be a consistent push to market the term chatterbot that isn't backed up by its usage. In fact, as I pointed out above, by far, most people use the term chatbot. Also, the constant inserting of Mauldin's name in Wikipedia (for example, in his many-times-recreated and then deleted-for-irrelevance Wikipedia biography, a version of which you just linked to again, and other now editor-deleted, self-promotional Wiki biographies of Mauldin's company and company employees), consistently followed by some variation on the terms "Founder of Lycos" or "Creator of the Verbot", is pretty embarrassing. I hope he's not involved with it. As for inserting the phrase "all too often shortened to Chatbot," I'd just like to point out that on Wikipedia it's considered bad form to edit articles that are about yourself or people you have a close personal association with, or that involve a company you work or worked for or are/were associated with, even if they are the "Founder of Lycos" and "Creator of the first Verbot." By the way, is the rumour true that you get a dollar every time you say one of those phrases or get it inserted on the web? Because that would explain a lot. 66.82.9.77 10:57, 1 August 2007 (UTC)
- Firstly, I am not Michael Mauldin (which I suspect you think I am). One of the things that worries me is that this part of the discussion appears to be becoming about the use of Wikipedia for self-promotion -- something that I suspect you and I agree 100% on. I only fixed the wikilink following your message as I clicked through it and realised it was linking to the wrong person with the same name. I have never added any substantial content to the page (as you can see from my static IP) for exactly the Wikipedia form reasons you have stated. I just prefer the word Chatterbot, as that was what I called it when I released my first one as DOS Freeware eleven years ago. 193.128.2.2 11:49, 1 August 2007 (UTC)
Malicious Chatterbots section of the page
I don't have the knowledge to contribute to this section (wish I did), but it doesn't seem to have much authority to it: no cites of statistics or links to articles, so it comes across as too anecdotal to be of any use. Especially the part that says "as well as on Gay.com chatrooms": why is that reference somehow more notable than 'bots that appear on any of a thousand other forums? dawno 05:21, 18 June 2007 (UTC)
Relevance of paragraph on the philosophy of AI within article & notable names in the field
I think the paragraph about Malish should be removed. It doesn't apply specifically to chatterbots. The following paragraph (discussing Blockhead and the Chinese Room) belongs in Philosophy of artificial intelligence.--CharlesGillingham 10:14, 26 June 2007 (UTC)
- I absolutely agree with you, it smells of self promotion. I've removed the Malish bit but will leave the other move up to someone with expertise in that field. 66.82.9.80 21:26, 31 July 2007 (UTC)
- I've now corrected some of the language and accuracy within these paragraphs in light of the recent edits, and also removed some misconceptions. I tend to agree that the sections discussing the philosophical arguments within AI really belong in Philosophy of artificial intelligence and not here, as they are not merely limited to chatterbots. Also, Malish's work in the field seems to be more centred on human decision-making, rather than AI specifically (referenced here in a paper by the UK MoD presented before a US Department of Defense conference)[1]. Although, I highly doubt that the notable names added here by various anonymous users (Turing, Searle, Malish, Block) would seek, or even require, any "self promotion". It seems to me to be much more a case of over-zealous editing by enthusiastic followers of their respective works.
- Finally, I removed the quote claiming that Jabberwacky is "capable of producing new and unique responses". Jabberwacky, in fact, can only repeat sentences that have been previously input by other users. This was probably an earlier reference to "Kyle", which is actually one of very few programs that can achieve this (and that was probably the rationale for its original inclusion here). 79.74.1.97 16:51, 1 August 2007 (UTC)
- Just comparing the two versions of the article: good rewording. Many thanks. 193.128.2.2 08:57, 2 August 2007 (UTC)
I've just come into the Wiki business (wockham, for William of Ockham), and apologise if I've got anything wrong in respect of how to use the Wiki.
I changed the section on AI research to try to make it reflect more faithfully how things really stand in the research world, for example:
1. AIML is not a programming language, but a markup language, specifying patterns and responses, not algorithms. And ALICE can't really be considered an AI system, because (as both the other content of this section and the initial section point out), it works purely by very simple pattern-matching, with nothing that can be called "reasoning" and hardly any use even of dynamic memory.
2. Jabberwacky can't properly be described as "closer to strong AI" or even really as "tackling the problem of natural language", because it doesn't actually make any attempt to understand what's being said. It is designed to score well as an imposter - as something that can pass as intelligent - rather than even attempting any genuine grasp or processing of the information conveyed in the conversation. It can give the impression of more "intelligence" than other chatbots, sure, because it does do a rudimentary kind of learning, but again, it seems very misleading to suggest that this really has anything significant to do with natural language research.
3. The previous version suggested that it's the failure of chatbots as language simulators that has "led some software developers to focus more on ... information retrieval". But this seems odd, as though such developers were desperate to find a use for chatbots, rather than (more plausibly) trying to find a way to solve an information retrieval problem. My version maintained the point that chatbots have proved of use in information retrieval (as also in help systems), but deliberately avoided any speculation about how those researchers might have come to have such interests.
4. I made substantial changes to the paragraph that said: "A common rebuttal often used within the AI community against criticism of such approaches asks, 'How do we know that humans don't also just follow some cleverly devised rules?' (in the way that Chatterbots do). Two famous examples of this line of argument against the rationale for the basis of the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument." Here are my reasons:
(a) The argument that chatbots are moderately convincing, and therefore perhaps humans converse in the same way, is unlikely to be put forward by anyone "within the AI community". AI researchers are aiming to achieve some sort of genuinely intelligent information processing, and they are well aware of the serious difficulty of doing so. Only chatterbot enthusiasts are likely to come up with this argument, and most of them are engaged on a quite different task (see 2 above).
(b) The argument is anyway very weak, and I don't think it's fair to attack my rebuttal of it as just expressing a personal point of view. Maybe it could be put better, but the point I was making is that even an everyday conversation - for example, about what to wear or about football - requires some logical connection between the various sentences (e.g. what shirt will go with what skirt or trousers, or how the placement of one player in the team will have implications for other positions - e.g. that the same player can't be in more than one position). Now it is just obvious that this sort of thing is typical of human conversation, and equally obvious that chatbots (at any rate in their currently usual form) cannot handle such logical connections. So if it's worth putting the argument in the article, then it's also worth putting this obvious rebuttal of it (though again, it could no doubt be reworded).
(c) The stuff about Block and Searle was inaccurate. It suggested, for example, that John Searle's Chinese Room argument was "an example of this line of argument" which it isn't at all. Searle isn't arguing that human conversation is like chatterbots; on the contrary. But nor is he suggesting that AI systems are as crude as chatterbots: if he were, then nobody would take his argument seriously. What he's saying is that even if a computer system could achieve a logically coherent conversation (i.e. even if ambitious AI researchers could succeed), that still wouldn't give genuine semantic content to what the system says. All this really belongs in the section on Philosophy of AI. The most that could be said here (and it could be added) is that chatbots (arguably) provide some evidence against the usefulness of the Turing Test. If even a pattern-match-response chatbot can fool a human into thinking that it's intelligent, then obviously the ability to fool a human isn't any good as a criterion of intelligence.
Wockham 21:30, 31 August 2007 (UTC)
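On Wockham's point 1 above: for readers unfamiliar with AIML, a minimal example (sketched from memory, not taken from the actual ALICE category set) may help show why it is better described as markup than as a programming language. Each category simply declares a pattern and a canned response, with no algorithm:

```xml
<aiml version="1.0.1">
  <category>
    <pattern>HOW ARE YOU</pattern>
    <template>Fine thanks, how are you?</template>
  </category>
  <category>
    <pattern>I AM *</pattern>
    <template>Why are you <star/>?</template>
  </category>
</aiml>
```

The * wildcard and the star element echo back matched input; everything beyond pattern-matching and substitution is supplied by the interpreter, not the markup.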
- I know you mean for the best, but you can't jump into an established article on Wikipedia and completely rewrite a large section of it without any consensus from the other long-time editors. You don't have any sourcing for a lot of your claims and a lot of it is pure POV (examples: "despite the "hype" that they generate in some media" and "But the answer is clear"...). In fact, you've undercut most of the main parts of the article with sentences beginning with "But..." I know you think the article is inaccurate, or wrong in relation to "how things really stand in the research world", but academia isn't the only user of Wikipedia, or chatterbots, for that matter. There are other views on the subject that the article is trying to balance, and every opposing side is convinced the other is wrong. Also, the link to the chatterbot Elizabeth and its accompanying long, promotional-sounding paragraph is a particularly egregious act; a quick glance at the edit history would show that many much more famous and influential bots have been ruthlessly removed from the article to prevent it from bloating uncontrollably. Again, I know you didn't mean it in bad faith, but such links generally get editors reported for SPAM and blocked from further editing. We absolutely don't link to such bots here; there is a separate article for that. 72.82.48.16 22:15, 31 August 2007 (UTC)
OK, thanks very much for this. I've got rid of the "despite the hype" and "But the answer is clear" stuff, and also the link to Elizabeth (before reading your note, in fact). I would hope, however, that the references to "help systems" and the potential of chatbots in education would be worthy of consensus (even if references to examples violate protocol).
In a section called "Chatterbots in Modern AI", and starting "Most modern AI research", I should have thought it important to reflect what is actually happening in the research world; that was why I confined my edits to that.
Regarding claims that are "unsourced", I honestly can't see that what I put is any worse than what was there before. What claims do you think need sourcing, that currently aren't?
Wockham 22:31, 31 August 2007 (UTC)