Talk:Artificial intelligence/Archive 2
Clean up
The page says "answers to questions by entered by the visitor" (it has an extra 'by'), but when you go to edit it, the extra 'by' is not there. Numerous requests for deletion/move of sections:
- Splitting AI into neat groups is a futile exercise. For example, Bayesian networks is a very general term and can include neural networks, Kalman filters and many other models. Evolutionary methods and fuzzy systems are certainly not 'connectionist' and iterative learning is not a defining characteristic of any of those categories. Delete categories, provide alternate format? I could try and write one up. User:Olethros
- It's not normal IMHO to find an 'AI in the UK' section on this page. I haven't erased it because I'm new here, but I guess it won't interest most users. (unknown user)
- The "Machines displaying some degree of intelligence" section is silly.... (see "Delete setcion?")
- The section I marked "Cynicism" is NOT TRUE. These are subsets and applications of modern AI and soft computing
- The section I marked "Spinoffs" relates to ancient symbolic AI and should rather be moved to 'AI history'
- The section "Modern AI" sounds like terminology which opposes 'classical AI'. Instead it discusses weak symbolic (classical) chatterbot AI. This paragraph should be moved to 'chatterbot'
- Links should be moved to a 'List of ? related work/organizations/research institutions/implementations'
- If there are no objections I will make these changes. moxon 17 October 2005
The italicised paragraphs still need to be reworked/redistributed. BrokenSegue, thanks for the help so far. I hope the changes so far are agreed on. I tried not to delete any accurate info, only moved it to more appropriate locations. --moxon 00:45, 21 October 2005 (UTC)
- moxon, watch out for links to redirects. For example you linked to "sci-fi", which is an article about the word. It's better to use the more formal science fiction. Also, when italicizing use ''italic text'' and not the html tags (it's just wiki's style). We certainly have lost a lot of content. What locations have you been moving this information to? This article could stand to be longer IMHO. Broken S 01:22, 21 October 2005 (UTC)
- The largest part has been moved to History of artificial intelligence. One paragraph was moved to chatterbot since this is a rather insignificant part of AI. The philosophical issues have been summed up in the corresponding subpage. Most of the rest is included in linked lists. This is all also reachable from the portal. --moxon 01:55, 21 October 2005 (UTC)
- Moving content to sub-articles is all fine and good, but we must follow Wikipedia:Summary style when doing so (the intro for history of AI should be more substantive, giving more of an overview of the history). I agree the chatterbot stuff was ripe for moving though. The philosophy article is quite short - perhaps I'll work on buffing that up. Broken S 02:18, 21 October 2005 (UTC)
- Had , and I think it's an alien word for a lot of people, so I made it a reference to Wiktionary. --AzaToth talk 18:49, 15 November 2005 (UTC)
IMHO not AI
I do not agree with the claims made in "Expectations of AI". I am for strong AI, and IMHO this section is not objective. Since the Turing test we have all "realised" that there is a difference between intelligence and human intelligence. AI world domination might not be at hand, but many agree with Warwick in concept. Does AI still have a bad name? Only in fiction!
Further "AI languages & Programming Styles" discusses "if-then" statement and randomizor. I would rather consider this "mimicing intelligent behavior", certainly no learning or adaptation. Since some might disagree with me on this, I have left these sections in the article for now... --moxon 03:46, 21 October 2005 (UTC)
replace bozo
I was about to edit a previous addition when it was reverted. I will reword and reinsert it. Let's see if it sticks this time... --moxon 22:36, 27 October 2005 (UTC)
vandalism
To 198.104.64.238, please undo your vandalism. Please stop adding nonsense to Wikipedia. It is considered vandalism. If you would like to experiment, use the sandbox. Thank you.
- Save yourself the trouble of asking a vandal to undo his work and rather just revert it yourself. --moxon 19:10, 31 October 2005 (UTC)
Categorizing
Olethros, I agree that neat groups are futile and the phrase "CI involves iterative learning of connectionist systems" might be somewhat limiting. Maybe it should be preceded with "generally". On categorization: Scruffy vs. Neat AI is a classical textbook classification, which I think distinguishes 2 very distinct approaches, but if you have another suggestion, please put it up for discussion.
Why would you disagree with the statement that a 'fuzzy system' is 'connectionist'? Fuzzification, inference and defuzzification can all be represented through a set of simple, interconnected, parameterised feed-forward computational units...
- Many things can be represented as a connectionist system. To me, it seems more similar to a probabilistic system though - a sort of informal Bayesian network. See below.--Olethros 19:49, 21 December 2005 (UTC)
Which of the learning methods under CI do you not consider iterative? Even one-pass learning is still a repetitive process of stepping through the training data, increasing accuracy with every step. --moxon 19:10, 31 October 2005 (UTC)
- I'd like to take neural networks as an example. The process is an optimisation process, with a cost function that is defined over all the data. Usually the process of optimisation is iterative, and a lot of the time there is incremental optimisation, by optimising at each single example (this is usually called stochastic optimisation - see stochastic gradient descent methods - and is frequently used in online learning). However, the formal procedure is to look at all the examples in order to find out what the cost function is - and for certain types of systems it is possible to find a solution analytically (i.e. for RBF networks, or any other type of generalised linear system). Iterative methods usually involve iterative optimisation or stochastic approximation - or both. --Olethros 19:49, 21 December 2005 (UTC)
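To make the distinction concrete, here is a minimal Python sketch (all names and parameters are illustrative) of the same linear-in-parameters model fitted both ways: once analytically, as is possible for generalised linear systems, and once incrementally by stochastic gradient descent on single examples:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 examples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Batch/analytic route: the squared-error cost is defined over all the
# data, and for a linear-in-parameters model it has a closed-form minimum.
w_batch = np.linalg.lstsq(X, y, rcond=None)[0]

# Stochastic gradient descent: incremental optimisation at each single
# example, as in online learning.
w_sgd = np.zeros(3)
lr = 0.01
for epoch in range(20):                  # each pass over the data is one iteration
    for x_i, y_i in zip(X, y):
        err = x_i @ w_sgd - y_i          # error on this one example
        w_sgd -= lr * err * x_i          # gradient step on the per-example cost

print(w_batch, w_sgd)                    # both approach true_w

Both routes minimise the same cost; the stochastic route just never looks at more than one example at a time.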
- Looks like there can be more categories [1]
I disagree with this categorisation. Expert systems have traditionally been deterministic decision-making systems and prone to breakage. Case-based reasoning is similar if I remember correctly. However it is possible now to apply standard probability theory to make optimal decisions, as long as this is tractable. I guess tractability had been a problem in the past. These probabilistic models are usually called Bayesian networks in this context. So, what is called Conventional AI in that section is not that at all. It is not symbolic either. Bayesian networks can employ any types of variables, including continuous ones, and non-observable ones.
Then, Bayesian networks are typically parameterised and are optimised through an iterative process.
Fuzzy systems had once been popular, but they hardly differ conceptually from probabilistic systems. They are nice if you want to hard-code some game logic, however. Fuzzy systems were initially successful because they allowed you to say that 'X is Medium' means 'X is definitely Medium when between 2 and 3, otherwise not definitely so... and it is definitely not Medium when under 1 or over 4' - but people in the field had for some time not realised that you can do exactly the same kind of definition of Medium using a probability distribution P(X|Medium) - what is the probability that X will take a particular value given that it's Medium - and this is the reason why there is very little interest these days in fuzzy systems. And of course, it is possible to represent a fuzzy system as a connectionist model too, but that's not an inherent property.
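For what it's worth, here is a toy Python sketch of that equivalence, using the numbers from the example above (the function names are made up): the same trapezoidal notion of 'Medium', once as a fuzzy membership function and once rescaled into a legitimate density P(X|Medium):

def fuzzy_medium(x):
    # Trapezoidal membership: definitely Medium on [2, 3], definitely
    # not Medium below 1 or above 4, linear in between.
    if x <= 1 or x >= 4:
        return 0.0
    if 2 <= x <= 3:
        return 1.0
    return x - 1 if x < 2 else 4 - x

def p_x_given_medium(x):
    # The same trapezoid rescaled to integrate to one: its area is
    # (3 + 1) / 2 = 2, so dividing by 2 gives a proper density.
    return fuzzy_medium(x) / 2.0

for x in (0.5, 1.5, 2.5, 3.5, 4.5):
    print(x, fuzzy_medium(x), p_x_given_medium(x))

The two functions carry the same information about 'Medium'; only the normalisation, and hence the interpretation, differs.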
Now, in my view, the term 'symbolic' is not very valuable these days, as it lacks formal meaning. If you want to say 'discrete variable' you can go ahead and say so. But thinking along the symbolic/non-symbolic (and fuzzy, which I guess people would consider somewhere in-between) lines is a historical accident that only creates confusion. What is interesting computationally is:
a) The type of variables whose relationships are modelled
b) The aspects of the variable that are modelled: For example in pure symbolic AI a variable is either realised or it is not, while in probabilistic models events always have probabilities
c) The goal of the model (pattern recognition, decision making, control, planning, optimisation, estimation ...others?)
However, for the casual reader, I think it is easier and more useful to make distinctions simply by application, then categorising the models in each case according to the approach used.
Clearly, speaking about AI is not an easy task. --Olethros 19:49, 21 December 2005 (UTC)
Article is worse than it was back in 2002...
While it certainly wasn't perfect back then, compare this old version to the current dog's breakfast (no offence to your undoubted efforts, guys, but it's an incoherent mess). I wonder whether it's worth using an old version as the basis from which to proceed with further edits, and if so, from when. --Robert Merkel 05:56, 23 November 2005 (UTC)
- I agree, Robert. Ever since this one I've been trying to trim it down, but it seems to just grow out of control again. I created a portal carrying the essentials. It doesn't get editing traffic. I also suggested a few sub-articles to move non-essentials to, but it still doesn't solve the problem (e.g. the abstract of the history page is still too long). I'll look at the reference you gave and we can try trimming the page down without losing stuff. --moxon 10:11, 23 November 2005 (UTC)
- I am afraid the current version is still no better than either of the older versions referenced. And the reason is not that it is too long. The problems I see are the following:
- the first definition is tautological - AI is defined as intelligence exhibited by an artificial entity - and in fact says nothing
- sci-fi is not important enough to mention in the first paragraph
- there are too many wikilinks, which makes the text hard to read; some important links can go to the See also section and some can easily disappear (like behaviour, or economics)
- links to weak/strong artificial intelligence are dead (e.g. from Behavior based robotics), and possibly some others, due to trimming down
- Sorry for criticizing and doing nothing, but my English is not that good and AI is not my field of interest. Hope someone interested will improve it. --Tomash 15:25, 1 February 2006 (UTC)
The Book of the Machines, Butler
"Artificial intelligence (AI) is defined as intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer."
Butler tries to point out that it doesn't have to be anything elaborate; that machines already have an intelligence, a subversion of us unto them, by the simple act of us having to tend them. I remember reading it when I was having trouble with Windows - the number of hours one spends daily to keep them running, or to make them useful; our lives have become subordinate to them. But Butler's talking about spinning jennies and piston engines and even simple things like waterwheels; they have a purpose of their own which is inscrutable to us, but we have become subverted to it. Gave me chills when I was reading it during the dot-com bubble. It's the reason for the Erewhonian revolt in Butler's books as well as for the Butlerian Jihad in Dune. I sorta expected to see literary refs to AI here, or a link to a page on them maybe. Skookum1 08:54, 26 November 2005 (UTC)
Image
No offence, but a PCB in the silhouette shape of a human brain would only add confusion to what AI is. How about rather just a fictional robot? --moxon 13:08, 2 December 2005 (UTC)
Refactoring this page
Here be suggestions for wholesale refactorings of this page. This page is really quite heavy and misleading in many respects. What we want to do is to give an accurate, but not detailed, picture of AI to non-experts.
--Olethros 18:34, 21 December 2005 (UTC)
Nomenclature and taxonomy
How canonical is the division of AI methods and/or philosophies into "Conventional AI" and "Computational Intelligence", apparently introduced to the page around October 2005? This seems like a classification introduced (introduced to the field of AI, not just to Wikipedia) for polemical purposes, by advocates of a particular school of thought, and not a taxonomy that is universally accepted within the field. Mporter 23:25, 22 January 2006 (UTC)
- I personally do not find this division useful in any way. It is highly misleading. This is discussed extensively in this section of the talk pages.--Olethros 11:29, 23 January 2006 (UTC)
Using the goals and history of AI as an introduction and framework
The section could go like this:
AI has traditionally been viewed as 'making intelligent machines'. The next question is 'what constitutes intelligence'? There seem to be two ways of thinking:
- Any machine/program that can solve problems (especially those that would normally require a human to solve) is a form of AI.
- Any machine/program that learns from experience is a form of AI.
(The first case is frequently conflated with the goal of imitating human (or animal) behaviour.)
Historically, both ways of thinking have been used, IIRC, even from the early days. However, the two philosophies do not contradict each other, even though people have frequently seen them as antagonistic. In practice they are not, and the reason is quite simple:
The second definition is not complete. What we need is:
- Any machine/program that learns from experience to solve a problem (or a set of problems) is a form of AI.
To 'solve a problem' is necessary - otherwise, any alteration of a program's state given data could fit the definition, even if this alteration was not used to perform a particular task.
Now the only difference between the first and second definition is the insertion of the term 'learns from experience'. However, the degree of learning allowed in solving the problem is a design decision - it is not merely an absence or presence of learning, but a continuum of possibilities, from a 'fixed' program to a completely general learning method. As learning increases, however, tractability is reduced.
Now, as far as learning is concerned, it has generally been easier to perform inference on discrete rather than continuous variables; the development of the field seems to me (though I am not a historian of AI in any measure) to have been largely influenced by
- The fast development of the interval estimation methods in statistics, long before any complete distribution models became tractable through mathematical and computational advances.
- The emergence of a large number of ad-hoc practitioners, whose aim was to make working systems with whatever they had at their disposal. Some of them, for example, believed that they could simply hard-code an intelligent agent. Others used limited forms of learning, without much formal justification for their methods.
- A large part was played by the development of search algorithms, which seemed to offer a quick way out of the need to learn - by quickly evaluating all possible outcomes, it would be easy to pick out the best. Ahem.
These days, most of the AI fields have been unified to a large extent. Every type of system can be formalised probabilistically - this does not mean that there exists a probabilistic model corresponding to any given ad hoc AI technique, but rather that there exist close (and theoretically optimal) equivalents based on probability theory.
So, finally, my suggestion is to categorise things according to the type of system. I see the following categories:
- Expert systems (i.e. diagnosis, decision making) - These include the 'classical' deterministic expert systems; but note that the same role can be played by Bayesian networks (of which neural networks are a special case, though they are rarely formulated as such)
- Pattern matching systems - Again, we have the classical models, plus more recent ones, including a) probabilistic and b) large margin classifiers (i.e. SVM). These are not all that different from expert systems.
- Control systems - These are perhaps the most ambitious, as they must exhibit a degree of autonomy. They include game-playing machines and so on. These are also the only systems in which planning can be involved. The topics I guess are dynamic programming, stochastic optimal control and reinforcement learning.
Comments: (moxon)
- Up above you mentioned: "very little interest these days in fuzzy systems". From my engineering perspective, fuzzy systems are still very relevant for non-linear control. Neuro-fuzzy, genetic-fuzzy, fuzzy clustering etc. are used for characterization and rule extraction. Sure, probability distributions 'can' be used, but FL is meant to simplify this. Please comment.
- Wouldn't you agree that both expert systems and pattern matching systems are simply classifier systems? Alternatively, what about combinatorial optimizers (i.e. EC) for scheduling, resource assignment, theorem proving, game theory etc.?
- The neats vs. scruffies debacle is still put forward in the introductory text of many AI books, after which very little attention is given to NN, SVM, FS & EC, giving them their "conventional" nature. Even if the distinction it was trying to draw is irrelevant or wrong, these schools did (up until very recently) exist. Maybe moving away from this categorization is what makes modern AI unconventional. I think your suggestions are welcome as an introduction to this article, but the current headings, though maybe not that relevant, also have their place. I am looking forward to your edits. --moxon 06:08, 24 December 2005 (UTC)
Thanks for your replies. Let me try and answer each point:
- Agreed. So it should be made clear that FL systems are useful in practice for simplifying the design of a system, rather than being significantly different from other systems. In my opinion, the relationships between systems should be emphasised overall.
- Yes, they are both classifier systems. In a probabilistic framework (for example) they would both give you the posterior probability of some random variable. I tend to view the combinatorial optimisation as a sort of search/optimisation. Searching and planning is definitely part of AI.
- I guess in modern times the unification across different methods has been largely achieved. The current debate seems to be centered mostly on the relative benefits of different formalisms, i.e. probabilistic vs non-probabilistic approaches, analytic approximation vs sampling, estimation vs search etc. However, this debate is a matter of specific first principles rather than a matter of abstract concepts. One of the earlier revisions of this article also indicates that 'neat' AI is the kind where the system is completely designed by hand, while 'scruffy' is the kind where the system's parameters are adjusted. Is this the actual definition? Then it would limit 'neat' AI to only search.
I hope to start editing soon, but at the moment I am busy putting neural network and artificial neural network in a clearer state.
--Olethros 13:17, 7 January 2006 (UTC)
- In agreement, I also have a book showing that NN, FL and SVM are statistically all the same. In practice my lecturers are in joking disagreement about which is a special case of which. About the "neats designed by hand" question... I'd find myself in a debate for agreeing, but think of this: chess playing programs are still considered great examples of AI. All the best, --moxon 19:25, 7 January 2006 (UTC)
- Of course - after all, chess games use one of the AI cornerstones, search. There have been some attempts to use an adaptive evaluation function (I think NeuroChess used reinforcement learning with an MLP for evaluation), but I am not sure which programs are stronger these days: ones that concentrate on doing a good search, or ones that concentrate on doing a good evaluation? Which brings up another question, which I have not seen answered: what are the guarantees for search with cut-offs when the evaluation function is noisy?--Olethros 11:52, 8 January 2006 (UTC)
- Now your questions are getting hard. Seems like you're much more of a theorist than me. Sounds like the noise would give the system the unpredictable/unguaranteed behaviour more commonly seen in scruffy systems. Coming back to the previous Q: machine learning in the neat sense often involves database (rule/predicate) extension rather than parameter tuning, which also means that everything that has been learnt can easily be unlearnt. Maybe the distinction should lie somewhere here? --moxon 23:12, 9 January 2006 (UTC)
- First, a minor clarification for those following the talk: parameter adaptation systems can also have guarantees of convergence, depending on the model and the adaptation method. So, parametrised vs parameterless systems? Sure, it's one dimension. Maybe we should simply recognise that there are many such dimensions: logic vs probabilities, parametrised vs parameterless, search vs estimation, incremental vs, erm... non-incremental, ad-hoc vs derived from some principle, trying to emulate some aspect of human intelligence vs trying to solve a particular task, and so on. --Olethros 22:09, 9 January 2006 (UTC)
- True, but wouldn't you agree that many of these are generally related: parameterised/probabilistic/incremental/ad-hoc vs. parameterless/logic/derived? --moxon 23:12, 9 January 2006 (UTC)
- I wouldn't lump them into groups, because I think it is misleading. A model can be defined from a probabilistic or from a logical framework (it just changes a set of assumptions), it can be parametrised or not (changes the class of models you can use), it can be incremental or not (depends), and the search/optimisation method, estimates and update equations can be derived from either some formal framework or be ad-hoc, or both. I think a better separation would be in terms of 'points of view', i.e. the point of view of optimisation, of search, of estimation, of probability, of control, of game theory and so on. Working systems simply make use of techniques developed in these fields. If I absolutely had to divide everything into two categories, I'd have 1) principled methods 2) ad-hoc methods. However, most models used have variants that belong to either 1) or 2). For example there might be two neural network models which are almost exactly the same, apart from the fact that one changes its parameters in an ad-hoc way, and the other uses a stochastic Gauss-Newton optimisation with a mean-square error cost function for its parameter updates. Why would I choose this division? Because ad-hoc methods cannot be explained properly, as they have no theory behind them. You can only argue about them. So, for a wikibook about AI, historically important ad-hoc methods should be mentioned, and then it should be explained why they are ad hoc, and what is the most similar formal method to them, so that readers can gain a deeper understanding. --Olethros 00:37, 10 January 2006 (UTC)
About search
If AI implies learning (IMO it should), which implies optimization (of some error function), which implies search (for an optimal solution), then AI is all about search. Note that this is search during training, not search during operation (which is what chess programs do, and why I don't consider them intelligent). However, there is a problem with this statement. If a classifier learns by directly adding samples one by one to a database or a growing network, I see no search. I guess this kind of learning is not really "optimal"... Regarding the statement about planning/scheduling being search... Yes, but I consider it an independent application; it is neither classification nor control. --moxon 23:12, 9 January 2006 (UTC)
Artificial intelligence, in my opinion of course, is about two things: 1) creating a model of the world, and 2) using the model to perform a task.
Here are some standard cases:
Standard chess playing programs: The model is fixed, and you are using it to guide search. The model consists of the constraints on the possible moves, plus the evaluation of board positions. This model defines a graph, which can be searched with standard graph search techniques.
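As a minimal illustration of those 'standard graph search techniques', here is a generic Python sketch of depth-limited minimax with alpha-beta pruning. The moves and evaluate arguments stand in for the fixed model (the move constraints plus the board evaluation); they are assumptions of the sketch, not any particular engine's API:

def alphabeta(state, depth, alpha, beta, maximising, moves, evaluate):
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)           # cut-off: fall back on the evaluation function
    if maximising:
        value = float("-inf")
        for s in successors:
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:            # remaining branches cannot change the result
                break
        return value
    else:
        value = float("inf")
        for s in successors:
            value = min(value, alphabeta(s, depth - 1, alpha, beta, True, moves, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value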
K-nearest neighbour classifier: The model is simple: the label of any example x, is equal to the majority label that its k-nearest neighbours have. Thus, the model consists of three components: K, the number of neighbours, which is fixed, the distance function, which is fixed, and the data. The model is then used to classify new points.
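That three-component model translates almost directly into code; a minimal Python sketch (names illustrative):

from collections import Counter

def knn_classify(x, data, k, distance):
    # The 'model' is nothing more than the stored (example, label)
    # pairs, plus the fixed k and the fixed distance function.
    neighbours = sorted(data, key=lambda pair: distance(x, pair[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

sq_dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
data = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_classify((1, 0), data, 3, sq_dist))   # -> "a"
print(knn_classify((5, 5), data, 3, sq_dist))   # -> "b"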
Adaptive chess playing programs: The model changes with experience, and you are using it to guide search.
Stochastic adaptive control: You have a model of the environment, which you learn while you are controlling the system. You use the model to guess what will happen when various actions are taken (a planning step) so that you can take the action that is best. If the model is invertible, then you can simply solve for the optimal action. It is even possible to combine this with a model that gives the optimal action given the control input.
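A toy Python sketch of the idea, for a made-up scalar linear system x' = a*x + u + noise: the controller estimates the unknown a from observed transitions while it acts, and inverts the estimated model to pick the action:

import random

a_true = 0.8                     # unknown to the controller
x, a_est = 5.0, 0.0
sxx = sxy = 1e-6                 # running sums for a least-squares estimate of a

for t in range(50):
    u = -a_est * x               # invert the current model: want a_est*x + u = 0
    x_next = a_true * x + u + 0.01 * random.gauss(0, 1)
    sxx += x * x                 # learn the model from the observed transition:
    sxy += x * (x_next - u)      # regress (x_next - u) on x
    a_est = sxy / sxx
    x = x_next

print(a_est, x)                  # a_est approaches a_true, x is driven near 0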
So, what you said before about 'not intelligent' makes sense in this context. You want to have a system that can build a nice, compact model of the world. A classifier that simply adds examples, like the k-nearest-neighbour classifier, does not have a compact representation. It does not result in a general 'law' about the world.
Planning and scheduling can be said to fall under control theory, or the other way around. Funnily enough, many problems in control can be formulated as shortest path problems.
So, I was going to recommend this definition:
Artificial Intelligence is about systems that use models of the world in order to solve real-world problems.
Funnily enough, it is similar to a definition of engineering or anything else, really :)
--Olethros 01:06, 10 January 2006 (UTC)
- For that matter, control can also be seen as classification of which action to take, or classification as the control of a selector, and any of the techniques can be used for either. However, some solutions are better for some problems. For instance a GA on its own is good for solving a scheduling problem, but not for control; a neural net is the other way around. Probably because different types of applications give the error function different shapes. --moxon 19:50, 10 January 2006 (UTC)
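To make the GA-for-scheduling point concrete, here is a toy Python sketch (the jobs, operators and parameters are all made up for the example): a tiny genetic algorithm searching job orderings to minimise total weighted completion time:

import random

jobs = [(3, 2.0), (1, 5.0), (4, 1.0), (2, 3.0)]      # (duration, weight)

def cost(order):
    t, total = 0, 0.0
    for i in order:
        d, w = jobs[i]
        t += d
        total += w * t                                # weighted completion time
    return total

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [g for g in b if g not in head]     # keeps the result a permutation

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]           # swap two jobs

pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(20)]
for generation in range(50):
    pop.sort(key=cost)
    pop = pop[:10]                                    # selection: keep the fitter half
    while len(pop) < 20:
        child = crossover(*random.sample(pop[:5], 2))
        if random.random() < 0.3:
            mutate(child)
        pop.append(child)

best = min(pop, key=cost)
print(best, cost(best))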
Consider also the CS:Source bots. In the previous iteration the maps needed to be waypointed; there is a lot of work going on to make the bots learn a map instead, and also to have some emotional state represented. They are evolving quite quickly, but so far they are not evolving themselves (I know: this would be artificial life, not AI :-) ). The Steam forums are interesting reading, even if you are a pacifist or can't enjoy shoot-em-ups.
Wikibook Plug
I've started a serious effort to create a comprehensive AI-wikibook at [2]. It's currently in the 'planning' stage. It could use some input from people that have experience with the field. risk 17:16, 29 December 2005 (UTC)
Sentient computers
I have created the above page - for specific examples etc. Jackiespeel 19:39, 11 January 2006 (UTC)
Bioethics
Could AI be considered a bioethical topic?
- I think that depends on your definition of bioethics. Questions like "Can machines feel pain?" have an ethical consequence, but I don't know about the 'bio-' part. --moxon 15:26, 17 January 2006 (UTC)
- No (pardon the determinism). Yet, like any field of research, it has its implications. Continuous breakthroughs open the way for new applications in fields where bioethics are critical. For example, you may use AI techniques in storing genome data. Then, what will be your use of such data? This is a bioethical topic. Likewise, using Expert Systems for medical diagnostics. Would you consider it, in itself, a bioethical topic? If you are researching bioethics and came here, I believe you are wasting your time! Connection 22:46, 8 March 2006 (UTC)
Techniques and Implementation
CfC. Techniques and Implementation have no proper coverage here, nor in the AI Portal. Only some techniques are presented sporadically in History. There should be a specific Section to overview Techniques, then Implementation, with links to their respective Articles. Comments please. --Connection 23:18, 8 March 2006 (UTC)
Schools of thought
I think the schools of thought section is very misinforming and arbitrary. What the section is really doing is showing the different ways of achieving artificial intelligence - the different computational methods. But it makes no reference whatsoever to the different types of intelligence that can be achieved, that is, real Strong AI versus classical weak AI, which I believe is a far more important topic than the insignificant and somewhat arbitrary distinction of how to computationally achieve certain AI objectives. I think the schools of thought section should introduce the difference between classical Weak AI fields, such as pattern recognition, chatbots etc., and strong AI, which attempts to achieve real human-like intelligence. The current schools of thought section can be moved to a new article called AI methodology. --Jake11 18:22, 15 March 2006 (UTC)
- Artificial intelligence is primarily seen as a science, a branch of computer science, creating systems with adaptive/intelligent behavior, as any textbook on the subject will show. In the past there have been two approaches to achieving this goal, as stated in this section. The definition of true intelligence, and the types of intelligence that can be achieved, is a philosophical matter of opinion which gave rise to the ongoing strong/weak AI debate, as you can see on its discussion page. The progress that has been made in the science of AI is independent of how it is typecast. --moxon 17:11, 16 March 2006 (UTC)
- Nevertheless, that's no reason to act as if a high-level overview of AI includes JUST the schools of thought on how to achieve AI. I think the philosophical aspect is just as important, as it really is not so philosophical that it cannot relate to the achievement of AI. I think the first section after the intro should be a "Road map to AI" which introduces ALL aspects of AI from all perspectives in a hierarchical list fashion. --Jake11 18:17, 16 March 2006 (UTC)
- No acting regarding overviews intended. Simply, in the science of constructing AI there have traditionally been these 2 schools of thought. When constructing such a "road map", please try to avoid POV. By the way, strong AI is not only about human-level intelligence, but rather about the computability of what we label as true intelligence, at any level... Why would you remove the link to Cognitive Robotics? --moxon 12:42, 17 March 2006 (UTC)
Comparison of predictions and reality...
Shouldn't the page contain at least some discussion of the gap between some of the more breathless predictions about AI (both historical and current) and the more modest, but still very useful reality? If so, any suggestions as to how best to do so? --Robert Merkel 02:23, 26 April 2006 (UTC)
- Singularity Timetable may be a tad too optimistic:
- 2006 -- True AI
- 2007 -- AI Landrush
- 2009 -- Human-Level AI
- 2011 -- Cybernetic Economy
- 2012 -- Superintelligent AI
- 2012 -- Joint Stewardship of Earth
- 2012 -- Technological Singularity
- I agree in principle, but isn't all brainstorming about the potential of a new technology just fantasy until the hard science is accomplished? For example, should every automobile article be hounded for the missing "flying cars" that were so prevalent in the 1950s? --qswitch426 16:45, 26 April 2006 (UTC)
- It's a fair point, though. When people think of "cars", they don't think of flying cars (well, at least I don't). When they think of AI, though, they probably think of HAL. As most AI work is on much more mundane topics, I'd say there are few disciplines with such a large gap between perception and reality. Not sure how I'd change the article, though... — Asbestos | Talk (RFC) 17:46, 26 April 2006 (UTC)
Another good definition, problems and expectations of AI
Artificial Intelligence should be considered a field for amplifying human intelligence. In the present century we have advanced computers which can perform complex calculations in fractions of a second, but these machines still cannot be programmed to behave and train like humans, because they lack the speciality that humans have: the massive parallelism of neurons. Humans have on the order of 10^10 neurons, each interconnected with thousands of others. These massed neurons are not very quick compared to present-generation nanotechnologies, but they are much better at being trained when it comes to learning massive and highly varying things. The adjustment and re-weighting of the weights of neural networks is the biggest step in the learning process.
Writing a good algorithm in one field is a specialized way of creating something termed "AI", but this is very constrained; the specialization of AI is now unconstrained. Humans, by contrast, are more generalized: they can adapt and create their own algorithms to master any task. Therefore working with a few specialized algorithms cannot be considered true AI; only "learning" that focuses on the bigger concept of generalization, in order to deliver a truly adaptable algorithm, can be considered a potential candidate for the next generation of AI. The knowledge of how to think in a logical way, which differentiates us from machines, is also a major concern in delivering true AI. When we talk about neurons, we are talking about the ability to generate actions based upon sensory data obtained from the human senses. Thus all the memory that we humans possess is based upon the weights between these neurons; we learn by re-weighting them as we go through our experiences. Learning the word "tree" and associating it with a picture and other useful information rests on the same concept of re-weighting the weights of interconnected neurons.
For newbies who want a real picture of the problems with AI, here is a puzzle: try to understand the problem of natural language understanding. Humans communicate in few words, yet when we try to express what those words mean, it may take a complete Wikipedia to explain them. Take the statement "When you encounter a problem, search Wikipedia". How can you train a machine or computer to do this? A present-generation computer can't understand it, as each word means a great deal, and the synergetic effect of the whole string is an action which results in many further actions - a far more complex process. Therefore the knowledge behind every such statement must be made readily understandable in order to deliver the part of human intelligence called "natural language understanding". Now think about the other intelligent features of human beings.
Compared to computers, we humans are more trainable because we already possess enough knowledge and can easily link up new knowledge and learn more, by again re-weighting the weights of already interconnected, pre-weighted neurons specialized for the task at hand. The arrangement of neurons and the process of organized addition of trainable knowledge is a very hard thing to implement, and this is the major road-block in the creation of true AI. But I think that with the efforts of knowledge-base projects (which are working on collecting an organized set of human knowledge) and fast-growing nanotechnology (for replicating a massive neural network), we will be able to achieve a true AI agent in time to come.
--Singhashish37 06:47, 1 June 2006 (UTC)
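To make the 're-weighting' idea above concrete, here is a minimal Python sketch of the classic perceptron update rule, where each training example nudges the connection weights (purely illustrative, and vastly simpler than anything biological):

def perceptron_train(samples, n_features, lr=0.1, epochs=10):
    # samples: list of (inputs, target) pairs with target 0 or 1
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                        # re-weight only on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# usage: learn the AND function from examples
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(perceptron_train(samples, 2))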
Why is there no mention of "On Intelligence"?
I mean the one written by Jeff Hawkins. I think it's one of the better books on this topic. Links: http://onintelligence.org/ http://numenta.com/
Sorry that I cannot really contribute - busy at work and not a native English speaker.
Clippy is a questionable example of 'State of the Art' AI
Clippy is a questionable example of 'State of the Art' AI ~
- Agreed, and so is Deep Blue. It doesn't learn. And why is there no article on Mycin? --moxon 09:18, 23 June 2006 (UTC)
Is 20q a good example of "state of the art AI"?
http://www.20q.net Yes, it is a toy, and it has recently undergone some major marketing/licensing "upgrades", but I personally think that this web site is a great example of AI. The website is the front end for a learning program based on a neural network. I'm a wiki newb, so I'll wait to see if anyone comments here, and if no one has anything negative to add I'll add it in. Vespine 03:47, 23 June 2006 (UTC)
An idea for new model of AI
I will write brief points due to lack of time (sorry). If we are heading for intelligent machines, then we must enable them:
- To take decisions on their own, and to learn from failures (they must make mistakes in order to learn from failures).
- To remember the results gained from previous independent searches (quests) into the reasons for failures, for future reference.
- To explore in more dimensions than digits can.
For the results needed in the above-mentioned lines, we need a machine model other than the conventional machines used nowadays. Computational machines are the greatest obstruction to the development of any mature model of AI.
There must be a different model of machine, one that can simulate human psychology in order to deduce results and also be able to make mistakes.
Human psychology is very far away from digits, numbers and text messages. It works with stimuli, and the most important stimulus it uses is the image. The image is the basis of human thinking. When we say a word, say "mango", we immediately recall an image in our mind rather than a text message. Similarly, a series of images comes to mind when we imagine a word like "emergency". Whenever one talks about AI, one imagines a series of images of self-deciding machines, whatever those images may be. Hence there should also be diversity in the machines if they are to simulate exact human psychology. The more we go after computer-controlled AI, the more we will be distracted from real AI.
Diversity, mistakes, choice, faults, learning, quest, research, evaluation, decision and emotions can only be simulated in the best possible ways using images, colors and ..... etc.
Ahmad thd 14:21, 27 June 2006 (UTC) 27-06-2006 Monday 15:20 contact : ahmadthd@gmail.com
The Intelligence Continuum
--Hisands 20:57, 2 July 2006 (UTC)
Intelligence
There has been a lot of interest in questions like: What is intelligence? Who (or what) is intelligent? When, why, and how did intelligence arise? While reading about these issues in the years since the Turing test, I thought about the mental process of discrimination. It occurred to me that the elementary unit of human intelligence is perhaps a single act of discrimination. What we call intelligence may be thought of as a complex of lengthy chains (including parallel segments) of discriminations, i.e. stimulus-response chains at the neural level.
This notion may be extended to the lowest level of physical particles. Particles are mutually stimulated by other particles or energy sources and discriminate between themselves. Or, a particle discriminates itself from others and responds according to the laws (known or unknown) of particle physics. As matter and energy interact - chains of discriminations - more complex entities evolve, including organic compounds, cellular organisms, mutations, and ultimately vegetable and animal life. Humans developed the abilities of reflection and self-awareness and named the collection of their mental abilities intelligence. The continuum of intelligence may be divided into Inorganic and Organic, which in turn may also be subdivided. In the current era, Artificial Intelligence may be viewed as an advanced form of Inorganic Intelligence orchestrated by human intelligence but largely executed by inorganic devices.
If for no other reason than that of convenience and continuity, we could (and probably should) think of intelligence as that which enables all matter and energy to distinguish itself from all else. At the most complex level of discrimination, we find the quality commonly called human intelligence. This does not prevent recognition of less complex levels of intelligence - even at the lowest level of matter and energy. (Matter and energy are equivalent manifestations of phenomena, inextricably bound, as far as we know.)
So. Q. What is intelligence? A. One or more acts of discrimination. Q. Who (or what) is intelligent? A. Every material or energetic entity. Q. When, why and how did intelligence arise? A. When: at the beginning of the existence of matter/energy. Why and how? We don't know.
Why Artificial?
"Artificial intelligence (abbreviated AI, also some times called Synthetic Intelligence) is defined as intelligence exhibited by an artificial entity. Such a system is generally assumed to be a computer."
- Why is a computer an "artificial" entity? Although created ("synthesized") by human beings, a computer is surely as real an entity as any other. Computers are also described as synthetic because they have been created. But human beings were also created by genes, by the evolutionary process, etc.
- I don't understand the use of the word "artificial" at all.
- Purely a semantic argument, do you perhaps not realise that a common definition of the word "artificial" is in fact "man-made"? http://dictionary.reference.com/browse/artificial = adj. 1.a. Made by humans; produced rather than natural. Vespine 04:56, 7 July 2006 (UTC)
- Semantic, yes - but I am trying to get at something deeper.
- "1.a. Made by humans; produced rather than natural."
- A honeycomb is produced by bees, yet is considered something "natural". A computer is created by humans but is not considered natural. Why is human activity seen to be outside of nature? Don't we naturally create these items in the way that apes create simple tools?
- The word artificial has connotations beyond the "man-made" definition that I find unhelpful and misleading to the general public. For example, a common perception is that a computer could never achieve consciousness because it is "artificial". But if every neuron and inter-connection in a human brain were replaced by equivalent man-made technology, there would be no functional difference. The AI would believe that what it experienced was consciousness, as we do. -195.93.21.133 16:27, 7 July 2006 (UTC)
- Well, I suppose the argument could be that we do not as yet have the capability to fully understand the human brain, let alone re-create 'human-made' equivalents, so this really is a moot point. Human activity is seen to be outside of nature perhaps because without humans, these things would never take place, or anything even close. If you try to argue that a bee making a honeycomb or a monkey using a rock as a hammer is analogous to inventing microprocessors and microwave communication, you should be writing articles about biology and philosophy, not about Artificial intelligence ;). I'm not saying the 'arguments' presented here are completely invalid, they are of course part of the ongoing discovery, but the fact that they are arguments, by definition, disqualifies them from being 'encyclopaedic'. The Philosophy of artificial intelligence already deals with the 'debates' surrounding most of this field. The term Artificial Intelligence may be slightly misleading to the average layman, but it's by far the most widely used term; calling it something more esoteric would be more misleading, not less. Vespine 00:49, 17 July 2006 (UTC)
- Synthetic Intelligence is a better term, although I believe human intelligence is also synthetic intelligence.
- "Man-made Computer Intelligence" may be a bit long-winded, but at least it gets away from the illusory idea of artificiality... --195.93.21.133 00:02, 7 July 2006 (UTC)
- I use the phrase "Algorithmic Intelligence". It retains "AI" and places emphasis on the processes exhibiting intelligent behavior. --qswitch426 11:16, 7 July 2006 (UTC)
Although there is no clear definition of AI (not even of intelligence), it can be described as the attempt to build machines that think and act like humans, that are able to learn and to use their knowledge to solve problems on their own.
A 'by-product' of the intensive studies of the human brain by AI researchers is a far better understanding of how it works.
The human brain consists of 10 to 100 billion neurons, each of which is connected to between 10 and 10,000 others through synapses. A single brain cell is slow (compared to a microprocessor) and has a very simple function: it builds the sum of its inputs and issues an output if that sum exceeds a certain value. Through its highly parallel mode of operation, however, the human brain achieves a performance that has not yet been reached by computers; and even at the current speed of development in the field, we still have about twenty years until the first supercomputers are of equal power.
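The sum-and-threshold behaviour just described fits in a few lines of Python (a McCulloch-Pitts style unit; illustrative only, since real neurons are far messier):

def neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0      # fires only above the threshold

print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))   # 0.9 > 0.8 -> fires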
In the meantime, a number of different approaches are being tried to build models of the brain, with different levels of success.
The only test for intelligence there is, is the Turing Test. A thinking machine has yet to be built.