Talk:Artificial neural network
Neural Networks
Someone changed "compute a true gradient" to "compute the true gradient". Why? Is there only one true gradient? I don't think so. The former text was correct.
What about radial basis networks? --FleaPlus 16:45, 14 Apr 2004 (UTC)
- In my opinion, the definition of neural network in Wikipedia has gotten way too broad. In the 1980s, the term "neural networks" referred to a whole field of study, where people used neuron-like elements to perform computations. About 1987-1989, researchers realized that most of these computations were statistical. The field was reborn as "machine learning", and neural networks became the label for a particular machine learning algorithm/model, namely the multi-layer perceptron and its variants. The article usage is thus about 20 years out of date.
Therefore, the following algorithms are not neural networks and do not belong in this article:
- Radial basis functions
- Support vector machines
- Boltzmann machines
- Committee of machines
- Self-organizing map (gray area: this algorithm was very popular in the 1980s, and is expressed as "neurons")
- Instantaneously trained neural networks (quite obscure and idiosyncratic, not a well-known algorithm)
I would like to move these out of this article and put them into the "Approaches and algorithms" list under supervised learning. -- hike395 05:52, 15 Apr 2004 (UTC)
- You sound like you know what you're talking about. I'd say do it. It's a rather large article regardless... :-) - Omegatron 19:17, Dec 6, 2004 (UTC)
- I would disagree with that. While each of the algorithm categories IMO deserves its own page, I'd say that by standard conventions they are all types of neural nets. Personally, I'd like to see this page split into a number of sub-pages (covering static feed-forward, temporal (dynamic) nets, SVMs, competitive, etc.) while having the main page as a generic overview of the core principles and applications of the various algorithms.
- HAYKIN, S., Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, 1999.
- As for instantaneously trained neural networks, I agree that it's certainly too obscure for such a prominent position. I would exile it to a page of its own ;-) --Denoir 01:50, 7 Jan 2005 (UTC)
- SOMs are not supervised learning algorithms, they are unsupervised. --Spazzm 22:54, 2005 Apr 6 (UTC)
I have a question about one of the things on the page. It says that "certain functions that seem exclusive to the brain such as dreaming and learning, have been replicated on a simpler scale, with neural networks." My question is how exactly has a neural network been able to dream? It seems to me to be quite a human quality to dream. What did it dream about? Computers? Numbers? Perhaps it had a nightmare in which the Riemann Hypothesis was disproved? PLEASE clarify that or provide a source, otherwise it should be deleted.
- They "dream" to get rid of bad local minima or bad recognition states. Kind of a silly usage of the term, but here are the references:
- HOPFIELD, J. J.; FEINSTEIN, D. I.; PALMER, R. G., "Unlearning" has a stabilizing effect in collective memories, Nature, vol. 304, pp. 158-159, 1983.
- HINTON, G.; DAYAN, P.; FREY, B.; NEAL, R., The Wake-Sleep Algorithm for Unsupervised Neural Networks, Science, vol. 268, pp. 1158-1160, 1995.
- -- hike395 04:52, 22 Jun 2004 (UTC)
I would like a more expanded introduction, for people with little experience in related fields. A very simple example would be nice, too. All this talk of "nodes" and "functions", but I can't really follow what's actually going on. Label the diagram with "weights" and "nodes" and "functions". Is each circle a node, which sums the weighted (lines) numeric values that enter its inputs, puts them through a function and outputs the result? It should be made clearer. - Omegatron 19:17, Dec 6, 2004 (UTC)
- Hmm, this page could certainly be improved upon. Perhaps split it into a couple of sub-pages covering various categories of nets (i.e. static, dynamic, self-organizing, associative, etc.). Also it would benefit from more practical examples of applications of ANNs.
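On the "what does each circle do" question above: yes, that is the usual reading. A minimal sketch of a single node in Python, assuming the common weighted-sum-plus-activation model (the sigmoid and the numbers here are only illustrative, not from the article):

    import math

    def node_output(inputs, weights, bias):
        # Sum the weighted values arriving on the incoming lines (the arrows)...
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # ...then squash the sum through an activation function (here a sigmoid).
        return 1.0 / (1.0 + math.exp(-total))

    # A node with two inputs:
    print(node_output([0.5, 0.8], [0.4, -0.2], bias=0.1))  # about 0.53

A network is just such nodes wired together, with the outputs of one layer becoming the inputs of the next.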
perceptron values
On line 42 there was a change from -1 to 0. Neural networks can work in both cases, from [0, 1] or [-1, 1], or for that matter between any two real numbers. There is some empirical research going on that shows that -1 to 1 works with fewer epochs, but since this is an encyclopedia, not a research paper, it should clearly state that any values can be used. There is another error, though: the threshold and the lower bound cannot be the same, so if you want to use the bounds [0, 1] then a threshold of 0.5 could potentially be used. For [-1, 1], 0 could be your threshold. --Tim 12 Dec 2004
- (I moved this down here to keep with convention, I hope that's OK) I also went ahead and took a shot at fixing the section. Jmeppley 17:14, 16 Dec 2004 (UTC)
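To illustrate Tim's point above for other readers: the bounds are a free choice, as long as the threshold lies strictly between them. A minimal sketch, assuming the simple convention that the unit outputs the upper bound when the weighted sum exceeds the threshold (names and numbers are illustrative):

    def threshold_unit(inputs, weights, threshold, low=0, high=1):
        # Output 'high' when the weighted sum exceeds the threshold, else 'low'.
        total = sum(x * w for x, w in zip(inputs, weights))
        return high if total > threshold else low

    # The same unit expressed over [0, 1] with a threshold of 0.5 ...
    print(threshold_unit([1, 0], [0.4, 0.4], threshold=0.5))                 # 0
    # ... and over [-1, 1] with a threshold of 0:
    print(threshold_unit([1, -1], [0.4, 0.4], threshold=0, low=-1, high=1))  # -1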
ANNs Good/Bad
I split up this section to discuss the removal of the following (which was here already) --Sp00n17 04:08, Dec 29, 2004 (UTC)
- I went ahead and removed it. — B.Bryant 05:00, 29 Dec 2004 (UTC)
"As a machine learning technique, Artificial Neural Networks are both inelegant theoretically and unwieldy in practice, and therefore have very little merit. The interest in ANNs - which had its ups and downs over the decades - seems to be mainly motivated by the appeal to the analogy with the brain and by inertia."
- Some reasons and alternatives, please. - Omegatron 02:37, Dec 29, 2004 (UTC)
- Currently, there are plenty of good reasons not to use neural networks in a variety of possible applications. I do think it would be a good idea to collect and describe the current drawbacks, as well as document the historical drawbacks. Talking about some of the general drawbacks would balance out the article, as it would be the opposite of the "usefulness" section. --Sp00n17 04:08, Dec 29, 2004 (UTC)
- That's fine by me, but let's not go with the AC's ill-informed rant. — B.Bryant 05:00, 29 Dec 2004 (UTC)
- Yeah, I agree completely. I'm glad that got removed. When I read it, it sounded like some cranky old engineer posting; one who saw them hyped in their day and fall out of favor for whatever came next. --Sp00n17 19:19, Dec 29, 2004 (UTC)
- Before discussing the drawbacks of NNs, can you propose at least one advantage of the technique? I can't think of any. Here are two clear disadvantages: NNs are hard to analyze theoretically and are hard to train in practice. Alternatives: As far as I am aware, SVMs are superior to NNs in every way that matters. (BTW, I have no particular axe to grind here - I just feel that the facts are against NNs. It seems like a fad, even if it is a recurring fad.) -- Cranky old engineer
- I suggest you read up on both SVMs and NNs. First of all, they're variations on the same adaptive principles as far as classification goes - kernel machines are often considered to be a type of ANN. Second, if you are referring to SVMs vs feedforward/backprop nets then you should know that SVMs can't be used for function approximation - the most useful application of adaptive systems. Like with anything else in the world, neural nets have their operating limitations. If you know how and when to apply them, they can be very powerful tools. The black-box approach can lure people into thinking that they're some form of magical solution to everything - they forget that the selection of data becomes a critical and non-trivial issue. The neural net hype is/was never undeserved, but neural nets and their requirements often are.
- Your answer is evasive (the "read up" comment is nothing but academic one-upmanship). The comparison with SVMs was regarding classification - you did not even attempt to claim any advantage of NNs over SVMs. For function approximation there are again alternatives that are better, both in theory and in practice - e.g., various robust regression methods. You wrote that the black-box claim is just a lure - that is true, but once you admit that, there is nothing left in favor of NNs. -COE
- Offtopic: Please use the signature button at the top of the wiki editor at the end of a post. --sp00n17 01:47, Jan 8, 2005 (UTC)
- Just to put my 2 bits in: What is wrong with ANNs theoretically? In fact it is possible to formulate ANNs as a fully Bayesian method. Then, why are they 'difficult to analyse theoretically'? They are extremely simple models. What aspect of some other model is easy to analyse theoretically that is not possible to analyse in ANNs? I really don't know, so please do tell me :) - I know that they are difficult to train, most of the time, because the related optimisation problem has many local minima. But I never had any problems getting adequate solutions - usually the problem is overtraining. SVMs have other problems, such as a computation time that increases quadratically with the number of examples. Of course, you could train them with gradient descent instead of quadratic programming if you wanted to... but then there wouldn't be much difference from an ANN really. And of course gradient-based techniques used with ANNs are only guaranteed to converge asymptotically, if ever, but at least you can get a reasonably good solution in constant time. When do _I_ use ANNs (and in that category I include generalised linear models)? Whenever I need an easy-to-train, fast function approximator, especially for on-line tasks. I would be more than happy to use some other kind of model with similar properties, really.--Olethros 21:57, 23 December 2005 (UTC)
[edit] "Neural Networks" is not the popular term anymore?
A bit offtopic, but regarding the use of the term "neural network": that term can sound interesting when you explain to somebody what you work with, but has little value in getting people to actually use it in real-world applications. My company develops neural net based software and services for a very wide range of applications, but we never ever use the term "neural nets". It simply sounds too futuristic for people to actually integrate into production systems. I'm personally OK with that, as the implicit biological reference is quite misleading. Instead we use the much more customer-acceptable term "adaptive systems". It's vaguer, yes, but it sounds much more like a proven conventional method rather than some sci-fi fantasy. --Denoir 01:41, 7 Jan 2005 (UTC)
- Well, yes, it does seem to be the trend to rename things from what they were originally called. Even the IEEE changed the name of their Neural Network society to Computational Intelligence Society. Though, I'm not sure why. --sp00n17 01:47, Jan 8, 2005 (UTC)
- The IEEE name change was a recognition of broader interests, not a substitution of a euphemism for the previous name. — B.Bryant 00:52, 27 Jan 2005 (UTC)
- Side note: IEEE still publishes "IEEE Transactions on Neural Networks". --Spazzm 05:30, 2005 Mar 24 (UTC)
- It's true; other terms people use specifically in place of NNs are "connectionist" and "sub-symbolic" methods. I was also wondering why this was happening. I think it may be because "neural networks" became overused and clichéd at one point in the research, and it was no longer advantageous to tie your paper to that specific term, but I don't know; perhaps someone can back this up.
- Those alternative terms have been around for quite a while. Meanwhile, the term "neural networks" is still in vogue as well. It is common to use "Simulated Neural Networks" (SNN) or "Artificial Neural Networks" (ANN) to make sure people know you're not talking about real neurons, but we still call our graduate research group "The NN Research Group" (NNRG), even though our professor's own initial research was in the area of subsymbolic representations. — B.Bryant 00:52, 27 Jan 2005 (UTC)
Self-organizing map (Kohonen)
Is there any reason that Self-organizing map / Kohonen NN are not here? I was going to add them myself, but maybe there is a reason? :) - --Cyprus2k1 08:16, 13 Feb 2005 (UTC)
- I came here looking for the same. Go ahead and add it. --Spazzm 05:26, 2005 Mar 24 (UTC)
- Okay, I added it myself. Justifications:
- 1. SOM is based on a neural model. I recommend Kohonen's book on the subject; it goes into detail about the neural inspiration for the SOM.
- 2. It contains 'neurons' and performs functions similar to somatosensory maps in the brain.
- 3. Each SOM 'neuron' has several inputs analogous to dendrites and one excitation based on these inputs.
- --Spazzm 22:55, 2005 Apr 6 (UTC)
The XOR diagram
Surely for an XOR you'd want weightings of -1, +2, -1 rather than +1, -2, +1? The diagram as shown gives 0 on equal states, and -1 on differing states, while if -1, +2, -1 were used instead the output would be 0 on equal states, 1 on differing states - as XOR.
I thought it was more like an XNOR, except that you'd need to add 1 to the states to get [0, 1] from [-1, 0], but I'm not so sure on that reasoning since that may just be a case of knowing how much current 0 is. --Firien 13:52, 23 Feb 2005 (UTC)
The diagram also has more nodes than necessary. XOR can be done with 3 nodes: 2 input and 1 output. Node 1 of the input has weights of 1 and a threshold of 0. Node 2 of the input has weights of -1 and a threshold of 2. The output node has weights of 1 and a threshold of -1.
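Reading the "thresholds" above as additive bias terms (a unit fires when weighted sum + threshold > 0), the three-node construction does check out: the two "input" nodes act as OR and NAND, and the output node ANDs them together. A quick verification sketch (the function name and the bias reading are mine, not from the comment above):

    def fires(values, weight, threshold):
        # Unit fires when the weighted sum plus the threshold term is positive.
        return 1 if weight * sum(values) + threshold > 0 else 0

    for x1 in (0, 1):
        for x2 in (0, 1):
            h1 = fires([x1, x2], 1, 0)    # weights 1, threshold 0: acts as OR
            h2 = fires([x1, x2], -1, 2)   # weights -1, threshold 2: acts as NAND
            out = fires([h1, h2], 1, -1)  # weights 1, threshold -1: acts as AND
            print(x1, x2, '->', out)      # prints 0, 1, 1, 0: XOR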
The diagram is good, but the description is horrible. I didn't understand what it meant until I went to page 2 of this site. I think the definition should be less cryptic, as the use of threshold and weight can throw people off. When we are making pages on this system we often get lost in the fog of academia. Paskari 19:30, 1 December 2006 (UTC)
page move
- Hi. This article discusses artificial neural networks (ANN's) and only very briefly mentions neural networks (which are called "biological" neural networks in the article). ANN's are usually called "ANN's", that is, "artificial" is added to neural networks (NN), and they are not called just "neural networks", to distinguish them from the "real" ones, which are called "neural networks" if they are discussed in terms of information technology. The term "neural" means "brain-", so "NN" in reference to ANN's is highly misleading and should always be preceded by the "A". The "A" is only omitted when the context is clear, which is not the case here in a general purpose encyclopedia. Therefore, the space Neural networks should be made free for an article about neural networks (the real ones). That's why I requested a page move. Please vote to give the article the correct name and give other people the possibility to write an article about neural networks, i.e. about information processing in the brain. Ben (talk) 07:06, Apr 7, 2005 (UTC)
- oppose – "Neural network" is still common parlance for ANNs (which, BTW, are also known as SNNs – Simulated Neural Networks), and conversely whenever you hear the term "neural network" it is virtually always in reference to an artificial neural network. Also, is there actually an article to be written about non-artificial neural networks? Other than articles that already exist about the brain and neuroscience and such? (However, it does seem that the clarity of our introductory paragraph has degraded over time. Compare for example the version of 24-Jan-2004.) — B.Bryant 09:30, 7 Apr 2005 (UTC)
- I understand that many people forget the "A", you are right about that, Bobby, but it doesn't make the usage correct. There are always some words that are complicated, so that some people confuse them. Actually, many people know the meaning of "neural network" (i.e. the biological ones) and "artificial neural network" (i.e. the simulations). Just see what neuro means.
Then, you ask what there is to write about neural networks? Quite a lot, actually. You should know that "artificial neural networks" are an application of neural network theory, i.e. theorizing about neural networks (yes, the ones in the brain, of course!) and their properties. There is, as you predicted, a lot of neuroscience, then cognitive science and philosophy, and information theory. The version you indicated has a much better introduction, but it doesn't change the matter. The article is still about artificial neural networks and not about neural networks. If it consoles you, a new article on neural networks could have a disambiguation sentence "If you search for ANN's...". I just googled these pages in 5 seconds to give you some ideas:
cheers, Ben (talk) 10:59, Apr 7, 2005 (UTC)
- I would also oppose renaming the page. I think the use of neural network to refer specifically to the artificial type is well established. Indeed, the term artificial neural network is arguably a bit haughty, because it deliberately draws comparisons with biological neural networks, i.e. brains. It seems to me that neural networks have much less in common with brains than researchers in the 1980s liked to believe; the difference is far more than simply natural vs. synthetic. AI research has a long way to go before there is any risk of confusion between neural networks and brains. Wmahan. 16:33, 2005 Apr 7 (UTC)
- You make the point. The terms Artificial Neural Network, Neural Network Model, Simulated Neural Network (the last one not used very often), etc. all draw on a comparison to neural networks in the brain. That's exactly the point about them and that's how they got their name, even if it seems haughty to you (it IS haughty, btw). I wonder how you could say the term "ANN" is haughty and "NN" is not. You can't say they ARE neural networks (by calling them "neural networks"), because Neural Networks are IN THE BRAIN. Artificial Neural Networks are just, again, simulations, models, artificial ones. I wonder if somebody who understands that, as you indicated, can oppose the renaming. The naming confusion is a bit like "autobiography" and "biography". I think everybody here knows the difference, still many people don't get it right. Think about it. Finally, there are a lot of people researching neural networks, i.e. information processing in brains, and you can't just take the name from them. You have to ask yourself where the article about Neural Networks (yes, the brain) is going to be. Ben (talk) 00:16, Apr 8, 2005 (UTC)
- Finally! Somebody who supports! The term "neural network" is incorrect. Some people claimed "we are building the brain" and called their models "neural networks". This was repeated by some others. However the correct name is "artificial neural network", "neural network model", or more generally "parallel distributed processing". The term "neural network" is established in theoretical neurosciences for theorizing about the networks of neurons.
- Topics in cognitive science, neuroscience, philosophy, and neuropsychology are chronically underrepresented in Wikipedia and we should do something about it, at least by giving the correct names to articles, as in this case. Ben (talk) 02:37, Apr 8, 2005 (UTC)
- Ahem. Support from an anonymous account that has made no other contributions to WP than this vote? I'm skeptical. --Spazzm 10:58, 2005 Apr 9 (UTC)
- For the discussion:
The German and the Bulgarian wikis, e.g., got it right: See de:Neuronales Netz and bg:Невронна мрежа. The French wikipedia offers a discussion of theoretical neurosciences in the article (they have good research in France), see fr:Réseau de neurones. Ben (talk) 03:28, Apr 8, 2005 (UTC)
- I understand your point that brains are technically types of neural networks. I could always be wrong, but as far as I know, few people called them neural networks before ANNs came around, and few people today call them neural networks since there's a perfectly good word already: brains.
- Perhaps a crude analogy is computer: technically a brain might be considered a type of computer, simply operating on inputs and producing outputs. (The word originally referred to a human, though not exactly in the sense I mean here.) But when one says computer today, there is very little risk of confusion because it's commonly understood that a computer refers to an artificial machine. Similarly, I'm saying that to almost everyone, a neural network means the artificial type, regardless of the fact that a brain is also a network of neurons. Wmahan. 03:32, 2005 Apr 8 (UTC)
- "computer" comes from Latin and refers to something/somebody that calculates. First it was used as "somebody that calcalates", then somebody named a machine, a computer, because it does the same. Now I ask you, is this the same as for "Neural Networks" and "Artificial Neural Networks"? Consider, the comparison of artificial neural networks is based on the "Brain-is-like-a-Computer"-Analogy Ben (talk) 03:43, Apr 8, 2005 (UTC)
- This is a difficult question, and Ben does provide valid points. Yes, an implementation of (for example) the Multilayer perceptron is an Artificial Neural Network - not a biological (or 'natural') one. Yes, the term 'neural network' may be confusing. Yes, colloquial usage should not determine encyclopedic classification.
- However, there is one thing that has not yet been considered:
- There is a clear division between implementations (models, simulations) and their underlying theories. The theories governing the design of ANNs are theories used to explain biological neural networks and vice-versa.
- The theory is the same but the applications are different - to claim that the idea is equal to the idea's implementation is akin to claiming that hammers are equal to carpentry.
- Furthermore, there is considerable overlap - scientists are continually looking for ways to combine artificial and biological neural networks - what should that discipline be called, if not neural network research?
- Last, but not least, biological neural networks is a very nebulous term. Does it refer to the amygdala, or the medulla oblongata? Or the whole brain in general? Discussions on neural networks within these entities would fit better in their respective articles. Unless, of course, one wishes to discuss general ideas of NNs - and those are the same whether they are natural or artificial.
- My vote will therefore be Oppose.
- That said, the article could certainly benefit from a discussion on the properties of different applications of NN theory, as well as the difference between biological and artificial NNs.
- --Spazzm 03:34, 2005 Apr 8 (UTC)
1. Let's get some structure in the discussion. Spazzm provided some good arguments, I am impressed. The hammers and carpentry analogy is good also. Neural networks are in the brain, there is research about them, and there is software, artificial networks. This should not be confused. Therefore, move the article! Just look at this mess:
- A neural network is an interconnected group of neurons. The prime examples are biological neural networks, especially the human brain. In modern usage the term most often refers to artificial neural networks (ANN), or neural nets for short, and this is the sense that is used in the rest of this article.
- An artificial neural network is a mathematical or computational model for information processing based on a connectionist approach to computation. There is no precise agreed definition amongst researchers as to what a neural network is, but most would agree that it involves a network of relatively simple processing elements, where the global behaviour is determined by the connections between the processing elements and element parameters. The original inspiration for the technique was from examination of bioelectrical networks in the brain formed by neurons and their synapses. In a neural network model, simple nodes (or "neurons", or "units") are connected together to form a network of nodes — hence the term "neural network".
Indeed the theorizing and models of neuronal networks are hard to distinguish sometimes. However, let me try. There is
- research on the brain, on the working of cell assemblies and their emergent properties, as different as the cell assemblies may be. These models serve to investigate the brain. That is, however, different from researching the whole brain; the scope is different. The brain tissue is made up of cell assemblies that work together in some way that is very hard to understand. The cell assemblies are not the same as the brain, however. The brain is a much broader topic. Just see the article. It's just like saying "wheels are cars" (to have another analogy)
- There are applications of the theory (more or less accurate), so-called artificial neural networks, that try to serve a concrete purpose: solving a problem, industrial applications, artificial intelligence, e.g. in pattern recognition, etc. They are not research on neural assemblies but on methods for pattern recognition.
That was about hammers and carpentry. 2. An article about artificial neural networks should discuss considerations about how to implement the theory of neural networks, and about some implementations. This has some overlap, of course, as you would expect, as they are based on an understanding of neural cells. Currently the article is only about artificial networks. That's a shortcoming or, if you like it better, the scope of the article is wrong. It is not Neural Networks, but Artificial Neural Networks. And that's how the article should be called. 3. Finally, I predict that if the decision is to keep the article here, it will result over time in a radical rewrite (first some stuff merged with Parallel Distributed Processing, then moving artificial networks that try to implement artificial intelligence (in contrast to the models used for research on neural assemblies) to a subsection and afterwards, as this section grows big, to a different article). So, why not do it now and avoid a lot of mess? There are some people who actually would like to write something about neural networks (IN the brain). There are even lectures about "Neural Networks" (not the artificial ones). You shouldn't block it by opposing; the topic of this article here is artificial neural networks. Why block other topics? Ben (talk) 04:15, Apr 8, 2005 (UTC)
- Perhaps one point needs clarification: Neural Network research is not a one-way process. Neurobiologists do not make discoveries, and then hand them over to the ANN researchers who use them to power robots - this is an oversimplification.
- The process is two-way: Discoveries made in ANN research can be used to explain how the brain works, and so on. Often ANN and brain research is carried out by the same person or group - they are not investigating pattern recognition or human brains (these would be the "hammers" in my above analogy) but how complex connectionist systems behave ("carpentry" according to my, now heavily strained, analogy).
- Once one starts to examine how NN theory can solve a problem like (say) schizophrenia, one moves away from NN research and into psychology, just as research on NNs in inverse kinematics is robotics, not NN research.
- If anyone wants to write an article on Biological Neural Networks, go ahead. Then Neural Network could be a disambiguation page linking to BNNs, ANNs and Neural Network Theory.
- --Spazzm 04:43, 2005 Apr 8 (UTC)
- There are even lectures about "Neural Networks" (not the artificial ones).
- If what others do or do not do should be the basis of our decision (which I do not think it should be, for obvious reasons), then I am obliged to point out that there are lectures on Neural Networks in the sense of ANNs as well. Not only that, but there are scientific journals and conferences named "Neural Networks..." that deal primarily with ANNs.
- --Spazzm 04:58, 2005 Apr 8 (UTC)
- You are completely right about the lectures named "neural networks" in the fields of artificial intelligence and in neurosciences/cognitive science, i.e. in two different senses. I was merely trying to invalidate irrational claims that there would be nothing to write about neural networks (not the artificial ones) and only about artificial networks. If you look up in the discussion, Wmahan suggested in a very courageous move that neural networks and brains are the same, so everything should be in the brain article. And B.Bryant was asking (citing)
- [..] is there actually an article to be written about non-artificial neural networks? Other than articles that already exist about the brain and neuroscience and such?
I am just trying to give the message that there is something called "neural networks", it is in the brain, and many people do research in this field. Obviously, I thought it was too evident. So, that's first; now second (now I cite you):
- Neural Network research is not a one-way process. Neurobiologists do not make discoveries, and then hand them over to the ANN researchers who use them to power robots - this is an oversimplification.
You don't have to explain that to me. I never said anything like that. I was pointing out that there are ANNs that are developed in AI, and neural network models (or ANNs) that test theories in neurosciences/cognitive science. So, we completely agree on this one here; probably there was a misunderstanding. Third: you suggest having a disambiguation page here, linking to "BNNs, ANNs and Neural Network Theory". This would mean a compromise but actually also means moving the article. Are you now supporting the page move? Ben (talk) 10:05, Apr 8, 2005 (UTC)
- this was user:217.95.54.26. Ben (talk) 14:38, Apr 8, 2005 (UTC)
- An anonymous IP user who has made no other contribution than the above vote? Does this even count? --Spazzm 06:37, 2005 Apr 9 (UTC)
I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. --Spazzm 11:11, 2005 Apr 8 (UTC)
- Well, I think there are many people who actually would like to write something about neural networks (not the artificial ones). So, don't worry, there will be an article about (biological) neural networks, but first we have to move this article here. Ben (talk) 14:36, Apr 8, 2005 (UTC)
- How does the current name "block" work on biological neural networks? That link even appears at the top of the article. If you can show that the term has an established use in the non-artificial sense, I will support the rename. By established, I mean used not just in the context of introducing early ANNs (like the first link you provided), and supported by a more reputable source than your second link (which appears to be a blog entry). Wmahan. 16:25, 2005 Apr 8 (UTC)
- I said the links were what I found in 5 secs, I didn't say they were good. They were supposed to show there was a topic "neural networks" other than ANNs, and I think they did. You want more reputable sources? What about nearly any book in neuroscience? E.g. Neuroscience, Purves, D., et al., Chapter 13 Box D (pp. 332-333). Then,
- Pinker wrote a book, "How the Mind Works".
- then at MIT OpenCourseWare I found lecture notes
- a research paper about Interactions between Depression and Facilitation within Neural Networks
- I also found this paper
cheers, Ben (talk) 17:13, Apr 8, 2005 (UTC)
I think a reason that people haven't already written about neural networks is that there is already an article by that name and they get confused. Of course, there are many more computer enthusiasts at Wikipedia than there are neuroscientists, but they will come when they see they have a place here. Ben (talk) 17:17, Apr 8, 2005 (UTC)
- Support, the article is indeed primarily about the artificial flavor of neural networks. Cburnett 14:10, Apr 8, 2005 (UTC)
- I'll vote concur here. Artificial neural network is the correct term and should be used for the article title. Neural network can be a disambiguation page. That said, however, the word artificial is often simply left off, especially when it is clear from context. Not the case here, though. --Smithfarm 19:33, 8 Apr 2005 (UTC)
- A title, in and of itself, has no context...if anything, it is the context. So there's no problem calling it a NN inside the text, but this article is insufficiently general to cover all neural nets. Cburnett 23:51, Apr 8, 2005 (UTC)
I think the reason no-one has written an article about Biological Neural Networks is that the topic is covered extensively elsewhere:
- Brain (note that there's a separate article on the Human brain)
- Brainstem
- Central nervous system
- medulla oblongata
- cerebellum
- cerebral cortex
- Somatosensory system
- hippocampus
- Inferior colliculus
et cetera, ad nauseam. There are far more articles on brains than there are on computational intelligence, so saying that the latter is overrepresented at the expense of the former is incorrect. The 'Wikipedia is written by computer geeks' idea is a myth; it may have been true once, but it is certainly not true in this case.
Also, it was mentioned above that Neural Networks have been used in the sense of Biological Neural Networks. This is, of course, correct, but there are far more numerous and reputable examples of using Neural Network in the sense of Artificial Neural network or Neural Network Theory:
- Neural Networks, the Official Journal of the International Neural Network Society, European Neural Network Society & Japanese Neural Network Society
- The Neural Networks Research Group at the University of Texas
- SNN in the Netherlands
- Amazon.com search for Neural network - the vast majority are on ANNs.
and so on... These are worldwide examples, taken from only the most highly respected publications and institutions - more than a few lecture notes or random articles.
If that's not enough, there exists prior encyclopedic usage of Neural network in the sense of Artificial Neural Network or Neural Network Theory:
- Dictionary.com
- American Heritage Dictionary
- Columbia encyclopedia
- LookWAYup
- Encyclopedia.com
- Drug Discovery & Development
- Britannica Online
Moving the current page to Artificial Neural Network and turning Neural Network into an article on Biological Neural Networks would fly in the face of all reason.
There's no page on Biological Neural Networks or Neural Network Theory, so turning Neural Network into a disambiguation page now would be pointless - there is only one page to point to. Those who support a move would perhaps be better served by first writing these articles, then requesting the reorganization.
--Spazzm 06:37, 2005 Apr 9 (UTC)
- AHAHA, you expect the IEEE or computer science departments or MATLAB to write or focus on the non-artificial neural nets? Come on, your references/sources are heavily biased toward the artificial kind. Cburnett 07:15, Apr 9, 2005 (UTC)
- You're right, I removed the links that might be considered biased (I even removed the link to the Helsinki University of Technology.) There's still plenty of evidence that NN is most commonly used in reference to ANN, so that point and my other points still stand.
- If you disagree with my inclusion of what the CS department of University of Texas has to say, you may notice that no medical department of any university that I'm aware of has a neural network research group.
- Furthermore, this is only nitpicking unless you intend to claim that Encyclopedia Britannica, Amazon.com and Dictionary.com are somehow biased as well.
- --Spazzm 07:21, 2005 Apr 9 (UTC)
- Anyone who needs further convincing of what the common usage is, please feel free to do a Google search for 'neural network'. The first 100 hits (I stopped looking after that) concern themselves with ANN or NN theory - not BNN.
- --Spazzm 08:12, 2005 Apr 9 (UTC)
- Well:
- U of Texas has no medical department
- SNN of Netherlands is tied to a com sci department
- Look at the definition of ANN at dictionary.com [1] and it's much, much longer; not to mention "neural network" says "real or virtual"
- Bartleby has the exact word-for-word definition of the much smaller definition of "neural network" at
- Drug Discovery & Development lists both ANN & NN and the ANN definition is longer
- Furthermore, all of the neuroscience links you list above (save the first two) are specific sections of the brain.
- So, while NN may commonly be used to mean the artificial kind, ANNs are modeled after *real* neural networks. The article in question even says "The prime examples are biological neural networks, especially the human brain." Only then do ANNs arise. The remainder of the article is not about the prime example — biological neural networks — but the artificial neural networks. The article title "Neural network" falsely advertises what the name presents — neural networks — because the remainder discusses one example of a neural network: the ANN. Cburnett 08:18, Apr 9, 2005 (UTC)
- Is there any medical department with a neural network research group, anywhere?
- Some encyclopedias have articles on both NN and ANN. But when you look up 'Neural network' in them, you come to an article on ANN called Neural network. I think that's the central point.
- But hey, if anybody writes an article on biological neural networks or neural network theory I'll gladly support a disambiguation page. Unfortunately, right now there's more quibbling about the organization of WP than contribution to the content of WP.
- --Spazzm 08:28, 2005 Apr 9 (UTC)
- Quoting Cburnett:
- Look at the definition of ANN at dictionary.com [2] and it's much, much longer; not to mention "neural network" says "real or virtual"
- No, the definition says "A real or virtual device, modeled after the human brain, in which several interconnected elements process information simultaneously, adapting and learning from past patterns."
- A real device in this case would be an electronic circuit, as opposed to a virtual device (computer program).
- Please refrain from taking things out of context.
- --Spazzm 08:39, 2005 Apr 9 (UTC)
- Another anonymous user with no other contributions than this one vote? This is the third one - I'm skeptical.
- Also, anyone looking for information on the brain or psychology would most likely go to the pages on brain and psychology, not an article on neural networks.
- If you don't like the intro, please rewrite it - don't just say it's 'weird'.
- --Spazzm 10:12, 2005 Apr 9 (UTC)
- User:Spazzm, you say you would support a page move if there were an article. This means obviously that you recognize there is such a topic as neural networks in neurosciences. But you impose a condition: first article, then move? That doesn't make sense to me.
As for the anonymous users, they might not be counted if there were a close decision. As it looks now, it won't be close, so don't worry. It would still be nice to have a consensus on the matter.
Ben (talk) 01:17, Apr 11, 2005 (UTC)
- Quoting Ben: User:Spazzm, you say you would support a page move if there were an article.
- No, I would support a disambiguation page, if there were any pages to disambiguate between - e.g. biological neural networks and ANNs. But there aren't, so there is no point in having a disambiguation page. If anyone took the time to write an article on BNNs, however, the case would be entirely different.
- The WP policy cautions against writing disambiguation pages for non-existent articles, with good reason. Right now the discussion is simply a waste of effort that would have been more productively spent writing pages on BNNs and Neural Network theory. --Spazzm 04:09, 2005 Apr 11 (UTC)
- The point of the discussion about the move is that the article takes a name another topic deserves. I saw you rewrote the introduction, but that's not enough. It still says
"A neural network is an interconnected group of neurons. It is usual to differentiate between two major groups of neural networks...Biological neural networks [...] and Artificial neural networks [...]"
Just follow the link to see what a neuron is! If this isn't an argument for a move, then what is?
- About the WP policy for disambiguation pages (quote)
Adding links to non-existent articles should be done with care. There is no need for you to search for all occurrences of the page title and link to articles that are unlikely ever to be written, or if they are, likely to be removed. For example, quite a few names will show up as song titles, but with few exceptions, we usually do not write articles about individual songs, so there is no point in linking to them. If you must add this type of information, be sure to link to at least one existing article (band, album, etc.).
Summarizing, the WP policy cautions for these cases:
- "...if they are unlikely to be written": not the case as was argued above
- "...likely to be removed": nop
- "...song title": no song title ;)
Where was the problem again? Ben (talk) 04:26, Apr 11, 2005 (UTC)
- An article on Biological Neural Networks is unlikely to ever be written, because:
- The subject is covered extensively elsewhere, see my above list.
- It's not a name in common usage, see previous point and above list.
- No one has written it yet, despite all the heat and noise generated in this very debate.
- A disambiguation page is likely to be removed in short order because:
- Since it's not resolving any ambiguity, it is merely a source of annoyance.
- Of course, I'd gladly change my position on this if, for example, Ben wrote a good article on Biological Neural Networks.
- Until then, no.
- Quoting Ben: The point of the discussion about the move is that the article takes a name another topic deserves.
- Incorrect. My above examination of Google searches, amazon.com searches and other leading encyclopedias shows, overwhelmingly, that in common parlance neural network means artificial neural network.
- --Spazzm 06:21, 2005 Apr 11 (UTC)
- As Cburnett and I pointed out repeatedly before, the topic is not covered yet; just see the explanations and links above. The same holds for your second point: it IS a name in common usage (see arguments above). I would suggest you just look at the definition offered in the article, i.e. neural network as an interconnected group of neurons. No one has written it yet, correct, one reason being that it is discriminated against, e.g. by the current naming policy and other systemic bias in Wikipedia. Since there are, as argued, at least 2 usages of the word, an article about neural networks at this place, OR a disambiguation page, would NOT be removed. This is an ongoing discussion about the move of this page, and it is to be decided where an article about (biological) neural networks is going to be. I want to stress that I don't see myself as instrumental in setting up the new article; though, I would be glad to start and contribute together with others to a new article about neural networks from tomorrow on, if the majority is in favor of it. Ben (talk) 07:51, Apr 11, 2005 (UTC)
- You just changed the wording from "in common parlance" to "overwhelmingly, that in common parlance". I think this was being discussed above already. See the dictionary examples provided above that have two usages of the word, one natural and one artificial. If you cite Google and Amazon you show only one thing: on the internet there is more about software and programming than there is about neurosciences. That is NOT common parlance. Ben (talk) 07:55, Apr 11, 2005 (UTC)
- The only examples offered in this discussion of neural network in the sense of biological neural network are a few lecture notes and a couple of books. Compare this to almost the entire inventory of Amazon.com, the Google hits, most other encyclopedias and countless research groups. Granted, there's a lot of software developers on the 'net, but are you claiming that the software developers and programmers have somehow hijacked Encyclopedia Britannica (it's published on paper, you know) and the universities as well?
- The notion that biological sciences are discriminated against is ludicrous - just look at the enormous amount written on the brain, human brain, nervous system, central nervous system, cerebellum and so on. There are far more articles on biological brains than artificial ones.
- Nevertheless, I look forward to seeing the new page you intend to write on Biological neural networks.
- --Spazzm 08:07, 2005 Apr 11 (UTC)
- As for your sarcastic remark: you seem not to be reading other comments. Then, as you refer to Britannica, it says there:
"neural network is a computer program that operates in a manner analogous to the natural neural network in the brain"
The bold formatting is taken from the article you referred to.
Obviously, Britannica can't validate your claims, rather the opposite. Ben (talk) 08:21, Apr 11, 2005 (UTC)
- Looks to me like Britannica differentiates between neural networks (which they define as a computer program) and natural neural networks. Many encyclopedias (including Britannica) have the lookup term in bold throughout the article.
- How is this an invalidation of my claims?
- --Spazzm 08:27, 2005 Apr 11 (UTC)
- You stated before: (citing)
"Looks to me like Britannica differentiates between neural networks (defined as computer programs) and natural neural networks." Look, it doesn't say "natural neural networks", it says "natural neural networks", offering two definitions. Let's be exact.
Oh, and yeah, dude. There are soo many articles on "the brain". It just kills me.
What about all the articles about windows, linux, software, etc.? Don't start telling me about neurosciences being overrepresented, it's ridiculous. Ben (talk) 08:32, Apr 11, 2005 (UTC)
- The term neural network is in bold in Britannica because it's the term the article concerns itself with. Whenever the title of an article is repeated in the text, it is bolded. See for example the article on heart.
- I'm glad I could clear up this misunderstanding.
- --Spazzm 08:37, 2005 Apr 11 (UTC)
- I see you changed the definition in the introduction of the article from "group of interconnected neurons" to "artificial neurons".
On artificial neurons it says:
An artificial neuron (also called a "node") is the basic unit of an artificial neural network, simulating a biological neuron.
Just compare the naming conventions: you don't want the article "neural networks" moved to "artificial neural networks", and you ignore the naming conventions in the articles neuron and artificial neuron.
I am sure my argument about the term neural network being bold in two meanings can't be misunderstood if not intentionally.
I am tired of having to face the same arguments we already discussed above, over and over again, without anything new coming up. Maybe we should take a time-out here, as we are getting more and more sarcastic? I have other things to do as well. Ben (talk) 08:50, Apr 11, 2005 (UTC)
- I'd gladly move this article myself, if there was any reason to do so. But the reality is that neural network means artificial neural network. For example: I can't find one medical department on any university that has a 'neural network research group'. Computer science departments, on the other hand...
- Right now, there's no article on Biological neural networks, and no-one seems the slightest bit interested in starting one. Therefore, moving this page and creating a disambiguation page would just create a needless hassle.
- Moving this page and turning Neural network into a page on Biological Neural networks would amount to vandalism.
- And I'm sure nobody wants that.
- I agree to your time-out, however.
- --Spazzm 08:58, 2005 Apr 11 (UTC)
- I summarize: we agree to disagree. And as there are no new arguments I don't see myself obliged to repeat. Just one hint: look at the votes to see what people want and what they don't want. I am sure nobody would call this vandalism here, except for you.
Ben (talk) 02:50, Apr 12, 2005 (UTC)
- I see one vote for moving, 3 votes for a disambiguation page, 2 votes against, and 3 anonymous votes - all for.
- Hardly a unanimous decision or consensus by any meaning of the word.
- Since you're the one who called the vote, you should have taken responsibility and removed the anonymous votes - there's no way of knowing if it's an attempt at ballot-stuffing or not.
- --Spazzm 03:38, 2005 Apr 12 (UTC)
- I will certainly remove the anonymous votes after seeing the policy for that (after lunch). I am not too well acquainted with the voting procedures here at Wikipedia, I have to admit. Then there is my own vote.
BTW, see my first attempt at creating an article on neural networks. Very premature, needs a lot of editing, maybe you help?
Ben (talk) 04:29, Apr 12, 2005 (UTC)
- Very good work. But it should be named Biological Neural Networks, to avoid confusion, and half the article (from Hebb and onwards) would fit better under ANNs.
- My vote tally above included your vote - there's exactly one (out of 6 non-anonymous ones) for a move. --Spazzm 05:04, 2005 Apr 12 (UTC)
- Happy you liked my first attempts at creating the article. It is only the very beginning.
You have some strange ways of counting, surely. How about using edit->search for "support", "oppose", "concur"? Isn't that how votes are counted? I see 3 times support (by registered users, not including myself), 2 times oppose (you and B.Bryant), 1 time "concur". I don't know how YOU counted the votes, please explain. How about finding a way to count the votes according to the policy in Wikipedia? Ben (talk) 05:52, Apr 12, 2005 (UTC)
- The 3 support/concur votes you mention came after the debate switched to having a redirect page. --Spazzm 06:41, 2005 Apr 12 (UTC)
- I am personally inclined towards having a disambiguation page; however, I see no such "switch". I understand it that "concur" means disambiguation and "support" means moving, without specification of whether to have a disambiguation page or not. What about your opposition, btw? You mentioned something in the vein of "if there were an article..."
Ben (talk) 07:21, Apr 12, 2005 (UTC)
I am citing you: I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. --Spazzm 11:11, 2005 Apr 8 (UTC)
- After counting (omitting anonymous votes) I found the following result:
- support 2 (Wmahan, Cburnett)
- concur 1 (Smithfarm)
- oppose 2 (Spazzm, Bryant)
Interpreting these results:
- Wmahan supports renaming(=moving) the page, without mentioning disambiguation.
- If I understand Cburnett correctly, a new article should be "general [enough] to cover all neural nets"
- Smithfarm: disambiguation
- Spazzm: oppose disambiguation and moving
- Bryant: oppose disambiguation and moving
Is this a rough consensus for moving the page?
This vote is not about whether to have a disambiguation page or a new article in this place. It is about whether this article represents all the topics that are constituted by "neural networks", or whether it is general enough as an introduction to the topic as a whole (meaning both biological and artificial neural networks). I say no. Let's hear what you say, Spazzm. The time is up anyway, as far as I can see. I go home now and come back tomorrow, and then let's face the decision together. Please check the article I edited meanwhile for some changes.
Ben (talk) 08:12, Apr 12, 2005 (UTC)
- This is a misrepresentation. You count Wmahan as supporter for a move, yet he/she only voted support/concur after the topic of discussion had switched to disambiguation. Furthermore, I do not oppose a disambiguation page, provided there are any pages to disambiguate between, but there aren't.
- --Spazzm 23:24, 2005 Apr 12 (UTC)
A bit of a late entry, but make a search on google for neural network and you'll see that there are basically all references to the artificial kind. While "neural network" can in theory mean wetware, it's not used that way. My vote would be (if it is not too late), oppose--Denoir 01:59, 5 May 2005 (UTC)
Decision
This article has been renamed as the result of a move request. I supported the move and that, I think, makes it strong enough a majority (4 to 2). I do, however, think that the neural network article should not be a disambiguation page - I reckon it should be a more general overview of the topic and have tried to reflect that in the way I've done the move. I will leave it up to you lot to decide on the way forward from there, though. violet/riga (t) 20:19, 12 Apr 2005 (UTC)
- There's no consensus for a move - most of the 'support' votes (all except one) are for a disambiguation page - and there's been flagrant attempts at vote rigging. There is no page on biological neural networks, so there is no need for a disambiguation page. Common usage is overwhelmingly (see encyclopedia britannica, dictionary.com, amazon.com, google etc. etc.) in favour of 'neural network' = 'artificial neural network'. Finally, the move is botched, since Artificial neural networks still redirects to neural network - please correct this.
- --Spazzm 23:07, 2005 Apr 12 (UTC)
- Perhaps a few more votes should have been collected. Six votes is not a very impressive number, statistically speaking. --Denoir 01:59, 5 May 2005 (UTC)
- A redirect is easy as pie to change. Oh, and it's changed now. Cburnett 23:17, Apr 12, 2005 (UTC)
- Please see the new article at biological neural networks. Let's move on to improving this and the other two articles. I think everybody will see that this decision was fair and in the best interest of everybody here, whether they were opposing, supporting, or concurring (or abstaining). Ben (talk) 03:08, Apr 13, 2005 (UTC)
Too technical intro
The first paragraph, "An artificial neural network (ANN), also called a simulated neural network (SNN) or just a neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation.", is, while technically correct, pretty much useless from the point of view of understanding what NNs are. Those familiar with the topic know that already, and those who are not won't understand what it means.
My suggestion is that we keep the text, but that we first add a short text explaining NNs in more general terms. A good introduction text can be found here. I think we need something similar to that:
"A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. "
At least we need something saying that neural nets are parametric models that adapt their parameters based on data presented to them, i.e. something that explains their functionality rather than their structure. --Denoir 02:15, 5 May 2005 (UTC)
Request for expert opinion on Continuum calculator
Continuum calculator claims to be an alternative for the Artificial neural network. It features similar properties, but the structure differs significantly.
Currently there's a Vote for Delete on this article. Could someone take a look and give a professional opinion on this article's validity on the voting page? Thanks. Pavel Vozenilek 17:41, 21 May 2005 (UTC)
Complex XOR function
Why do we need to have a complex network with two hidden layers for the XOR function, when there are simpler networks which do the trick with just one hidden layer?
    0                0
   /|\              / \
  / 0 \            0   0
 / /\ \            |\ /|
/ /  \ \           |/ \|
0      0           0   0
...please excuse the line drawings — I was never much good at ascii art.
Anyway, both the networks above can compute XOR easily (I'll leave it to the reader to fill in the weights; one possible set is sketched below). While the first has fewer units, the second is arguably simpler because it doesn't require connections that span over a layer.
If no one objects, I can make a pretty version of the second diagram in a few days (...after I've finished my dissertation...). — Asbestos | Talk (RFC) 21:15, 18 August 2005 (UTC)
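One possible way to fill in the weights for the right-hand (2-2-1) network, assuming plain threshold units that fire when the weighted sum exceeds the threshold (these particular weights and thresholds are just one solution, not the only one):

    def step(total, threshold):
        # Classic threshold unit: fire when the weighted sum exceeds the threshold.
        return 1 if total > threshold else 0

    def xor(x1, x2):
        h1 = step(1 * x1 - 1 * x2, 0.5)   # fires only on (1, 0): x1 AND NOT x2
        h2 = step(-1 * x1 + 1 * x2, 0.5)  # fires only on (0, 1): x2 AND NOT x1
        return step(h1 + h2, 0.5)         # OR of the two hidden units

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', xor(a, b))  # prints 0, 1, 1, 0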
- Could you please explain how to interpret the network on the left? Where do the edges from the middle node terminate? --Flatline 20:46, 19 October 2005 (UTC)
Neural Networks, Artificial Neural Networks, Machine Learning, Statistics, Optimisation
Hello everyone. The neural networks article has now been updated and is in a much better state, though still lacking in some respects. But it is quite readable at least. Now, there are two things that I noticed:
- The neural networks article mentions a lot of the basics, plus a lot of other things that should be further expanded upon either here or in Machine Learning (which also seems to be an underdeveloped article).
- This article needs some restructuring, as important concepts are mentioned at seemingly random locations.
Perhaps, as is done in neural networks, a coherent background discussing the relationship between optimisation, statistical estimation and neural networks should be introduced before the list of types of neural networks. Then it would be easy to discuss each type of neural network according to established concepts.
OK, now a few tidbits that I noticed:
- Overtraining is discussed, but only in the context of MLPs. Since every model can suffer from overtraining, this should be moved somewhere else, together with related concepts such as online learning, sampling from distributions etc.
- RBFs are a particular type of generalised linear model (I think the acronym is GLIM, but don't take my word for that) - and so is the alpha-perceptron. GLIMs are linear models which pre-process their input through some fixed function. The advantage is that there is a global minimum. The disadvantage is that it is not clear how to choose the pre-processing function (but actually, a nonlinear high-dimensional projection works just fine, as in SVMs, which are mentioned there as well). I think that if GLIMs were explained at the beginning of the article, it'd be easier and more natural to explain RBFs and then backpropagation. Perhaps basic gradient optimisation methods could be talked about at the beginning of the article. (A rough sketch of the GLIM idea follows this list.)
- The Models section appears to have many erroneous statements.
- The Calculations section is not very clear, but essentially correct. Perhaps this can be motivated more with a proper background section.
- The Advantages section is peculiar. Advantages compared to what? Furthermore, no disadvantages (again, compared to something else) are listed.
- In Applications there are some inaccuracies.
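As an aside, here is a rough Python sketch of the GLIM idea from the list above: fixed nonlinear preprocessing (Gaussian RBF features with hand-picked centres and width, which are made-up assumptions for illustration) followed by a linear readout fitted by least squares, which has a single global minimum:

 import numpy as np

 def rbf_features(X, centres, width=1.0):
     # one fixed Gaussian bump per centre; this is the pre-processing stage
     d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
     return np.exp(-d2 / (2 * width ** 2))

 rng = np.random.default_rng(0)
 X = rng.uniform(-3, 3, size=(200, 1))            # made-up 1-d inputs
 y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

 centres = np.linspace(-3, 3, 10)[:, None]        # chosen by hand, as noted above
 Phi = rbf_features(X, centres)
 w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear part: global optimum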
So, if I were to re-write this article from scratch, I would do:
1) Introduction, with a link to Neural Networks (which seems to be lacking) for further discussion of the relation to biological systems, and the use of models of neural networks in neuroscience.
2) Models: talk about different non-exclusive categories, maybe without much mention of biological neurons. There is not much need to talk about specific functions here - it may only confuse the reader.
3) Learning: Cost functions and how to minimise them. Give a basic example of a linear model trained with stochastic steepest gradient descent (a minimal sketch of such an example follows this list).
4) Types: Start with the Perceptron, and how it's related to the example in 3. Talk about linear separability. Talk about how projecting to another space can make a problem linearly separable. Introduce alpha-perceptron. Introduce the MLP, and the use of the chain rule for minimising with respect to the 'hidden' parameters. Introduce the RBF network as another example. After all this, the reader should be able to easily tackle other networks. Talking about the recurrent networks and their inherent learning stability problems should be easy after the chain rule discussion. Try to avoid talking much about not very commonly used networks, though this is POV.
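To make point 3 concrete, a minimal Python sketch of a linear model trained by stochastic steepest gradient descent on a squared-error cost (the data and learning rate are made up for illustration):

 import random

 random.seed(0)
 # synthetic data from y = 2x - 1 plus noise
 data = [(x, 2.0 * x - 1.0 + random.gauss(0, 0.1))
         for x in [random.uniform(-1, 1) for _ in range(100)]]

 w, b, lr = 0.0, 0.0, 0.1
 for epoch in range(50):
     random.shuffle(data)
     for x, t in data:          # one sample at a time: 'stochastic'
         err = (w * x + b) - t  # dC/dy for the cost C = (y - t)^2 / 2
         w -= lr * err * x      # steepest descent step for w
         b -= lr * err          # steepest descent step for b
 print(w, b)                    # approaches 2.0 and -1.0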
So, I don't know, does anyone have another plan? Would you prefer to leave it as is? --Olethros 21:37, 23 December 2005 (UTC)
[edit] Neural_Network::Neural Networks and Artificial Intelligence move?
I was wondering whether people thought that moving the section "Neural Networks and Artificial Intelligence" from Neural Network to Artificial Neural Network as a "Background" section would be preferable, or whether a new article called "Neural Network Learning: Theoretical Background" should be created instead. --Olethros 16:08, 26 December 2005 (UTC)
Started bringing stuff over. I think this introduction is OK as it stands now. I added a few more things in the first part of the Background section, which make clear why artificial neural networks are called 'networks' and not 'functions' or 'thingies' or whatever. The relation with graphical models is made as clear as possible. My aim here was three-fold: a) correctness, b) generality, c) links to other fields. When I am talking about a specific model I am trying to talk about it in the form of an example. I think that the later section on types of artificial neural networks is more suitable as a place in which to put lots of information about ANN types. I think that this is satisfactory as an introduction, but I would particularly enjoy comments from people who are not experts. --Olethros 22:47, 30 December 2005 (UTC)
- Although there has been a clear improvement in the quality of the article, the problem now is that it is way too long and that several sections are redundant. I'm going to think it over a bit more, but I think that we should move several parts to new articles. We also need to structure it more in a way that a layman can understand. As it is now, the first thing you see is a sea of equations - guaranteed to chase most people away. Don't get me wrong - the formal part should by no means be removed. We just need to structure the whole article in a way that is accessible to people who want to get a general understanding of what neural nets are, without diving into the math. --Denoir 22:05, 30 January 2006 (UTC)
[edit] Neural Network Software
I'm thinking about creating an article about neural network simulation software. There are quite a few different types of software, ranging from pure data mining tools to biological simulators, and I think an overview would be interesting. The first step would be to categorize them into subtypes and provide a relatively abstract summary. The second step, a bit more time-demanding, would be to describe the actual software. Something similar can be found for Word processors and other types of software. The overall aim would be to provide a more practical view of neural networks. Any suggestions, objections etc. are most welcome. --Denoir 12:43, 12 January 2006 (UTC)
[edit] Reinforcement Learning and Backprop
The actual text under the section is correct, stating that "ANNs are frequently used in reinforcement learning as part of the overall algorithm." The "Learning paradigms" intro however is not. As far as the neural network goes, its use in RL is plain supervised learning, with (for example, for on-policy TDL) the input being the state and action, and the desired output being the expected reward.
- I am not sure I understand this complaint. In any case, there are neural network architectures that implement gradient-based reinforcement learning and which do not fall under the plain supervised learning paradigm. (Bartlett and Baxter published some articles detailing such a method around 5 years ago).--Olethros 21:11, 1 February 2006 (UTC)
The request to include RL came from the Nature peer review of Wikipedia.
Incidentally, the next issue, backpropagation, is also mentioned there. In the review backpropagation is referred to as a "learning algorithm", and in the article we have "When one tries to minimise this cost using gradient descent for the class of neural networks called Multi-Layer Perceptrons, one obtains the well-known backpropagation algorithm for training neural networks."
Backpropagation is not a learning algorithm per se and it is certainly not tied to gradient descent. What is being propagated depends on both the cost function and the receiving element. And how that information is used to update the system is up to the local learning algorithm. Instead of gradient descent, the propagated error can for instance be used with a local GA to optimize the weights. Not to mention that there are many learning algorithms that use the local error gradient but are not gradient descent. --Denoir 00:07, 1 February 2006 (UTC)
- Of course. You want to do gradient descent on the parameters, so you need to find the gradient of the cost with respect to the parameters. You do that by decomposing the gradient function using the differentiation chain rule. Then, when you use steepest gradient descent to update the parameters, you obtain exactly the backpropagation rule described in Rumelhart's paper. It is just that the application of the chain rule and the use of gradient descent amounts to a 'backpropagation of errors' algorithmically. The exact quantities that are 'backpropagated' depend on the cost function and the actual functions, of course. I have no idea what you mean by 'local GA to optimise weights'. And yes, there are a lot of other learning rules (a lot of which are ad-hoc), but this is about the backpropagation algorithm, which is exactly the same as stochastic steepest gradient descent.--Olethros 21:11, 1 February 2006 (UTC)
-
- Well, not exactly. The difference is between the (backwards) propagation and the local learning rule. If we take a look at the simple case of an interconnection matrix followed by an activation function we have:
-
- forward equation: yi(n) = φ(vi(n)), with vi(n) = Σj wij(n) xj(n)
- propagation equation: local error δi(n) = φ′(vi(n)) ei(n), where ei(n) is the error received from the next element, and the error propagated back to the preceding element is ej(n) = Σi wij(n) δi(n)
-
- The important thing here is to observe the difference between the propagated error and the local error.
- In the case of gradient descent we have the learning rule Δwij(n) = α δi(n) xj(n), but the important thing is that the choice of gradient descent is not at all necessary. At the point when you locally get the propagated error, you can use that information with basically any optimization algorithm. It doesn't even need to be gradient based.
-
- Bottom line, what I am trying to say is that backpropagation is the data flow, not including the local updates. And what kind of optimization (i.e. learning) rule you choose is more or less arbitrary. You can for instance use a genetic algorithm to locally minimize the received propagated error ei and it's still backpropagation.
-
- The only reason why the data flow and the local learning rule get treated as parts of a single algorithm is the typically primitive software implementation of ANNs. However, when they are treated in a more general way (as done by more solid software such as Synapse or NeuroSolutions) that fully decouples data flow and optimization, that error can be avoided. (A rough sketch of this decoupling follows.) --Denoir 02:03, 2 February 2006 (UTC)
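- For illustration, a rough Python sketch of that separation for a single weight matrix plus tanh activation: the propagation step only computes and passes errors, and the local update rule (plain gradient descent here, but it is swappable) is kept separate. The names and values are mine, just for the example:

 import numpy as np

 def forward(W, x):
     v = W @ x                  # interconnection matrix
     return np.tanh(v), v       # activation function

 def propagate(W, v, e_out):
     delta = e_out * (1 - np.tanh(v) ** 2)  # local error at this element
     e_in = W.T @ delta                     # propagated error, passed backwards
     return delta, e_in

 def local_update(W, delta, x, lr=0.1):
     # one possible local rule (gradient descent); any optimizer could go here
     return W - lr * np.outer(delta, x)

 rng = np.random.default_rng(0)
 W = rng.standard_normal((3, 2))
 x = rng.standard_normal(2)
 y, v = forward(W, x)
 delta, e_in = propagate(W, v, y - np.ones(3))  # error against a dummy target
 W = local_update(W, delta, x)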
-
- OK, we are just using the term differently. I thought that historically 'backpropagation' specifically referred to the complete algorithm described in Rumelhart's paper. Looking at the backpropagation article on wikipedia, apparently both uses are common. In any case, I had added the reference to backpropagation as a simple example of the application of a particular optimisation method, cost function and model. Maybe I should elaborate that this refers to the backpropagation algorithm described in Rumelhart's paper. People in my community seem to try to avoid using the term backpropagation because it harks back to an era where ANNs were thought of as mysterious magic thingies - at least that's the impression I get. I think the 'backwards flow' is usually referred to as a part of 'message passing' in the more general Graphical Models framework.--Olethros 14:39, 2 February 2006 (UTC)
-
- Yep, and that's why I didn't change it - because of the historical use of the term. And I personally try (albeit unsuccessfully) to avoid the term neural network altogether, to avoid the whole AI-hype thing ;) By the way, what I know as "Component-Based Neural Networks" or "Component-Based Adaptive Systems" or "Object-Oriented Neural Networks", you seem to call "Graphical Models". I was thinking about writing an article about those, and I haven't heard them called that name. So any links or articles you might know of on the subject would be helpful, so that I can get a more complete picture. --Denoir 16:54, 2 February 2006 (UTC)
-
- OK, that's a tough one. The field is huge. Graphical models are used in statistics and related fields (mostly physics and machine learning, I guess). Basically they are models which model dependencies between variables via some kind of potential function, thus forming a graph. Now, if you want to use statistical inference on these types of models, you usually end up using some kind of message passing. What these messages are really depends on the formalism used. In a Bayesian inference framework it is common to use belief propagation for message passing, coupled with the junction tree algorithm for converting general graphs to trees... but there are other approaches. I think a reliable reference is the compiled volume Learning in Graphical Models, edited by Michael Jordan. His homepage also has some interesting stuff. Also, MacKay's book Information Theory, Inference, and Learning Algorithms, which is free to view online, ventures a bit into graphical models. I never heard of "Component-based" and "Object-Oriented" models before, curiously. --Olethros 14:47, 3 February 2006 (UTC)
-
- Thanks for the reference. I think however that we are talking about two different things. Or rather, what I know as component based neural networks are a specific subset of the general graphical models you are describing. I know graphical models from statistics, specifically belief propagation, but I have not seen them applied to traditional neural networks and gradient based learning. As for nomenclature, component based neural networks are in my experience generally oriented towards practical applications, so the naming conventions are tied to specific software. I think NeuroSolutions uses "object-oriented neural networks" while Peltarion's Synapse uses "component-based neural networks". If I recall correctly, JOONE uses both terms. At the Microsoft PDC last November in LA, the people from Microsoft research used "component-based neural networks" as an example of advanced usage of the .net platform. In addition, Jose Principe (University of Florida professor and the brains behind NeuroSolutions) has published a number of articles on the subject which use the latter term as well. --Denoir 04:00, 16 February 2006 (UTC)
-
- OK, I see. Finally, you could describe a neural network as a graphical model and you could proceed to do Bayesian inference on it by defining appropriate priors for the parameters and so on. By performing fully Bayesian inference you'd end up with a posterior distribution of networks given the data. (i.e. you'd have a new joint distribution for the parameters, or for the parameters and the network architecture if you are feeling ambitious). For the full Bayesian inference usually variational methods are used. If you just want a point estimate of the parameters with maximum posterior probability many algorithms can be applied straightforwardly, including gradient methods. --Olethros 08:46, 29 March 2006 (UTC)
-
[edit] Natural language processing
I seem to remember reading somewhere (Pinker?, The Emperor's New Mind?) that one thing neural networks are bad at is the type of symbol manipulation believed to be needed for natural language processing. Is this true or false? Is it covered by this or some other wikipedia article? where can I read about it? — Hippietrail 21:43, 7 April 2006 (UTC)
-
- Generally speaking, neural nets are not capable of processing symbolic information. Having said that, there are some symbolic->numeric mappings that can be made (for instance, Bayesian confidence propagation neural nets (BCPNN) are capable of mimicking first-order expert systems). Through certain transformations some elementary symbolic information can be processed. This is however very limited.
[edit] How interconnected?
Maybe it's buried in the article somewhere but I can't find it by skimming: Is every node connected to every other node on the next layer? — Hippietrail 18:18, 9 April 2006 (UTC)
- No. At least in an abstract schematic, they are not all connected. But for simulation and computer implementation, since we need a general form, we assume them all connected but put a zero weight on those that must be disconnected. Whether two cells are connected or not (i.e. have zero weight) may be decided by the designer or be adjusted at run time during the learning process. (A small sketch of this zero-weight convention follows.) --Neshatian 06:39, 10 April 2006 (UTC)
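- For instance, a tiny Python sketch of the zero-weight convention (the numbers are arbitrary):

 import numpy as np

 W = np.array([[0.3, 0.0, -1.2],   # the zeros mark cell pairs that are
               [0.0, 0.7,  0.0],   # effectively disconnected
               [0.5, 0.0,  0.9]])
 x = np.array([1.0, 0.5, -1.0])
 print(W @ x)                      # zero-weight connections contribute nothing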
[edit] So what actually happens at each node?
I have a basic understanding of scientific concepts and know computer programming, but not advanced math. The formulae and topic-specific jargon are difficult for me. I can't seem to find what actually happens at each node. I can see that the connections are strengthened or weakened, but what passes along them? Numbers? How do these numbers change at each node? Am I totally off the mark? — Hippietrail 18:57, 9 April 2006 (UTC)
- It seems that the article isn't comprehensive enough, or at least isn't written the way an encyclopaedia article should be. Sometimes we forget that this is an encyclopaedia and that its audience is typical readers with some general knowledge of the matter. Anyway …
- You guessed right. What passes between cells (along connections) are numbers, in a digital system or a computer simulation. It may be a single number or a group of numbers forming a vector. As you mentioned, they are strengthened or weakened by means of a weight parameter. After this weighting, they enter the cells. Cells are usually simple functions, for example a sigmoid function, which is shaped like an 'S'. Just imagine the input signal on the x axis and the output signal on the y axis. That's all. But how can it accomplish such complicated tasks? Actually cells don't know anything about their network, and their functionality is very basic. Consider a summation operator in a complex math formula: its task is similar to what cells do in a neural network (both real and artificial). Of course a group of cells in a network can accomplish more complex tasks. (A minimal sketch of a single cell follows below.)
- Please note that such a network without learning has no meaning. It is the learning process that adjusts the parameters of these functions and causes the whole network to do what we want. --Neshatian 06:31, 10 April 2006 (UTC)
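- For example, a minimal Python sketch of one cell (the input and weight values are arbitrary):

 import math

 def cell(inputs, weights):
     s = sum(w * x for w, x in zip(weights, inputs))  # weighted sum
     return 1.0 / (1.0 + math.exp(-s))                # sigmoid, the 'S' shape

 print(cell([0.5, -1.2, 3.0], [0.8, 0.1, -0.4]))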
[edit] Let's see if I understand this
I think I'm slowly getting there. Please let me know if I'm on the right track:
- In a natural neural net each neurode is independent and hence asynchronous. A signal can come from any dendrite at any time. A signal has no value, either one comes or not. Each time a signal comes from any dendrite it is "accumulated" within the neurode until a certain number have arrived, the threshold. When this threshold is reached a signal is sent down the neurode's axon, which then branches off any number of times so that the signal will be received by any number of other neurodes. Again this signal carries no content or value.
- In an artificial neural net, specifically a feedforward neural net, each neurode is part of a layer and each layer is processed synchronously before all results are passed on to the next layer. Therefore signals are not sent and received but instead the neurode is implemented as a function call or method taking as its parameters an array of values. Each of these values comes from the axons of the previous layer or the inputs, if the value came from a previous layer it will have been adjusted by the "weight" of its connection. The function or method can be seen as having two steps: 1) add up all the weighted values 2) compare this summed value with the threshold. It seems to be up to the implementor to decide whether to weight the values as they are sent down the various axons, or when they are received by the various dendrites.
- Backpropagation is an artificial concept used to adjust the weights of the connections between neurodes, something that happens by other means in natural neural nets. Backpropagation only happens during training. While the neural net is merely operating, the weights are not adjusted. During training, actual results are compared with expected results. This comparison function is confusingly named a "cost function" (presumably because it came from another field where that name was already used?). If the actual answer is pretty close to what was expected, connections which played a part in this round are strengthened; if the answer is pretty far off, the same connections will be weakened.
How am I going so far? Or am I less clear than the jargon and formula filled version? (-:
Maybe these questions and the so-far very helpful answers can be of use in improving the article for lay readers who like me are capable of understanding but lack the scientific background the article currently seems to depend on. — Hippietrail 17:51, 10 April 2006 (UTC)
So if that's a feedforward backpropagation neural net, what would the name be for a net where all nodes are calculated at the same time (like the cells in the Game Of Life), but where there are no layers and any cell might be connected to any other cell, therefore allowing feedback and requiring that the net is run over time rather than in a single iteration? — Hippietrail 17:51, 10 April 2006 (UTC)
-
- You got it right in broad terms. Some comments:
- The weights between the nodes are, generally speaking, not strengthened if the cost output is low and weakened if it is high. Instead the weights are changed in the direction that minimizes the cost function. In standard gradient based learning this is achieved by changing the weights proportionally to the cost function gradient (error gradient) dE/dW. (The weight is changed in proportion to how much the error (cost) function changes when you change the weight; a toy illustration follows.)
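- A toy Python illustration of that point (made-up numbers): the update follows the direction of the gradient, not a simple 'strengthen if good, weaken if bad' rule:

 def update(w, dE_dw, lr=0.5):
     return w - lr * dE_dw     # always move against the gradient

 print(update(1.0, +0.8))      # positive gradient: weight decreases
 print(update(1.0, -0.8))      # negative gradient: weight increases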
-
- If done properly, the adaptation should be as local as possible relative to some sort of signal flow through the nodes. The concept of layers is an implementational convenience that more advanced ANN engines are not restricted to. Generally speaking, if you have no recurrent connections it's a static feed forward net, while if you have feedback loops it's a recurrent net. (A minimal sketch of a recurrent net run over time follows.)
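- As a minimal Python sketch of the recurrent case (weights and sizes are arbitrary): every node may feed every other node, and the whole state is updated over time rather than layer by layer:

 import numpy as np

 rng = np.random.default_rng(0)
 W = rng.standard_normal((5, 5)) * 0.5  # any node may connect to any other
 state = rng.standard_normal(5)

 for t in range(10):                    # the net is run over time
     state = np.tanh(W @ state)         # synchronous update of all nodes
 print(state)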
-
- Here are a few examples:
- "Standard" two layer feed forward neural network. I = signal input, W,W2 = weights, AF1,AF2 = activation functions. C is the comparer (cost) and DO is the desired output.
- Of course, with a more advanced development environment, there's no need to restrict feed forward networks to the straight layered type. For instance you could do something like this:
-
- It's still a feed-forward network that can be used with backprop and gradient updates. Of course, nothing prevents you from throwing in some non-adaptive elements or other types of elements such as Kohonen maps.
- And if you start making recurrent nets with feedback loops, you can really go crazy with topology:
- This is, for instance, a recurrent neural net. Regular backprop can't be used in this case, but there are dynamic versions of the algorithm that work (backpropagation-through-time, for instance). Note that the adaptation algorithm (i.e. the gradient based learning) can still be used.
- Bottom line, given the right tools you can pretty much hook up your neural net any way you want. Different topologies can produce rather interesting results - in an academic sense. For the vast majority of real-world problems topology is not that relevant. --Denoir 08:54, 11 April 2006 (UTC)
-
- Thanks, and some other remarks I would make:
- 1. Unlike natural NNs, the node functions of ANNs are not limited to threshold (step) functions. There may be many different kinds.
- 2. ANNs are not necessarily synchronous. They may be implemented in analog systems. Also, in digital systems you don't need to think of them as layers of cells; just think of the flow of the signal. Of course when a signal reaches a node, it should activate its function. Please notice that some networks are not layered (like Denoir's examples). The best practice for these networks is to follow the signal.
- 3. Some people divide training processes into two categories: offline and online. In offline training, the parameters (weights, biases, …) are adjusted during a separate training phase, after which they are fixed. In online training, the network keeps adapting to new environmental conditions, and the parameters are adjusted according to the new targets.
- 4. The name cost function comes from the fact that it's an error function that we would like to minimize. Actually, learning in ANNs is an optimization (specifically a minimization) process. The cost function is a function E(a, b, c, …), where E is the error (the difference between the actual output and the desired output) and a, b, c, … are the parameters of the network (not its inputs). The training algorithm should find the parameter values that minimize the error E. (A small sketch follows this list.) --Neshatian 11:13, 11 April 2006 (UTC)
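- A small Python sketch of point 4 (the model and data are made up): the cost E is a function of the parameters, with the data held fixed:

 # (input, desired output) pairs; the 'network' is just y = a*x + b
 data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

 def E(a, b):
     # summed squared difference between actual and desired outputs
     return sum((a * x + b - d) ** 2 for x, d in data)

 print(E(2.0, 1.0))  # 0.0: these parameter values minimize the error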
[edit] online learning link
In the Background->Learning section there's a link to online learning. "Online learning" gets redirected to E-learning, which most probably isn't what is meant in this (ANN learning) context. Perhaps some qualified person could fix this (creating a new article 'Online_learning_(ANN)' or linking to the correct article), since I don't know what is meant by online learning here. —The preceding unsigned comment was added by Fiveop (talk • contribs).
- That's right. I removed the link. The link was a redirect to e-learning. Sometimes (rarely), e-learning is called online learning because learners use their computer and network to attend a class and learn the subjects online, although e-learning has a much broader meaning. In the context of ANNs, online learning means that the environment or the nature of the problem is changing, so we need some sort of adaptation. This is simply achieved by taking the last n actual data points as the desired (training) set and adjusting the network parameters (learning) at each iteration (or from time to time). This is the opposite of offline learning, in which, once the network has been trained, it is used for its application and the parameters are not changed anymore. (A small sketch follows.) --Neshatian 14:29, 23 April 2006 (UTC)
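- As a rough Python sketch of that idea (all values made up): keep the last n observations as the training set and keep adjusting a simple linear model while the environment drifts:

 import random
 from collections import deque

 random.seed(0)
 window = deque(maxlen=20)   # the last n = 20 (input, target) pairs
 w = 0.0

 for step in range(500):
     slope = 1.0 if step < 250 else -1.0  # the environment changes halfway
     x = random.uniform(-1, 1)
     window.append((x, slope * x))
     for xi, ti in window:                # one pass over the recent window
         w -= 0.05 * (w * xi - ti) * xi   # gradient step on squared error
 print(w)                                 # ends up tracking the new slope, near -1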
[edit] External links cleanup
I have cleaned up the external links a bit by removing the software links. There are two reasons for this. First of all the selection was pretty arbitrary, and there is lots of software out there. Second, we do have a neural network software article. I think however that instead of piling up links indiscriminately it would be good to stick to the simple principle of adding a link if there is an article describing the software. --Denoir 05:55, 1 July 2006 (UTC)
[edit] Request for English
Would someone be able to add an English description of each learning rule? 202.20.73.30 02:36, 8 August 2006 (UTC)
[edit] Single Layer perceptron
I found the following to be a little vague
In the literature the term perceptron often refers to networks consisting of just one of these units.
what is 'these' referring to? The neurons or the networks? Paskari 17:10, 29 November 2006 (UTC)
[edit] ADALINE
I have a sneaking suspicion that whoever wrote this section copy-pasted it from another site. I am creating an ADALINE page in hopes of simplifying it. Hopefully my efforts won't be in vain. Paskari 17:46, 1 December 2006 (UTC)