User:Paskari
Hello, this is a work in progress. I'm hoping to do all my research on this site so that other students may benefit from my findings and, hopefully, they can help me as well.
I am doing my PhD research in the field of artificial intelligence, more specifically pertaining to neural plasticity.
If you need to contact me, my e-mail address is paskari007@yahoo.ca
Wiki Pet Peeves
Here is a list of things that really piss me off on certain wiki pages:
- pages with no pictures whatsoever
- pages with too high a level of language
- pages where the train of thought is hard to follow
School
I am a first-year PhD student at the University of Southampton in the department of Electronics and Computer Science. I am studying artificial intelligence, but my main area of research is biologically inspired plasticity in a silicon cortex.
I received my undergraduate degree in computer science from York University in 2006.
Studies
I am currently involved in a project with the University of Manchester, creating a multi-chip computer called SpiNNaker. We are hoping to model biological plasticity on a silicon cortex.
Motivation
I wanted to create this page for several reasons:
- The wiki markup language is incredibly powerful.
- I am hoping other people will guide me in my studies
- Notebooks lack the concept of a link
I am hoping to put together enough knowledge on this site, and get enough feedback from other people, to allow me to better model neural plasticity on a computer.
Goal
My PhD requires that I build a computer with the following attributes:
- models virtual neurons
- neuronal interaction carried out by virtual synapses
- virtual NGF ensures plasticity of neurons
Nature of Challenge: Putting Plasticity in a Silicon Cortex
As per our November 21st meeting, my supervisor has requested that I write an informal document to outline some of the challenges my PhD will eventually address. I am to present an outline on December 4 and have it completed around December 15. The document must be 4000 words (no more, no less, and each word must be polysyllabic), and must answer the three questions below:
- What is plasticity in biology
- What are the models around for plasticity
- General models?
- Network implementations
- What are the challenges of running them tractably
- I am probably going to run through all of them and include running times.
Current Implementations
- Networks
- Nodes
Report 1
What is plasticity in Biology
Biological plasticity is the ability of an organism to adapt its biological structure to conform to certain environmental changes. Determining what causes this adaptation has proven to be nontrivial, even with the recent advances in biology and neuroscience. This paper will discuss some of the models which have been proposed for the plasticity of the human brain, as well as different computational models.
Neuroplasticity
Starting from a very high-level view of biological plasticity we have neuroplasticity, which can be considered the plasticity of the brain (although its principles also apply to other areas of the body where nerves interact with muscle tissue). An intriguing aspect of neuroplasticity is that not only can certain clusters of neurons in the human brain alter their concentrations and firing rates, but with repeated learning they can migrate altogether to a different area of the brain. Studies have shown that cortical maps (certain parts of the body, like the hands, being mapped to particular parts of the brain, like the somatosensory cortex) can alter over time, as is the case when the brain repairs itself after trauma or the loss of a limb. Most of the research in this field, however, has been constrained to synaptic plasticity, as neuroscientists are interested in understanding how the brain functions at its lowest levels.
Synaptic Plasticity
Neuroplasticity describes how the human brain manages to learn over time and adapt to its environment, and it does a very good job at that. Unfortunately, it is not particularly helpful to computer scientists because it takes an approach which is too high level. It does not present a model which can be directly mapped onto a computer; instead it provides a means of reasoning, and expects computer scientists to fill in the gaps. It comes as a bit of a surprise, then, that it was a psychologist by the name of Donald Hebb who first proposed the existence of synaptic plasticity, in his famous learning rule which can be paraphrased as "cells that fire together, wire together". He believed that two neurons that are active together when the presynaptic neuron fires are likely to strengthen their connection:
When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased
Hebb's hypothesis was further corroborated by two scientists, Rita Levi-Montalcini and Stanley Cohen, who discovered nerve growth factor, which strengthens the bond between two neurons (work for which they received the Nobel Prize in 1986). They postulated that the following process takes place when a presynaptic neuron A excites a postsynaptic neuron B:
- Neuron A spikes
- Neurotransmitters release charge in the form of ions into the synaptic cleft
- Neuron B releases nerve growth factor into the synaptic cleft
- Nerve growth factor binds onto TrkA receptors of neuron A
It is postulated that it is these nerve growth factors which increase or decrease the strength of neuronal cohesion. Another consequence of this is that a presynaptic neuron with a rapid firing rate will be more tightly coupled with its postsynaptic neuron than had it fired less rapidly. In a sense we find that the firing rate also affects the strength of the coupling between neurons. A rough sketch of this rate-based idea is given below.
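To make the rate dependence concrete, here is a minimal sketch of a plain rate-based Hebbian update (my own toy illustration; the function name, firing rates and learning rate are illustrative and not taken from the nerve growth factor work above):

# Minimal rate-based Hebbian update: the weight between a presynaptic and
# a postsynaptic neuron grows in proportion to their joint activity.
def hebbian_update(w, pre_rate, post_rate, eta=0.01):
    # w: current synaptic weight; rates in Hz; eta: illustrative learning rate
    return w + eta * pre_rate * post_rate

# A presynaptic neuron with a rapid firing rate ends up more tightly coupled
# to its postsynaptic partner than one firing less rapidly, as described above.
w = 0.5
w_fast = hebbian_update(w, pre_rate=50.0, post_rate=20.0)
w_slow = hebbian_update(w, pre_rate=5.0, post_rate=20.0)
print(w_fast > w_slow)  # True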
Spike timing dependent plasticity
This is a relatively new and fascinating discovery made by Henry Markram in 1994; it can be considered an extension of the Hebb learning rule. Hebb claimed that neurons strengthen their bonds if they are simultaneously active during excitation; however, when Hebb made that statement, the technology to measure timing was not as precise as it is now. Markram discovered that the optimal situation for two neurons was not to be simultaneously active, but for a very slight window to be present. Markram postulated that synapses increase their efficacy if the presynaptic neuron is activated momentarily before the postsynaptic neuron, momentarily here referring to a window of 5-40 ms. This model does not refute the previous model of plasticity, whereby nerve growth factors are credited as the main reason for increasing synaptic efficacy; it merely takes a higher-level view of the situation, assumes the nerve growth factors are operating in the background, and presents a model for the optimization of synaptic efficacy. A toy sketch of such a timing window is given below.
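Here is a minimal sketch of a pair-based STDP rule with an exponential timing window (the function name, amplitudes and the 20 ms time constant are illustrative assumptions of mine, not values from Markram's experiments):

import math

# Pair-based STDP sketch: if the presynaptic spike precedes the postsynaptic
# spike the weight change is positive (potentiation), otherwise negative
# (depression), with an exponentially decaying timing window.
def stdp_delta_w(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    dt = t_post - t_pre                        # positive when pre fires first
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression

# Pre firing ~10 ms before post falls inside the window described above
# and strengthens the synapse; the reverse ordering weakens it.
print(stdp_delta_w(t_pre=0.0, t_post=10.0))    # > 0
print(stdp_delta_w(t_pre=10.0, t_post=0.0))    # < 0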
Polychronization
In 2006 Izhikevich proposed a radical new approach to understanding how neurons arrange themselves. Prior to this paper, it was assumed that presynaptic neurons worked independently of other presynaptic neurons and, therefore, only increased their efficacy by correctly timing their spikes in line with the spike timing dependent plasticity rule. Izhikevich presents a new model which proposes that neurons can work together and excite a (single) postsynaptic neuron to achieve a postsynaptic response which is greater than had they all acted independently. Izhikevich's paper helps to explain the presence of precise spike timing dynamics in the brain, even in the presence of axonal delays. He proposes a new term, polychrony, which Hebb no doubt would describe as "cells that arrive together, wire together". An important note is that neurons need not be synchronized (fire together) to arrive at the postsynaptic cell simultaneously. Due to axonal delays, certain presynaptic neurons must fire at different times in order for all of their spikes to arrive at the target simultaneously. Izhikevich puts forth the notion that neurons can form 'polychronized groups' and act collectively.
An interesting, and often overlooked, consequence of this is that through subset construction there can be many more polychronized groups than neurons in the brain, since each neuron can belong to more than one polychronized group. If one adds the fact that propagation delays are also factored into this equation, then we get a near-infinite system. Unfortunately, this makes the model much more computationally intractable. The toy example below illustrates both points.
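The following toy sketch (all neuron names, delays and times are made up) illustrates both points: presynaptic neurons must fire at staggered times for their spikes to arrive together, and the number of candidate groups grows combinatorially with the number of neurons:

from itertools import combinations

# With different axonal delays, presynaptic neurons must fire at staggered
# times for their spikes to arrive at the target neuron simultaneously.
delays = {"A": 8.0, "B": 5.0, "C": 1.0}        # axonal delay to the target, in ms
arrival_time = 10.0
fire_times = {n: arrival_time - d for n, d in delays.items()}
print(fire_times)   # A fires first, C last, yet all spikes arrive at t = 10 ms

# Every subset of neurons is a candidate polychronized group, so the number
# of potential groups quickly exceeds the number of neurons themselves.
neurons = list(delays)
groups = [g for k in range(2, len(neurons) + 1)
          for g in combinations(neurons, k)]
print(len(groups))  # already 4 candidate groups from only 3 neurons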
What are the models around for plasticity
We will constrain ourselves to artificial neural networks which possess learning algorithms, and we will look at both feed forward and recurrent nets, in both supervised and unsupervised environments.
Single Layer Perceptron
An extremely basic form of feed-forward network where each input is assigned a weight and fed forward to the output. The weights are updated according to the following rule:
- w(j) ← w(j) + α(δ − y)x(j)
where:
- x(j) denotes the j-th item in the input vector
- w(j) denotes the j-th item in the weight vector
- y denotes the output
- δ denotes the expected output
- α is the learning rate, a constant with 0 < α < 1
Therefore, the weights are only updated when the output differs from the desired output. A consequence of this is that the perceptron incorporates supervised learning. A minimal training sketch is given below.
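Here is a minimal sketch of training a single-layer perceptron with the update rule above (the learning rate, the AND training set, and the added bias term, treated as an extra weight with constant input 1, are my own illustrative choices):

# Single-layer perceptron sketch using w(j) <- w(j) + alpha * (delta - y) * x(j).
def train_perceptron(samples, alpha=0.1, epochs=20):
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0                                    # bias, treated as an extra weight
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
            # weights only change when the output differs from the target
            for j in range(n):
                w[j] += alpha * (target - y) * x[j]
            b += alpha * (target - y)
    return w, b

# Logical AND is linearly separable, so the perceptron can learn it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([(x, 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0)
       for x, _ in data])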
Multi-layer perceptron
The problem with single-layer perceptrons is that they can only solve linearly separable problems. A multi-layer perceptron generally includes a hidden layer, with different thresholds for its nodes, and is not confined to linearly separable problems. Both the single-layer and multi-layer perceptron are simple designs to program and are computationally tractable; the running time is dominated by the weight multiplications, which are O(n²) per layer for layers of n units. A toy example is given below.
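As a toy example, the hand-picked weights below (my own illustrative choice, not learned weights) show how a single hidden layer lets a perceptron-style network compute XOR, which is not linearly separable:

# Hand-picked weights showing that one hidden layer is enough for XOR,
# which no single-layer perceptron can represent.
def step(v):
    return 1 if v > 0 else 0

def xor_mlp(x1, x2):
    h_or  = step(x1 + x2 - 0.5)        # hidden unit acting as OR
    h_and = step(x1 + x2 - 1.5)        # hidden unit acting as AND
    return step(h_or - h_and - 0.5)    # output: OR and not AND = XOR

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]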
Adaline
Adaline is an extension of the perceptron model, the only difference being that in ADALINE we achieve efficiency by minimizing the least-squares error function:
- E = (d − o)²
where:
- d is the desired output
- o is the actual output
A small gradient-descent sketch of this minimization is given below.
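Here is a minimal sketch of the resulting LMS/gradient-descent update (the learning rate and the toy training pairs are illustrative assumptions of mine): unlike the perceptron, the weights are adjusted in proportion to the error of the linear output itself.

# ADALINE/LMS sketch: gradient descent on the squared error E = (d - o)^2,
# using the linear output o = w.x rather than a thresholded output.
def adaline_step(w, x, d, eta=0.05):
    o = sum(wj * xj for wj, xj in zip(w, x))          # linear output
    return [wj + eta * (d - o) * xj for wj, xj in zip(w, x)]

w = [0.0, 0.0]
for _ in range(50):
    for x, d in [([1, 0], 1), ([0, 1], -1)]:          # toy targets
        w = adaline_step(w, x, d)
print(w)  # approaches [1, -1]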
Radial basis function
?
Kohonen self-organizing network
These are really interesting because they are unsupervised. All the previous network examples involved examining the output and comparing it to some desired output. In a self-organizing map, neurons learn to map information from the input space to output coordinates. Interestingly enough, all this can be done without any knowledge of what the expected end result should be, and the input and the output don't need to have the same dimensions. What happens is that the neurons produce a low-dimensional representation of the input data, but still manage to preserve the topological structure of the input. In a Kohonen network each input unit is connected to every neuron, which creates a lot of edges. Loosely based on how the visual system handles information, the Kohonen network trains different parts of its map to respond similarly to certain input. The input is assigned to a node as follows:
- when the input arrives, its Euclidean distance to all weight vectors is computed
- the BMU (best matching unit) is identified: the neuron with the most similar weight vector
- The weights of the BMU are adjusted towards the input unit
- The weights of the neurons close to the BMU are adjusted towards the input (decreasing intensity with distance)
Once the network has iterated over a large number of input cases (the training process), the mapping process generally becomes routine. The computational constraint of Kohonen maps is the incredibly high number of edges required between the inputs and all the nodes. A sketch of a single training step is given below.
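Here is a minimal sketch of one training pass over a small one-dimensional Kohonen map, following the four steps above (the map size, learning rate, neighbourhood radius and Gaussian neighbourhood function are illustrative assumptions of mine):

import math, random

random.seed(0)
map_size, dim = 10, 3
# each map neuron holds a weight vector of the same dimension as the input
weights = [[random.random() for _ in range(dim)] for _ in range(map_size)]

def train_step(x, lr=0.1, radius=2.0):
    # 1. Euclidean distance from the input to every weight vector
    dists = [math.dist(x, w) for w in weights]
    # 2. the BMU is the neuron with the most similar weight vector
    bmu = dists.index(min(dists))
    # 3./4. move the BMU and its neighbours towards the input,
    #       with intensity decreasing as map distance grows
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] = [wj + lr * h * (xj - wj) for wj, xj in zip(w, x)]

for _ in range(100):
    train_step([random.random() for _ in range(dim)])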
Simple recurrent network
This is a standard feed-forward net, much like a perceptron, with an additional layer alongside the input units called the context units. The middle (hidden) layer has a connection to the context units, and after every iteration the context units are updated with a copy of the hidden layer's activations. Therefore, when the output is back-propagated and the learning rule applied, the network also has a trace of its previous state, and it is in a better position to make predictions. A forward-pass sketch is given below.
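Here is a minimal sketch of the forward pass of such an Elman-style network (the layer sizes and weight values are made up, and no training is shown); the context units simply carry a copy of the previous hidden state into the next step:

import math

# One time step of a simple recurrent network: the hidden layer sees the
# current input plus the context units, which hold the previous hidden state.
def srn_step(x, context, w_in, w_ctx):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w_in[j], x)) +
                        sum(wc * ci for wc, ci in zip(w_ctx[j], context)))
              for j in range(len(w_in))]
    return hidden, hidden[:]              # new hidden state, new context (a copy)

w_in  = [[0.5, -0.3], [0.2, 0.8]]         # input -> hidden weights (made up)
w_ctx = [[0.1, 0.4], [-0.2, 0.3]]         # context -> hidden weights (made up)
context = [0.0, 0.0]
for x in [[1, 0], [0, 1], [1, 1]]:        # a short input sequence
    hidden, context = srn_step(x, context, w_in, w_ctx)
    print(hidden)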
Hopfield Network
This is just a recurrent network where all the connections are symmetric. The symmetry ensures that the system settles into stable states and never engages in chaotic behaviour, as illustrated below.
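A minimal sketch (the stored pattern and network size are illustrative choices of mine): with symmetric Hebbian weights, asynchronous updates never increase the network energy, so a corrupted state relaxes back to the stored pattern instead of behaving chaotically.

# Hopfield sketch: store one pattern with symmetric Hebbian weights, corrupt
# one unit, and let asynchronous updates relax the state back to the pattern.
pattern = [1, -1, 1, -1, 1]
n = len(pattern)
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)] for i in range(n)]

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

state = pattern[:]
state[0] = -state[0]                       # corrupt one unit
print(energy(state))
for _ in range(3):                         # a few asynchronous sweeps
    for i in range(n):
        state[i] = 1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
print(state == pattern, energy(state))     # True, and the energy has dropped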
Stochastic neural networks
Just like a Hopfield network, except that the units update stochastically rather than deterministically. A sketch of such an update is given below.
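As a rough illustration of what 'stochastic' means here (the function name and temperature parameter are my own illustrative choices), the hard threshold of the Hopfield update can be replaced by a probabilistic one:

import math, random

# A unit switches on with a probability given by a sigmoid of its net input,
# rather than by a hard threshold; T is an illustrative temperature value.
def stochastic_update(net_input, T=1.0):
    p_on = 1.0 / (1.0 + math.exp(-net_input / T))
    return 1 if random.random() < p_on else -1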
What are the challenges of running them tractably
The greatest challenge of simulating the brain on a computer is that we are simulating it on a computer. To paint a picture, consider this:
- the world's fastest computer can handle 360 teraFLOPS, whereas the most malnourished human being can do 10 petaFLOPS
In other words, the human brain can handle roughly thirty times as much information. Another problem is that we don't fully understand how the human brain works; therefore, it is almost futile to try to model such a fast processing unit on an inferior system. My personal opinion is that computers are poorly designed. We are trying to take machines which only see the world in one dimension, and only work in units of one or the other (binary), and we would like to model on them a parallel-processing, chemically based system. Even if we do manage to precisely map the chemicals and their effects onto a table and its appropriate functions, we're still left with the problem that we have a binary machine that is only capable of sequential processing.

If the processor were capable of, say, a quadrillion FLOPS, then we could model 100 billion neurons easily, by assigning 10,000 FLOPS to the simulation of each neuron. But that's assuming we fully understand how a neuron works. We could use those 10,000 FLOPS to model the neuron with dendrites, synapses, neurotransmitters, ion channels, a lipid bilayer and axonal delay, but what good are neurons interacting with one another if they don't produce an output? If they don't produce a desirable output? We still don't understand exactly how memory is stored. In a perfect world we'd be able to follow all of the spikes shooting off as a human being tries to solve a Rubik's cube.

I sometimes wonder if scientists stumbled onto digital computers and, against their will and better judgement, chose to stick with them. I personally believe that the brain's greatest asset is the fact that it doesn't dissociate between random-access memory and the memory store. The brain is very versatile in that it uses the same pathways for signal transmission and for memory retention. It pains me that we have models such as polychronized groups, but as of yet we can't use them because of the shortcomings of the digital computer. Perhaps we can still look forward to the possibilities of the quantum computer.
Report 2
In this second report I have to present a biological model which I can map onto the SpiNNaker project. The model has to be chosen from one of the options below:
- an insect
- a simple animal
- a specific region of an animal's brain
Any help anyone could give would be greatly appreciated.
Cities I've visited
Monte Carlo
Pages to create images for
Books
Here is a list of books I have recently read (updated regularly):
- In Search of Enemies (John Stockwell)
- Rogue States (Noam Chomsky)
- Hegemony or Survival: America's Quest for Global Dominance (Noam Chomsky)
- Imperial Ambitions (Noam Chomsky)
- Failed States (Noam Chomsky)
See Also
Dynamical system
Polychronization
Blue Gene
Spike timing dependent plasticity
Connectionism
Self-organizing map
Self Organizing map 2
Self Organizing map 3
Self Organizing map 4
Brain-computer interface
Perceptron
Radial Basis Function
Glial cell
Metaplasticity
Suggested Papers
Letter to the editor
Neural Networks and Physical Systems (Hopfield)