Hebbian theory describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in 1949, it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
The theory is often summarized as "Cells that fire together, wire together."[1] It attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Such learning is known as Hebbian learning.
Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following: "The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated' so that activity in one facilitates activity in the other."
Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:
Hebbian theory has been the primary basis for the conventional view that when analyzed from a holistic level, engrams are neuronal nets or neural networks.
Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica.
Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments indicating that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.
From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously and decreases if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.
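As a minimal sketch (the activity traces and the hebbian_weight helper below are illustrative assumptions, not taken from any particular source), this qualitative rule can be expressed as an accumulated product of pre- and postsynaptic activity: co-active neurons end up with a positive weight, while neurons with opposed activity end up with a negative one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activity traces: B fires together with A, C fires opposite to A.
a = rng.choice([-1, 1], size=200)
b = a.copy()   # perfectly correlated with A
c = -a         # perfectly anti-correlated with A

def hebbian_weight(pre, post, eta=0.01):
    """Accumulate the Hebbian product of pre- and postsynaptic activity."""
    w = 0.0
    for x, y in zip(pre, post):
        w += eta * x * y   # grows when the signs match, shrinks when they differ
    return w

print(hebbian_weight(a, b))   # strongly positive weight
print(hebbian_weight(a, c))   # strongly negative weight
```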
This original principle is perhaps the simplest form of weight selection. While this means it can be relatively easily coded into a computer program and used to update the weights for a network, it also limits the range of applications of Hebbian learning. Today, the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves adjusting the weights between learning nodes so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.
The following is a formulaic description of Hebbian learning: (note that many other descriptions are possible)

$$w_{ij} = x_i x_j$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$ and $x_i$ is the input for neuron $i$. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
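A minimal sketch of this pattern-learning form, assuming bipolar (±1) activations as is common in Hopfield-network implementations; the store_pattern helper and the toy pattern are illustrative, not part of the original description.

```python
import numpy as np

def store_pattern(W, x):
    """Per-pattern Hebbian update: w_ij += x_i * x_j, with no self-connections."""
    W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)   # Hopfield convention: w_ii = 0
    return W

n = 5
W = np.zeros((n, n))
pattern = np.array([1, -1, 1, 1, -1], dtype=float)
W = store_pattern(W, pattern)
print(W)   # positive where the two neurons agree, negative where they disagree, 0 on the diagonal
```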
Another formulaic description is:

$$w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $p$ is the number of training patterns, and $x_i^k$ is the $k$th input for neuron $i$. This is learning by epoch (weights updated after all the training examples are presented). Again, in a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections).
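The epoch form can be sketched as a single batch computation over all training patterns; the hebbian_epoch helper and the example patterns below are illustrative assumptions.

```python
import numpy as np

def hebbian_epoch(patterns):
    """Epoch form: average the outer products over all p training patterns."""
    p, _ = patterns.shape
    W = patterns.T @ patterns / p    # w_ij = (1/p) * sum_k x_i^k * x_j^k
    np.fill_diagonal(W, 0.0)         # Hopfield convention: no reflexive connections
    return W

patterns = np.array([[1, -1, 1, 1],
                     [-1, 1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
print(hebbian_epoch(patterns))
```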
A variation of Hebbian learning that takes into account phenomena such as blocking and many other neural learning effects is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena and is also simple to implement.
Hebb's Rule is often generalized as

$$\Delta w_i = \eta \, x_i \, y,$$

or the change in the $i$th synaptic weight $w_i$ is equal to a learning rate $\eta$ times the $i$th input $x_i$ times the postsynaptic response $y$. Often cited is the case of a linear neuron,

$$y = \sum_j w_j x_j,$$
and the previous section's simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable, as in any network with a dominant signal the synaptic weights will increase or decrease exponentially. However, it can be shown that for any neuron model, Hebb's rule is unstable. Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule,[2] or the Generalized Hebbian Algorithm.
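As a rough illustration of this instability, the sketch below (using an assumed toy input stream and a linear neuron $y = \sum_j w_j x_j$) contrasts the plain generalized rule with Oja's rule, whose extra decay term keeps the weight norm bounded.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy input stream with one dominant direction (largest variance along the first axis).
X = rng.normal(size=(1000, 3)) @ np.diag([3.0, 1.0, 0.5])

eta = 0.005
w_hebb = rng.normal(scale=0.1, size=3)   # weights for the plain generalized Hebb rule
w_oja = w_hebb.copy()                    # weights for Oja's rule, same starting point

for x in X:
    y = w_hebb @ x                       # linear neuron: y = sum_j w_j x_j
    w_hebb += eta * y * x                # delta_w = eta * x * y: norm grows without bound

    y = w_oja @ x
    w_oja += eta * y * (x - y * w_oja)   # Oja's rule: decay term bounds the weight norm

print(np.linalg.norm(w_hebb))   # very large: the instability described above
print(np.linalg.norm(w_oja))    # close to 1: settles near the leading principal direction
```

Under these assumed inputs, the plain rule's weight norm grows roughly exponentially along the dominant input direction, while the Oja weights converge toward a unit vector along that same direction.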