Sparse distributed memory

Sparse distributed memory is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988. It stores and retrieves very large amounts of information (on the order of 2^{1000} bits) without demanding that the information be perfectly accurate. Patterns themselves serve as memory addresses, and information is retrieved based on similarity between addresses: items are stored in a single memory even when they are unrelated, and are recalled by similar content.[1]

Formula

The general formula is 2^n, where n is the number of dimensions of the space and 2^n is the number of feasible memory items.[1]
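As a concrete check of the scale involved, the following sketch computes 2^n with Python's arbitrary-precision integers, using n = 1000 as in Kanerva's typical example:

```python
n = 1000           # number of dimensions, as in Kanerva's typical example
items = 2 ** n     # number of feasible memory items in the space {0,1}^n

# 2^1000 is astronomically large: far too many items to store exhaustively,
# which is why the memory samples only a sparse subset of "hard locations".
print(len(str(items)))   # number of decimal digits in 2^1000 -> 302
```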

Critical Distance

The critical distance of a sparse distributed memory can be evaluated approximately by minimizing the following equation, subject to the restrictions d \in \mathbb{N} and d \leqslant n. The proof can be found in [2][3]:


\tilde{f}(d) = \left\{ \frac{1}{2} \cdot \left[ 1 - N\!\left( z < \frac{w \cdot \mathrm{shared}(d)}{\sqrt{\theta}} \right) + N\!\left( z < \frac{-w \cdot \mathrm{shared}(d)}{\sqrt{\theta}} \right) \right] - \frac{d}{n} \right\}^2

Where d is the candidate distance, n is the number of dimensions, and N(z < \cdot) denotes the cumulative normal distribution; the remaining terms are defined in [2][3].

Definition

Principle

Sparse distributed memory is a mathematical representation of human memory that uses high-dimensional space to model the large capacity of memory in a way that mimics the human neural network.[4] It uses the Hamming distance to measure the number of mismatched bits between the original write address and a nearby address from which data is read back.[5] Human memory tends to congregate memories based on similarities between them (even when they are not otherwise related), such as "firetrucks are red and apples are red".[6]
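The Hamming-distance measurement described above can be sketched in a few lines of Python; the bit strings here are made-up examples, not values from the literature:

```python
def hamming(a: str, b: str) -> int:
    """Count mismatched bits between two equal-length bit strings."""
    if len(a) != len(b):
        raise ValueError("addresses must have the same length")
    return sum(x != y for x, y in zip(a, b))

write_address  = "1011001110001111"
nearby_address = "1011001110001011"   # one bit flipped
print(hamming(write_address, nearby_address))   # -> 1
```

A distance of 1 out of 16 bits marks the two addresses as very similar, so data written at one would be read back from the other.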

Neurons

Neurons are electrically excitable cells that transmit information to and from the brain. In a sparse distributed memory system, they serve as the model for how data is sent, recalled, and retrieved.[7]

Computers

Memory inside a computer is random-access memory (RAM), in contrast to sequential-access memory. All of the items in a single list, or array, are stored in RAM. The computer's address decoders work much like neurons in the brain: each address in an array points to an individual line in memory, and a line is returned when it matches, or is sufficiently similar to, the requested address.
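The address-decoder behavior described above can be sketched as a toy sparse distributed memory. All sizes here (128-bit words, 1,000 hard locations, an activation radius of 54 bits) are illustrative choices for a small demonstration, not Kanerva's parameters:

```python
import random

random.seed(0)
N = 128        # bits per address/word (toy size; Kanerva discusses ~1000)
M = 1000       # hard locations: a sparse random sample of the 2^N address space
RADIUS = 54    # a location "fires" when within this Hamming distance

def rand_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

hard_locations = [rand_bits(N) for _ in range(M)]
counters = [[0] * N for _ in range(M)]   # one up/down counter per bit per location

def write(address, word):
    # Every hard location close enough to the address absorbs the word.
    for loc, ctr in zip(hard_locations, counters):
        if hamming(address, loc) <= RADIUS:
            for j, bit in enumerate(word):
                ctr[j] += 1 if bit else -1

def read(address):
    # Pool the counters of all firing locations, then threshold each bit.
    sums = [0] * N
    for loc, ctr in zip(hard_locations, counters):
        if hamming(address, loc) <= RADIUS:
            for j in range(N):
                sums[j] += ctr[j]
    return [1 if s > 0 else 0 for s in sums]

pattern = rand_bits(N)
write(pattern, pattern)                  # store the pattern at its own address

noisy = list(pattern)
for j in random.sample(range(N), 10):    # corrupt 10 of the 128 bits
    noisy[j] ^= 1
print(hamming(read(noisy), pattern))     # prints 0: clean recall from a noisy cue
```

Because many hard locations fire for both the clean and the corrupted address, pooling their counters recovers the original pattern, which is the behavior the example below illustrates with everyday memories.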

Example

Sparse distributed memory is based on pulling in patterns shared between different addresses.

Imagine each line as a different memory address; the following example is from Kanerva's book:

"Why are fire engines painted red?
Firemen's suspenders are red, too.
Two and two are four.
Four times three is twelve.
Twelve inches in a foot.
A foot is a ruler.
Queen Mary is a ruler.
Queen Mary sailed the sea.
The sea has sharks.
Sharks have fins.
The Russians conquered the Finns.
The Russians' color is red.
Fire engines are always rushin'.
So that's why they're painted red!"[1]

As a result, all of these addresses are returned to the user, although they may not be the only addresses in the list.

Uses

"Realizing forgetting"

[Graphs of the decay functions: the exponential decay function and the negated-translated sigmoid function]

At the University of Memphis, Uma Ramamurthy, Sidney K. D’Mello, and Stan Franklin created a modified version of the sparse distributed memory system that represents "realizing forgetting." It uses a decay equation to better show interference in data. The sparse distributed memory system distributes each pattern into approximately one hundredth of the locations, so interference can have detrimental results.[8]

Two possible examples of decay from this modified sparse distributed memory are presented

Exponential decay mechanism: f(x) = e^{-ax}

Negated-translated sigmoid decay mechanism: f(x) = 1 - \frac{1}{1 + e^{-a(x-c)}}

The exponential decay function approaches zero more quickly as x increases; a is a constant (usually between 3 and 9) and c is a counter. For the negated-translated sigmoid function, the decay is similar to the exponential decay function when a is greater than 4.[8]

As the graph approaches 0, it represents how the memory is being forgotten using decay mechanisms.
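Both decay mechanisms can be evaluated directly. This sketch writes the exponential form as e^{-ax}, which decays toward zero as the text describes; the constants a = 5 and c = 1 are arbitrary illustrative picks, not values from the Memphis paper:

```python
import math

def exponential_decay(x, a=5.0):
    # e^{-a x}: drops toward zero faster for larger a (typically 3-9)
    return math.exp(-a * x)

def negated_translated_sigmoid(x, a=5.0, c=1.0):
    # 1 - sigmoid(a (x - c)): stays near 1 until x nears c, then decays to zero
    return 1.0 - 1.0 / (1.0 + math.exp(-a * (x - c)))

for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"x={x:3.1f}  exp={exponential_decay(x):.4f}  "
          f"sigmoid={negated_translated_sigmoid(x):.4f}")
```

The printed table shows the sigmoid variant holding its value near 1 until x approaches c, while the exponential variant begins decaying immediately.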

Genetic memory

Genetic memory uses a genetic algorithm and sparse distributed memory as an artificial neural network. It has been considered for use in creating artificial life.[9]

LIDA

LIDA uses sparse distributed memory to help model cognition in biological systems. The sparse distributed memory space recalls or recognizes an object in relation to the other objects it holds. It was developed by Stan Franklin, the creator of the "realizing forgetting" modified sparse distributed memory system.[10]

References

  1. 1.0 1.1 1.2 Kanerva, Pentti (1988). Sparse Distributed Memory. The MIT Press. ISBN 978-0-262-11132-4.
  2. Brogliato, Marcelo Salhab (2012). Understanding Critical Distance in Sparse Distributed Memory (Thesis).
  3. Brogliato, Marcelo Salhab; Chada, Daniel de Magalhães; Linhares, Alexandre (2014). "Sparse Distributed Memory: understanding the speed and robustness of expert memory". Frontiers in Human Neuroscience 8 (222). doi:10.3389/fnhum.2014.00222.
  4. Pentti Kanerva (1993). "Sparse Distributed Memory and Related Models". Pennsylvania State University. CiteSeerX: 10.1.1.2.8403.
  5. M. J. Flynn, P. Kanerva, and N. Bhadkamkar (December 1989). "Sparse Distributed Memory: Principles and Operation". Stanford University. Retrieved 1 November 2011.
  6. C. George Boeree (2002). "General Psychology". Shippensburg University.
  7. Mastin, Luke. "NEURONS & SYNAPSES". Retrieved 10 November 2011.
  8. 8.0 8.1 Uma Ramamurthy, Sidney K. D’Mello, Stan Franklin. "Realizing Forgetting in a Modified Sparse Distributed Memory System". Computer Science Department and The Institute for Intelligent Systems. The University of Memphis. pp. 1992–1997. Archived from the original on 2006. Retrieved 1 November 2011.
  9. Rocha LM, Hordijk W (2005). "Material representations: From the genetic code to the evolution of cellular automata". Artificial Life 11 (1-2): 189–214. doi:10.1162/1064546053278964. PMID 15811227.
  10. Rao, R. P. N.; Fuentes, O. (1998). "Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory". Machine Learning 31: 87–113.