Hierarchical Temporal Memory

From Wikipedia, the free encyclopedia

Hierarchical Temporal Memory (HTM) is a machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex, using an approach somewhat similar to Bayesian networks. The HTM model is based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTMs are presented as biomimetic models of how intelligence infers the causes of its sensory input.

While the model has been criticized by parts of the AI community as rehashing existing material (for example, in the December 2005 issue of the Artificial Intelligence journal[citation needed]), it is novel[citation needed] in proposing specific functions for the cortical layers. As such, it is related to similar work by Tomaso Poggio and David Mumford, among others.

Similarity to other models

Bayesian Networks

An HTM can be considered a form of Bayesian network where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input through a process of finding common spatial patterns and then finding common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data, and afford mechanisms for covert attention.
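The two-stage learning described above, finding common spatial patterns and then grouping patterns that follow one another in time, can be sketched with a toy single node. This is a deliberately simplified illustration, not Numenta's actual algorithm; the class and method names are invented for this example.

```python
from collections import defaultdict

class ToyHTMNode:
    """Illustrative single HTM-style node (a simplification, not NuPIC's code).

    Learning proceeds in two stages, mirroring the description above:
      1. Spatial step: memorize each distinct input pattern seen.
      2. Temporal step: count which patterns follow one another, then merge
         patterns connected by frequent transitions into "temporal groups";
         each group stands for one inferred cause.
    """

    def __init__(self):
        self.patterns = []                    # memorized spatial patterns
        self.transitions = defaultdict(int)   # (prev_idx, next_idx) -> count
        self._prev = None

    def _pattern_index(self, pattern):
        # Spatial step: map raw input to a known pattern, learning new ones.
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)

    def learn(self, pattern):
        # Temporal step: record which pattern followed which.
        i = self._pattern_index(pattern)
        if self._prev is not None:
            self.transitions[(self._prev, i)] += 1
        self._prev = i

    def temporal_groups(self, threshold=2):
        # Merge patterns linked by transitions seen at least `threshold`
        # times, using a small union-find over pattern indices.
        parent = list(range(len(self.patterns)))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for (a, b), count in self.transitions.items():
            if count >= threshold:
                parent[find(b)] = find(a)
        return [find(i) for i in range(len(self.patterns))]

    def infer(self, pattern, groups):
        # Inference: report the temporal group (the "cause") of the input.
        return groups[self.patterns.index(pattern)]
```

Trained on a sequence such as "A, B, A, B, C, D, C, D", the node learns that A and B belong to one temporal group and C and D to another, so it reports the same cause for A and B but a different one for C. In a full HTM, each node's group index would then serve as input to its parent in the tree.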

Neural Networks

Numenta's Director of Developer Services has addressed how HTMs differ from traditional neural networks:

First of all, HTMs are a type of neural network. But in saying that, you should know that there are many different types of neural networks (single-layer feed-forward networks, multi-layer networks, recurrent networks, etc.). 99% of these networks tend to emulate the neurons yet lack the overall infrastructure of the actual cortex. Additionally, neural networks tend not to deal with temporal data very well; they ignore the hierarchy in the brain and use a different set of learning algorithms than our implementation. In a nutshell, HTMs are built according to biology: whereas neural networks ignore the structure and focus on emulating the neurons, HTMs focus on the structure and ignore the emulation of the neurons.

Implementation

The HTM idea has been implemented in a research release of a software API called the "Numenta Platform for Intelligent Computing" (NuPIC). Currently, the software is available as a free download and can be licensed either for general research or for academic research.

The implementation is written in C++ and Python.[citation needed]
