Tensor product network

A tensor product network, in artificial neural networks, is a network that exploits the properties of tensors to model associative concepts such as variable assignment. Orthonormal vectors are chosen to represent the ideas to be associated (such as variable names and their assigned values), and the tensor product of these vectors constructs a network whose mathematical properties allow the user to easily extract the association from it.

Ranked Tensors

A rank 2 tensor (a matrix) can store an arbitrary binary relation.
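As a minimal illustration (not taken from the article), the following NumPy sketch stores a small binary relation between two hypothetical sets as the entries of a matrix, i.e. a rank 2 tensor; the set elements and the relation are made up for the example.

```python
import numpy as np

# Hypothetical sets and relation, purely for illustration.
colours = ["red", "green", "blue"]
shapes = ["circle", "square"]

# A rank 2 tensor (here a 3x2 matrix) with entry (i, j) = 1
# exactly when colours[i] is related to shapes[j].
R = np.zeros((len(colours), len(shapes)))
R[colours.index("red"), shapes.index("circle")] = 1
R[colours.index("blue"), shapes.index("square")] = 1

# Membership in the relation is read directly from the matrix entry.
print(R[colours.index("red"), shapes.index("circle")])    # 1.0 -> related
print(R[colours.index("green"), shapes.index("square")])  # 0.0 -> not related
```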

Teaching Mode

The network learns which variables have which fillers (symbols) when vectors representing a variable and a filler are presented to the two sides of the network. Teaching is one-shot, in contrast to the iterative learning used by backpropagation and other settling schemes: nothing is annealed or repeatedly adjusted, and no stopping criterion applies.

Method

Teaching is accomplished by adjusting the values of the binding unit memory. If the i-th component of the filler vector is f_i and the j-th component of the variable vector is v_j, then f_i v_j is added to b_ij (the (i, j)-th binding unit memory) for each i and j.

Equivalently, regard the binding units as a matrix B, and the filler and variable as column vectors f and v. Teaching then amounts to forming the outer product f v^T and adding it to B: B' = B + f v^T.
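A minimal NumPy sketch of this teaching step, assuming one-hot (hence orthonormal) filler and variable vectors; the sizes and particular vectors are illustrative, not taken from the article.

```python
import numpy as np

m, n = 4, 3                 # m possible fillers, n possible variables (illustrative)
B = np.zeros((m, n))        # binding-unit memory, all zero before teaching

f = np.eye(m)[1]            # filler vector  (a one-hot basis vector)
v = np.eye(n)[0]            # variable vector (a one-hot basis vector)

# One-shot teaching step: add the outer product f v^T to the binding units,
# i.e. b_ij += f_i * v_j for every i and j.
B += np.outer(f, v)

# A second association is stored the same way, again in a single step.
f2 = np.eye(m)[3]
v2 = np.eye(n)[2]
B += np.outer(f2, v2)
```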

Retrieval Mode

For exact retrieval: if the binding matrix/tensor has m rows and n columns, it can represent at most m distinct fillers and n distinct variables, since at most m mutually orthonormal filler vectors fit in m dimensions (and likewise for the variables).

Method

To retrieve the value/filler for a variable v from a rank 2 tensor with binding unit values b_ij, compute f_i = Σ_j b_ij v_j for each i; the resulting vector f represents the filler. To test whether variable v has filler f, compute D = Σ_i Σ_j b_ij v_j f_i, where D is a boolean (1 or 0).
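A minimal NumPy sketch of both retrieval operations, continuing the one-hot (orthonormal) encoding assumed in the teaching sketch above; the specific vectors are illustrative.

```python
import numpy as np

m, n = 4, 3
B = np.zeros((m, n))
f, v = np.eye(m)[1], np.eye(n)[0]
B += np.outer(f, v)          # teach: variable v has filler f

# Retrieve the filler bound to variable v: f_i = sum_j b_ij v_j, i.e. f = B v.
retrieved = B @ v
print(retrieved)             # recovers f exactly: [0. 1. 0. 0.]

# Test whether variable v has filler f: D = sum_i sum_j b_ij v_j f_i = f^T B v.
D = f @ B @ v
print(D)                     # 1.0 -> the binding is present

other_f = np.eye(m)[2]
print(other_f @ B @ v)       # 0.0 -> this binding was never taught
```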
