Random neural network

The random neural network (RNN) is a mathematical representation of neurons or cells which exchange spiking signals. Each cell is represented by a non-negative integer whose value rises when the cell receives an excitatory spike and drops when it receives an inhibitory spike. Spikes can originate outside the network itself or come from other cells in the network. Cells whose internal excitatory state has a positive value may send out spikes of either kind to other cells in the network according to specific cell-dependent spiking rates. The model has a steady-state mathematical solution that gives the joint probability distribution of the network in product form, in terms of the individual probabilities that each cell is excited and able to send out spikes. Computing this solution requires solving a set of non-linear algebraic equations whose parameters are the spiking rates of individual cells, their connectivity to other cells, and the arrival rates of spikes from outside the network. The RNN also has a gradient-based learning algorithm whose computational complexity is proportional to the cube of the number of cells, and other algorithms such as reinforcement learning can also be used. The RNN has been shown to be a universal approximator for bounded and continuous functions.
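In the original formulation (Gelenbe, 1989, cited below), the steady-state probability q_i that cell i is excited satisfies the nonlinear flow equations

\[
q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i}, \qquad
\lambda^+_i = \Lambda_i + \sum_j q_j w^+_{ji}, \qquad
\lambda^-_i = \lambda_i + \sum_j q_j w^-_{ji},
\]

where r_i is the spiking rate of cell i, \Lambda_i and \lambda_i are the rates of external excitatory and inhibitory spike arrivals, and w^+_{ji}, w^-_{ji} are the excitatory and inhibitory rates directed from cell j to cell i; the product-form solution holds when every q_i < 1. As a minimal sketch (not code from the cited papers; the function and variable names are illustrative assumptions), these equations can be solved by simple fixed-point iteration:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for the RNN excitation probabilities
    q_i = lambda+_i / (r_i + lambda-_i).  Illustrative sketch.

    W_plus[j, i]  -- excitatory rate w+_{ji} from cell j to cell i
    W_minus[j, i] -- inhibitory rate w-_{ji} from cell j to cell i
    Lambda, lam   -- external excitatory / inhibitory arrival rates
    r             -- spiking rates of the cells
    """
    q = np.zeros(len(r))
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rate at each cell
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rate at each cell
        # Cap at 1: a cell with q_i = 1 is saturated and the network is unstable.
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

# Example: two mutually excitatory cells driven by external excitatory spikes;
# the fixed point here is q = (0.8, 0.8).
W_plus = np.array([[0.0, 0.5], [0.5, 0.0]])
W_minus = np.zeros((2, 2))
q = rnn_steady_state(W_plus, W_minus,
                     Lambda=np.array([0.4, 0.4]), lam=np.zeros(2),
                     r=np.array([1.0, 1.0]))
```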

References

  • E. Gelenbe, Random neural networks with negative and positive signals and product form solution, Neural Computation, vol. 1, no. 4, pp. 502-511, 1989.
  • E. Gelenbe, Stability of the random neural network model, Neural Computation, vol. 2, no. 2, pp. 239-247, 1990.
  • E. Gelenbe, A. Stafylopatis, and A. Likas, Associative memory operation of the random network model, in Proc. Int. Conf. Artificial Neural Networks, Helsinki, pp. 307-312, 1991.
  • E. Gelenbe, F. Batty, Minimum cost graph covering with the random neural network, Computer Science and Operations Research, O. Balci (ed.), New York, Pergamon, pp. 139-147, 1992.
  • E. Gelenbe, Learning in the recurrent random neural network, Neural Computation, vol. 5, no. 1, pp. 154-164, 1993.
  • E. Gelenbe, V. Koubi, F. Pekergin, Dynamical random neural network approach to the traveling salesman problem, Proc. IEEE Symp. Syst., Man, Cybern., pp. 630-635, 1993.
  • E. Gelenbe, C. Cramer, M. Sungur, P. Gelenbe, Traffic and video quality in adaptive neural compression, Multimedia Systems, vol. 4, pp. 357-369, 1996.

  • C. Cramer, E. Gelenbe, H. Bakircioglu, Low bit rate video compression with neural networks and temporal sub-sampling, Proceedings of the IEEE, vol. 84, no. 10, pp. 1529-1543, October 1996.
  • E. Gelenbe, T. Feng, K.R.R. Krishnan, Neural network methods for volumetric magnetic resonance imaging of the human brain, Proceedings of the IEEE, vol. 84, no. 10, pp. 1488-1496, October 1996.
  • E. Gelenbe, A. Ghanwani, V. Srinivasan, Improved neural heuristics for multicast routing, IEEE J. Selected Areas in Communications, vol. 15, no. 2, pp. 147-155, 1997.
  • E. Gelenbe, Z. H. Mao, and Y. D. Li, Function approximation with the random neural network, IEEE Trans. Neural Networks, vol. 10, no. 1, January 1999.
  • E. Gelenbe, J.M. Fourneau, Random neural networks with multiple classes of signals, Neural Computation, vol. 11, pp. 721-731, 1999.
  • E. Gelenbe, Z.-H. Mao, Y.-D. Li, Function approximation by random neural networks with a bounded number of layers, Differential Equations and Dynamical Systems, vol. 12, no. 1-2, pp. 143-170, January-April 2004.