UBC Theses and Dissertations


Self-orthogonalizing strategies for enhancing Hebbian learning in recurrent neural networks

Davenport, Michael R.

Abstract

A neural network model is presented which extends Hopfield's model by adding hidden neurons. The resulting model remains fully recurrent, and still learns by prescriptive Hebbian learning, but the hidden neurons give it power and flexibility that were not available in Hopfield's original network. The key to the model's success is that it uses the emerging structure of its own memory space to establish a pattern in the hidden neurons such that each new memory is optimally orthogonal to all previous memories (the "roll-up" process). As a result, the network actually learns a memory set which is "near-orthogonal", even though the visible components of the memories are randomly selected.

The performance of the new network is evaluated both experimentally, using computer simulations, and analytically, using mathematical tools derived from the statistical mechanics of magnetic lattices. The simulation results show that, in comparison with Hopfield's original model, the new network can (a) store more memories of a given size, (b) store memories of different lengths at the same time, (c) store a greater amount of information per neuron, (d) retrieve memories from a smaller prompt, and (e) store the XOR set of associations. It is also shown that the memory recovery process developed for the new network can greatly expand the radius of attraction of standard Hopfield networks for "incomplete" (as opposed to "noisy") prompts.

The mathematical analysis begins by deriving an expression for the free energy of a Hopfield network in which a near-orthogonal memory set has been stored. The associated mean-field equations are solved for the zero-temperature, single-recovered-memory case, yielding an equation for the memory capacity as a function of the level of orthogonality. A separate calculation derives a statistical estimate of the level of orthogonality that the roll-up process can achieve. Combining this with the capacity-versus-orthogonality result yields a reasonable estimate of the memory capacity as a function of the number of hidden neurons. Finally, the theoretical maximum information content of sets of near-orthogonal memories is calculated as a function of the level of orthogonality, and is compared to the amount of information that can be stored in the new network.
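For reference, the prescriptive Hebbian rule of Hopfield's original model, which the thesis takes as its starting point, stores p binary patterns \xi^\mu \in \{-1,+1\}^N in the coupling matrix

    J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^\mu \xi_j^\mu, \qquad J_{ii} = 0,

and retrieval proceeds by descending the energy

    E(S) = -\frac{1}{2} \sum_{i \ne j} J_{ij} S_i S_j .

Two memories \xi^\mu and \xi^\nu are orthogonal when \xi^\mu \cdot \xi^\nu = 0; a "near-orthogonal" set is one whose pairwise overlaps are small compared with those of random patterns. These are the standard Hopfield definitions, not formulas quoted from the thesis itself.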

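The roll-up idea can be illustrated with a short simulation. The sketch below is a reconstruction from the abstract alone, not the thesis's actual algorithm: hidden-neuron states are chosen by greedy single-bit flips that shrink the new pattern's squared overlaps with all previously stored memories, and the completed pattern is then stored with the Hebbian rule. The function names (store_hebbian, roll_up) and the greedy-flip heuristic are hypothetical.

import numpy as np

def store_hebbian(J, pattern):
    # Prescriptive Hebbian storage: J_ij += xi_i * xi_j / N, zero diagonal.
    J += np.outer(pattern, pattern) / len(pattern)
    np.fill_diagonal(J, 0.0)

def roll_up(visible, memories, n_hidden, sweeps=5, rng=None):
    # Hypothetical "roll-up": start from random hidden bits, then greedily
    # flip hidden bits whenever a flip reduces the summed squared overlaps
    # of the full pattern with all earlier memories.
    rng = rng if rng is not None else np.random.default_rng()
    pattern = np.concatenate([visible, rng.choice([-1, 1], size=n_hidden)])
    cost = lambda p: sum(np.dot(p, m) ** 2 for m in memories)
    for _ in range(sweeps):
        for i in range(len(visible), len(pattern)):  # only hidden bits flip
            before = cost(pattern)
            pattern[i] *= -1
            if cost(pattern) >= before:
                pattern[i] *= -1  # revert: the flip did not help
    return pattern

rng = np.random.default_rng(0)
n_vis = n_hid = 32
N = n_vis + n_hid
J = np.zeros((N, N))
memories = []
for _ in range(6):
    visible = rng.choice([-1, 1], size=n_vis)  # random visible component
    m = roll_up(visible, memories, n_hid, rng=rng)
    memories.append(m)
    store_hebbian(J, m)

# Mean |overlap|/N between distinct memories; random +/-1 patterns of this
# size would give roughly 0.1, so a smaller value indicates near-orthogonality.
pairs = [(a, b) for i, a in enumerate(memories) for b in memories[:i]]
print(np.mean([abs(np.dot(a, b)) / N for a, b in pairs]))

In a quick run, the greedy flips should push the mean normalized overlap noticeably below the random-pattern baseline, which is the qualitative effect the abstract attributes to the roll-up process.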
Rights

For non-commercial purposes only, such as research, private study, and education. Additional conditions apply; see the Terms of Use at https://open.library.ubc.ca/terms_of_use.