The Hopfield model is an artificial neural network designed to model the memory recall process of the brain. It can recover a perfect image or memory when presented with only a part of the original memory. It is also robust in that connections between nodes can be altered to some degree without causing a catastrophic loss of memories. However, the brain is much more than a memory storing device - it has a processing capability. The brain receives input from various sensory sources, extracts certain features from this information, and by comparing this processed information with past experience, can formulate new actions.
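As a reminder of how this recall works, here is a minimal sketch of a Hopfield-style network storing a single binary memory and recovering it from a corrupted copy. The pattern values and network size are invented for illustration:

```python
import numpy as np

# Toy Hopfield network: store one binary (+1/-1) pattern using the
# Hebbian outer-product rule, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt two entries of the stored memory.
probe = pattern.copy()
probe[0] = -probe[0]
probe[3] = -probe[3]

# Repeatedly update every node; the state relaxes to the stored memory.
state = probe.copy()
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))  # → True: the memory is recovered
```

Note the relaxation loop: the network's response is not immediate but emerges as the state settles, which is exactly the behaviour the Perceptron below does without.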
To illustrate these ideas, consider the visual system of the frog. The frog possesses sets of nerve cells just behind the retina whose function is to discriminate only the following four events:

1. a moving object entering the frog's field of view;
2. that moving object coming to rest;
3. a sudden dimming of the whole scene, as when something large passes overhead;
4. a small object moving close to the frog.
The first three events put the frog into a state of alert. The first case can be interpreted as the arrival of an intruder. The second case involves the intruder stopping and the danger becoming real. The third case can be interpreted as the arrival of a predator which is overshadowing the frog. All three cases give rise to the "escape" response. The last case suggests that an insect is close, and it triggers an attack regardless of whether prey is actually there. The responses of the frog, attack or flight, are triggered entirely visually. So the visual neurons of the frog are "wired up" so that, when they receive a picture of the frog's environment from its eyes, that information is processed into one of the four predetermined possibilities. This information is then sent to the rest of the brain in order to produce a response. This ability to extract certain simple features from a perhaps very complex image is commonly referred to as pattern recognition. It is a crucial feature of the brain which allows it to make sense of a very complex and ever-changing world.
The Perceptron - a network for decision making
An artificial neural network which attempts to emulate this pattern recognition process is called the Perceptron. In this model, the nodes representing artificial neurons are arranged into layers. The signal representing an input pattern is fed into the first layer. The nodes in this layer are connected to another layer (sometimes called the "hidden layer"). The firing of nodes on the input layer is conveyed via these connections to this hidden layer. Finally, the activity of the nodes in this layer feeds into the final output layer, where the pattern of firing of the output nodes defines the response of the network to the given input pattern. Signals are only conveyed forward from one layer to a later layer - the activity of the output nodes does not influence the activities on the hidden layer.
In contrast to the Hopfield network, this network produces its response to any given input pattern almost immediately - the firing pattern of the output is automatically stable. There is no relaxation process to a stable firing pattern, as occurs with the Hopfield model.
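The layered, single-sweep computation described above can be sketched as follows. The layer sizes and the random weight values are invented for illustration; they do not come from any trained network:

```python
import numpy as np

# Minimal feedforward (Perceptron-style) pass: input layer -> hidden
# layer -> output layer, with no feedback and no relaxation loop.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 6))  # 6 input nodes feed 4 hidden nodes
W_output = rng.normal(size=(2, 4))  # 4 hidden nodes feed 2 output nodes

def step(x):
    # A node "fires" (1) when its total incoming signal is positive.
    return (x > 0).astype(int)

input_pattern = np.array([1, 0, 1, 1, 0, 0])  # the lit bulbs on screen one
hidden = step(W_hidden @ input_pattern)       # one sweep through the box...
output = step(W_output @ hidden)              # ...and the response is ready
print(output)
```

A single matrix multiplication and threshold per layer is all that happens, which is why the output firing pattern is available at once and is automatically stable.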
To simplify things, we can think of a model in which the network is made up of two screens - the nodes on the first (input) layer of the network are represented as light bulbs arranged in a regular pattern on the first screen. Similarly, the nodes of the third (output) layer can be represented as a regular array of light bulbs on the second screen. There is no screen for the hidden layer - that is why it is termed "hidden"! Instead we can think of a black box which connects the first screen to the second. How the black box functions depends, of course, on the connections to and from the hidden nodes inside it. When a node is firing, we show this by lighting its bulb. See the picture for illustration.
We can now think of the network functioning in the following way: a given pattern of lit bulbs is set up on the first screen. This then feeds into the black box (the hidden layer) and results in a new pattern of lit bulbs on the second screen. This might seem a rather pointless exercise in flashing lights, except for the following crucial observation. It is possible to "tweak" the contents of the black box (adjust the strengths of all the internode connections) so that the system produces any desired pattern on the second screen for a very wide range of input patterns. For example, if the input pattern is a triangle, the output pattern can be trained to be a triangle. If an input pattern containing a triangle and a circle is presented, the output can still be arranged to be a triangle. Similarly, we may add a variety of other shapes to the network input pattern and teach the net to respond only to triangles. If there is no triangle in the input, the network can be made to respond, for example, with a zero.
In principle, by using a large network with many nodes in the hidden layer, it is possible to arrange that the network still spots triangles in the input pattern, independently of what other junk there is around. Another way of looking at this is that the network can classify all pictures into one of two sets - those containing triangles and those which do not. The Perceptron is said to be capable of both recognizing and classifying patterns.
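As an illustration of how such a two-way classification might be achieved (training proper is the subject of the next section), here is a toy sketch using the classic perceptron learning rule. The 4-bit vectors standing in for "pictures with a triangle" and "pictures without" are invented for this example:

```python
import numpy as np

# Toy perceptron learning: adjust connection strengths until a single
# output node fires (1) for "triangle" patterns and stays off (0) for
# the rest. The feature vectors below are made up for illustration.
triangles = [np.array([1, 1, 0, 1]), np.array([1, 1, 1, 1])]
others    = [np.array([0, 1, 0, 0]), np.array([0, 0, 1, 1])]

w = np.zeros(4)  # connection strengths, all starting at zero
b = 0.0          # firing threshold (as a bias)
for _ in range(20):  # repeatedly present the training patterns
    for x, target in [(p, 1) for p in triangles] + [(p, 0) for p in others]:
        fired = int(w @ x + b > 0)
        # Perceptron rule: nudge the weights by the error on this pattern.
        w += (target - fired) * x
        b += (target - fired)

print([int(w @ p + b > 0) for p in triangles + others])  # → [1, 1, 0, 0]
```

After a few presentations the weights settle, and the node fires for the triangle patterns only - the "tweaking" of the black box done automatically rather than by hand.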
Furthermore, we are not restricted to spotting triangles; we could simultaneously arrange for the network to spot squares, diamonds or whatever else we wanted. We could be more ambitious and ask that the network respond with a circle whenever we present it with a picture which contains both triangles and squares but no diamonds. There is another important task that the Perceptron can usefully perform: the network may be used to draw associations between objects. For example, whenever the network is presented with a picture of a dog, its output may be a cat. Hopefully, you are beginning to see the power of this machine at rather complex pattern recognition, classification and association tasks. It is no coincidence, of course, that these are the types of task that the brain is exceptionally good at.
Copyright 1995 - 2009 Intelegen Inc. All rights reserved