The sight of a familiar face evokes a name. A simple odor triggers the vivid recollection of a past meal and the people who were there. These everyday experiences illustrate that the facts and ideas stored in our memories are associated with each other. Philosophers and psychologists have argued that association is the basic principle of all mental activity. Neuroanatomists have studied the way that neurons are bound together in a web of synaptic connections. The two traditions converge in an intuitively appealing idea: Perhaps synaptic connections are the material substrate of mental associations.
This idea has been formalized in a number of neural network models of associative memory. A fundamental assumption in these models is that information is transferred back and forth between neural activity and synaptic connections. When novel information first enters the brain it is encoded in a pattern of neural activity. If this information is stored as memory, the neural activity leaves a trace in the brain in the form of modified synaptic connections. The stored information can be recalled when the modified connections again become active. This scheme assumes that synaptic connections remain stable for long periods of time, whereas neural activity is ephemeral and represents immediate experience only.
The transfer of information from neural activity to synapses is hypothesized to occur through Hebbian synaptic plasticity: A long-lasting increase in synaptic efficacy is induced if the presynaptic neuron repeatedly participates in the firing of its postsynaptic neuron (Box E–3). Some prominent forms of long-term potentiation involving the NMDA-type glutamate receptor are regarded as Hebbian (see Chapter 67). Conversely, the transfer of information from synapses to neural activity is thought to occur through a process of pattern completion in which activity spreads through an assembly of neurons coupled by synaptic loops. This idea is explained in more detail below.
Hebbian Plasticity May Store Activity Patterns by Creating Cell Assemblies
How might Hebbian plasticity transfer information from neural activity into the synapses of a neural network? One scenario is illustrated in Figure E–5, which depicts a population of excitatory neurons that could represent pyramidal neurons in the hippocampus or neocortex. It is common to assume that Hebbian plasticity modifies the synapses between pyramidal neurons but does not modify synapses involving inhibitory neurons. According to this theory, inhibitory neurons play only a supporting role in memory storage and recall by helping to prevent overexcitation of the network, or "confused" recall of multiple memories at the same time. For simplicity, inhibitory neurons are not included in the model in Figure E–5.
Box E–3 Mathematical Models of Hebbian Plasticity
Associative memory networks were developed by a number of researchers.1 In their modern form they have two essential features. First, the synaptic strengths are specified by a special type of matrix, called a correlation matrix. Second, the neurons are nonlinear, which enhances the ability of the models to perform the operation of pattern completion described in the main text.2
To store an activity pattern in long-term memory in a nonlinear network of the form written in Equation E–1 in Box E–1, synaptic strengths are changed by the Hebbian rule

$$\Delta W_{ij} = \alpha\, x_i x_j \quad (\text{E–3})$$

where the constant α > 0 sets the size of the change. This synaptic learning rule is Hebbian because it depends on the simultaneous activation of the postsynaptic neuron i and the presynaptic neuron j. (For binary neurons the change in Equation E–3 is nonzero only if x_i and x_j are both equal to 1.) If Equation E–3 is repeatedly applied with activity patterns drawn from an ensemble, then W_ij becomes proportional to the statistical correlation between the activities of neurons i and j (hence the term correlation matrix).
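As a concrete illustration, the following is a minimal Python sketch of Equation E–3; the network size, the example patterns, and the unit learning rate are illustrative choices, not part of the model specification.

```python
import numpy as np

# Minimal sketch of the Hebbian rule of Equation E-3, assuming binary {0,1}
# activity patterns and a learning rate of 1 (illustrative choices).

def hebbian_update(W, x):
    """One application of Equation E-3: Delta W_ij = x_i * x_j."""
    return W + np.outer(x, x)

n = 5
W = np.zeros((n, n))
patterns = [np.array([1, 1, 0, 0, 1]),
            np.array([0, 1, 1, 0, 0])]
for x in patterns:
    W = hebbian_update(W, x)
# Applied repeatedly to patterns drawn from an ensemble, W_ij grows in
# proportion to the correlation between the activities of neurons i and j.
```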
A popular modification of the basic Hebbian rule is to replace Equation E–3 by the covariance rule

$$\Delta W_{ij} = \alpha\,(x_i - \langle x_i \rangle)(x_j - \langle x_j \rangle) \quad (\text{E–4})$$

where $\langle x_i \rangle$ is the average activity of neuron i. When this rule is applied to an ensemble of activity patterns, W_ij becomes proportional to the statistical covariance between the activities of neurons i and j.
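Under the same illustrative assumptions, the covariance rule of Equation E–4 can be sketched by subtracting each neuron's average activity before forming the outer products:

```python
import numpy as np

# Sketch of the covariance rule of Equation E-4, assuming the average
# activities are estimated from the stored ensemble itself.

def covariance_weights(patterns):
    X = np.asarray(patterns, dtype=float)
    deviations = X - X.mean(axis=0)            # x_i minus its average <x_i>
    return deviations.T @ deviations / len(X)  # proportional to covariance

patterns = [[1, 1, 0, 0, 1],
            [0, 1, 1, 0, 0],
            [1, 0, 1, 1, 0]]
W = covariance_weights(patterns)               # W_ij ~ covariance of i and j
```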
The number of patterns that can be stored in synaptic connections is limited because the patterns eventually interfere with each other (see Figure E–6). The maximal number that can be stored is called the capacity of the network. In 1985 Daniel Amit, Hanoch Gutfreund, and Haim Sompolinsky introduced techniques from the statistical physics of disordered systems to calculate memory capacity. Later researchers used these techniques to find that the covariance rule of Equation E–4 is generally superior to the basic Hebbian rule of Equation E–3 because it reduces interference between patterns and therefore enhances storage capacity.
Physiologists have found that Hebbian plasticity can depend on the precise timing of presynaptic and postsynaptic spiking. One example of such a mechanism is spike timing-dependent plasticity (see Chapter 67). To incorporate this dependence, models more sophisticated than Equations E–3 and E–4 have been proposed (see the bibliography at the end of the appendix).
The main text of this appendix focuses on the use of the Hebbian rule in models of associative memory. However, the Hebbian rule has also been used to model the development of retinotopic maps in visual areas of cortex. Also, it is believed that Hebbian plasticity allows neuronal activity to influence the patterning and refinement of connections during neural development (see Chapter 56). In 1973 Christoph von der Malsburg advanced a neural network model of primary visual cortex in which Hebbian plasticity underlies the self-organization of orientation maps when the model network is exposed to visual stimuli.
In 1982 Teuvo Kohonen proposed a simplification of von der Malsburg's model, known as the self-organizing map (SOM). Kohonen showed how the SOM served as a general method of mapping the abstract high-dimensional space of stimuli onto a low-dimensional neural representation, as in a sheet of cortical tissue. Kohonen's learning rule causes neighboring neurons in the network to develop preferences for similar stimuli. This yields a low-dimensional map of the stimulus space based on similarities between sensory inputs.
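The following is a minimal sketch of a Kohonen-style SOM in Python; the one-dimensional neuron sheet, the two-dimensional stimuli, and all parameter values are illustrative assumptions rather than Kohonen's original settings.

```python
import numpy as np

# Minimal sketch of a self-organizing map (SOM): a 1-D sheet of model
# neurons learns a map of 2-D stimuli. Parameter values are illustrative.

rng = np.random.default_rng(0)
n_neurons = 50                        # neurons arranged along a line
weights = rng.random((n_neurons, 2))  # each neuron's preferred stimulus

def train_som(stimuli, weights, eta=0.2, sigma=5.0, epochs=20):
    positions = np.arange(len(weights))
    for _ in range(epochs):
        for x in stimuli:
            # 1. Find the best-matching unit (closest preference).
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # 2. The winner and its neighbors in the sheet move toward the
            #    stimulus, with a Gaussian falloff, so neighboring neurons
            #    develop preferences for similar stimuli.
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += eta * h[:, None] * (x - weights)
    return weights

stimuli = rng.random((200, 2))        # stimuli drawn uniformly from a square
weights = train_som(stimuli, weights)
```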
The initial state of this network has no connections between neurons (Figure E–5A). This should not be taken literally; it depicts an initial situation in which synapses exist but are all very weak. Now suppose that three neurons are stimulated by synapses from sources outside the circuit (Figure E–5B). This situation corresponds roughly to activation of a distributed pattern of neural activity in the brain by a sensory stimulus, as is often observed in neurophysiological studies. Every synapse between a pair of active neurons is exposed to coincident presynaptic and postsynaptic activity and is therefore strengthened.
After this strengthening has occurred, the group of three neurons strongly coupled by excitatory synapses forms a cell assembly (Figure E–5C). Neuroscientists generally use this term rather imprecisely; more precise definitions come from mathematical models of networks, and they generally require strong mutual excitatory interactions within a group of neurons. The word "assembly" emphasizes that the group did not initially exist but was constructed through the strengthening of the synapses between the neurons in the group, which in turn was caused by the simultaneous activation of those neurons.
In effect, the information in the original activity pattern is transferred to the pattern of strong synapses in the cell assembly. If the synaptic changes persist, the information is maintained even after the original activity pattern has ceased. It could be said that the network has learned an activity pattern by storing it in its synaptic strengths. As a consequence, the cell assembly can recreate the original activity pattern, as explained below.
Cell Assemblies Can Complete Activity Patterns
If external input is now delivered to just one neuron of the three-cell assembly, that neuron starts to generate action potentials (Figure E–5D). Although the external inputs to the other two neurons do not change, they also become activated after a short latency because they are driven by synaptic input from the first neuron. This spreading of activation from one neuron to the other two is an example of pattern completion, an idea that researchers have scaled up to very large networks.
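A minimal sketch of this completion process, assuming binary threshold neurons and a single stored three-neuron assembly as in Figure E–5 (the network size and firing threshold are illustrative):

```python
import numpy as np

# Pattern completion in a binary associative network: cue one member of a
# stored cell assembly and let activity spread to the rest.

n = 8
pattern = np.zeros(n)
pattern[:3] = 1                         # stored pattern: neurons 0-2 active

W = np.outer(pattern, pattern)          # Hebbian rule strengthens co-active pairs
np.fill_diagonal(W, 0)                  # no self-connections

def recall(cue, W, theta=0.5, steps=5):
    """Let activity spread through the strengthened synapses."""
    x = cue.copy()
    for _ in range(steps):
        x = (W @ x > theta).astype(float)   # each neuron thresholds its input
    return x

cue = np.zeros(n)
cue[0] = 1                              # stimulate one assembly member only
print(recall(cue, W))                   # the full pattern 1,1,1,0,... is completed
```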
Such a neural process is thought to be responsible for the psychological phenomenon of memory retrieval. Consider the example of seeing a friend and remembering his name and occupation. Partial information triggers recall of more information based on the completion of a neural activity pattern.
In the simulation in Figure E–5, stimulating any one of the three neurons results in completion of the entire pattern. This is a kind of symmetry and is analogous to the way in which memory retrieval can be symmetric; it is equally possible for a face to evoke recall of a name and vice versa. Symmetric pattern completion is possible for a cell assembly because its connectivity lacks directionality: Activity can spread in any direction within the assembly. This contrasts with a feedforward network such as the perceptron, in which activity in the final layer cannot spread backward to the rest of the network.
In the CA3 region of the hippocampus, a brain area that has been implicated in episodic memory (see Chapter 65), pyramidal neurons make synapses onto each other in a recurrent fashion, and Hebbian plasticity has been observed at these synapses. Therefore CA3 seems a prime candidate for a network containing cell assemblies, as many theorists have speculated. The hypothesis that Hebbian synaptic plasticity stores memories as cell assemblies in CA3 has been investigated in studies of hippocampal place cells in rodents, the connections between which are thought to store spatial memories (see Chapter 67). Susumu Tonegawa and his colleagues created mutant mice in which a subunit of the NMDA-type glutamate receptor was deleted from CA3 pyramidal neurons. Long-term potentiation was impaired at these synapses, supporting the idea that the NMDA receptor is critical for Hebbian synaptic plasticity. Interestingly, the mutant mice are still able to form spatial memories but have difficulty recalling them if some of the original visual landmarks are missing. Tonegawa and his colleagues interpreted this deficit in recall as impaired pattern completion, and ascribed it to impaired formation of cell assemblies in CA3.
Cell Assemblies Can Maintain Persistent Activity Patterns
Up to now our discussion of memory storage in synaptic connections has focused on long-term memory. Recall of a long-term memory occurs through the reactivation of a previous activity pattern, triggered by activation of a subset of the pattern. Once the activity pattern has been reactivated it can persist even after the extrinsic drive has ended because the neurons excite each other through their mutual excitatory connections.
Such persistent activity could also function as a short-term memory trace of the input that activated it. Short-term memory is generally regarded as distinct from long-term memory. For example, the famous patient H.M. lost the ability to store new long-term memories but had intact short-term memory, evidence that these are two distinct functions (see Chapter 65).
A classic example of short-term memory is the temporary memorization of a phone number for a few seconds after reading or hearing it. After dialing the phone number the information is rapidly lost from memory. If the phone number becomes too long, as when dialing internationally, it can be difficult to retain for even a few seconds. As this example illustrates, short-term memories last for only a very short time and contain limited information. In contrast, long-term memories can last a lifetime, and our brains seem to have virtually unlimited capacity for them.
As described in Chapter 67, similar short-term persistent activity has been observed in the primate brain during the performance of delayed-match-to-sample tasks that are designed to test short-term memory. For example, in each trial of an experiment a monkey views a sample image on a screen, then a blank screen during a delay period, and then another image. The monkey is trained to indicate whether the second image matches the first. In the primary visual cortex neural activity is observed only when the images are presented. However, in higher-level areas, such as inferotemporal and prefrontal cortex, persistent activity is also observed during the interval between images (see Figure 28–11). By sampling many neurons during this delay period, neurophysiologists have recorded distinct activity patterns corresponding to different sample images, suggesting that these activity patterns encode information about previously viewed images.
To summarize, the cell assembly concept has been used to explain both long-term and short-term memory. According to this concept a long-term memory is stored as strengthened connections between neurons in a cell assembly, while a short-term memory is maintained by persistent activity of the neurons in a cell assembly. Whether these ideas are correct remains uncertain, and some of their problematic aspects will be noted later. It should also be noted that not all associative memory networks depend on persistent activity. For example, in the network in Figure E–5 some numerical parameters could be changed so that pattern completion occurs during the stimulus presentation but not after the stimulus is gone.
Interference Between Memories Limits Capacity
Figure E–5 illustrates the storage and retrieval of a single activity pattern. In fact, however, a single network can store multiple patterns. If Hebbian synaptic modifications store multiple patterns, many cell assemblies are created. A stored pattern can be retrieved by stimulating some of the neurons in the corresponding cell assembly, leading to completion of the entire activity pattern.
However, the storage capacity of a network is not infinite. If the cell assemblies are completely nonoverlapping (share no neurons), they do not interfere with each other (Figure E–6A). In this case the number of patterns that can be stored equals the total number of neurons in the network divided by the size of a cell assembly; for example, a network of 1,000 neurons divided into assemblies of 100 neurons can store only 10 patterns.
Higher storage capacity can be achieved if the cell assemblies overlap (share neurons). However, overlap means there is the possibility of interference (Figure E–6B). Interference can lead to corruption of memories, so that the activity patterns expressed by the network deviate from the original patterns that were stored by Hebbian plasticity. If we attempt to store too many patterns in the network, interference eventually becomes catastrophic—the stored patterns disappear altogether. Therefore interference effects limit the storage capacity of the network, and mathematical theorists have studied these effects in detail.
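One way to see interference numerically is to store increasing numbers of random patterns with the outer-product rule and test recall from a corrupted cue. The following sketch assumes a Hopfield-style network of ±1 neurons; the network size and pattern counts are illustrative.

```python
import numpy as np

# Rough demonstration that interference limits capacity: recall quality
# degrades sharply once too many patterns are superimposed in the weights.

rng = np.random.default_rng(1)
n = 200                                        # number of neurons

def store(patterns):
    """Superimpose the Hebbian outer products of all patterns."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                     # no self-connections
    return W

def recall_overlap(W, p, flips=20, steps=10):
    """Cue with a corrupted pattern and report overlap with the original."""
    x = p.astype(float).copy()
    x[rng.choice(n, flips, replace=False)] *= -1   # corrupt the cue
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)        # threshold update
    return float(x @ p) / n                        # 1.0 means perfect recall

for m in (5, 20, 60):                          # store m random patterns
    patterns = rng.choice([-1, 1], size=(m, n))
    W = store(patterns)
    print(m, recall_overlap(W, patterns[0]))   # overlap falls for large m
```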
Synaptic Loops Can Lead to Multiple Stable States
To describe how a cell assembly can maintain different types of activity patterns, we use the concept of multistability, a term from dynamical systems theory.
In Figure E–5 the circuit is active during interval E but quiescent during interval G. During the quiescent and active states there is no external input, yet the circuit has two very different firing patterns. Thus the network possesses two possible stable states (active and inactive) for a single input, a phenomenon known as bistability. The transient currents at interval F in Figure E–5 switch the circuit from one stable state to the other.
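Even a single self-exciting firing-rate unit exhibits this kind of bistability. The following sketch (sigmoid nonlinearity; parameter values chosen purely for illustration, not fitted to any biological circuit) switches from the quiescent to the active stable state after a transient input pulse, much as the transient currents do in Figure E–5:

```python
import numpy as np

# Bistability in a self-exciting rate unit: dr/dt = -r + f(w*r + b + I).
# Strong recurrent excitation w and negative bias b give two stable states.

def f(u):
    return 1.0 / (1.0 + np.exp(-u))      # sigmoid firing-rate function

def run(r0, pulse, w=10.0, b=-5.0, dt=0.1, steps=400):
    """Integrate the rate equation with a brief external current pulse."""
    r = r0
    for t in range(steps):
        I = pulse if 100 <= t < 150 else 0.0   # transient input
        r += dt * (-r + f(w * r + b + I))
    return r

print(run(0.0, pulse=0.0))   # stays in the quiescent state (rate near 0)
print(run(0.0, pulse=5.0))   # the pulse switches it to the active state (near 1)
```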
A network with multiple cell assemblies is said to be multistable because activation of any one cell assembly produces a distinct stable state of the network. When a multistable system is at a steady state, this state depends on past as well as present input. This dependence on the past explains why the transient inputs of Figure E–5 can have a lasting effect on activity.
Multistability is caused by the connectivity of the cell assemblies. More generally, networks that contain synaptic loops (a cell assembly is a special case of this) can have multiple stable states. In contrast, a perceptron is a type of network that has no loops and does not exhibit multistability. A network with multiple stable states is often called an attractor neural network, borrowing a term from dynamical systems theory.5 A stable steady state is called an attractor of the dynamics because dynamical trajectories (the temporal evolution of the activity of the network) that start from similar initial conditions will converge (are attracted) to the stable state.
Symmetric Networks Minimize Energy-Like Functions
Further insight into multistability can be gained from a physical analogy. If the curved surface shown in Figure E–7 is slippery, a small object placed on the surface will slide downhill, ultimately coming to rest near the bottom of a valley, assuming that there is a little friction to damp the motion. The object could end up in any one of the valleys, depending on its starting point. Therefore the dynamics of the object is multistable.
The object's motion can be understood using the physical concept of energy. Because the gravitational potential energy of the object is a linear function of its height, the surface can be regarded as a graph of energy versus location in the horizontal plane. The object behaves as if its goal were to minimize its potential energy, in the sense that its downhill motion causes the potential energy to decrease until a minimum is reached. The multiple stable states correspond to the multiple minima of the energy.
In an influential paper published in 1982, John Hopfield constructed a mathematical function that assigns a numerical value to any activity pattern of a neural network model. He proved mathematically that this number is guaranteed to decrease as the activity of the network evolves in time until a stable state is reached. Because of this property, Hopfield's function represents the "energy of the network," and we will call it an energy function.6 The energy of the network is analogous to the height of the sliding object in Figure E–7, and the activity of the network is analogous to the horizontal location of the sliding object. Of course, Figure E–7 is an impoverished depiction of a network energy function because the activity pattern of a network of n neurons is an n-dimensional vector, not a two-dimensional location in the horizontal plane.
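For binary neurons $s_i = \pm 1$ with symmetric weights ($W_{ij} = W_{ji}$) and thresholds $\theta_i$, a standard statement of Hopfield's 1982 function is (sign conventions vary across texts):

$$E = -\frac{1}{2}\sum_{i}\sum_{j} W_{ij}\, s_i s_j + \sum_{i} \theta_i s_i$$

Each asynchronous update that aligns a neuron with its net synaptic input either lowers E or leaves it unchanged, so the dynamics must eventually settle into a local minimum of E, which is a stable state.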
As a special case, Hopfield applied the energy function to associative memory networks, showing that the process of memory recall by pattern completion (Figure E–5) is analogous to an object sliding down an energy landscape (Figure E–7).
Hopfield's construction of the energy function required that the interactions between neurons be symmetric: Any connection from one neuron to another is mirrored by another connection of equal strength in the opposite direction. This is the case, for example, in the cell assemblies of Figure E–5. Although perfect symmetry of interactions is not biologically plausible, approximate symmetry might be a property of some biological neural networks, so that Hopfield's networks might be regarded as an idealization of them.
In 1986 Hopfield and David Tank pointed out that a neural network with an energy function can be used to perform a type of computation known as optimization. Many interesting problems in computer science can be formulated as the optimization of some kind of function. For example, in the traveling salesman problem, a salesman would like to find the shortest route by which he can visit multiple cities and return to his starting point. In this problem the function to be optimized is the length of the route. Hopfield and Tank showed how to construct a network that finds solutions to the traveling salesman problem. The energy of their network is equal to the length of the route, which is encoded by the activity of the network. Because the network converges to a minimum of the energy, it effectively searches for an optimal solution to the traveling salesman problem.
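For an n-city tour, with $V_{Xi} = 1$ indicating that city X occupies position i in the route and $d_{XY}$ the distance between cities X and Y, one common statement of the Hopfield–Tank energy is the following (the exact penalty terms and the constants A, B, C, D vary across presentations):

$$\begin{aligned}
E ={}& \frac{A}{2}\sum_{X}\sum_{i}\sum_{j\neq i} V_{Xi}V_{Xj}
   + \frac{B}{2}\sum_{i}\sum_{X}\sum_{Y\neq X} V_{Xi}V_{Yi} \\
  &+ \frac{C}{2}\Big(\sum_{X}\sum_{i} V_{Xi} - n\Big)^{2}
   + \frac{D}{2}\sum_{X}\sum_{Y\neq X}\sum_{i} d_{XY}\, V_{Xi}\big(V_{Y,i+1} + V_{Y,i-1}\big)
\end{aligned}$$

The first three terms penalize invalid tours (a city visited twice, two cities at the same position, the wrong number of entries), and the last term is the tour length, so minima of E correspond to short valid routes.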
This general approach was applied by many others to construct neural networks that solve a variety of optimization problems. The Hopfield-Tank approach could be viewed as an extension of the cell assembly to a general method of encoding a computational problem in the connections of a recurrent network, which solves the problem by converging to a steady state.
Hebbian Plasticity May Create Sequential Synaptic Pathways
In the simulations in Figure E–5 it is assumed that synapses between pairs of neurons are strengthened when the two neurons are active simultaneously. If signaling flows in both directions between the neurons, the synapses in both directions will be strengthened, preserving the symmetry of the interactions between the neurons.
However, Hebb actually argued that a synapse is strengthened when the presynaptic neuron is activated immediately before the postsynaptic neuron (activity in the presynaptic cell leads to an excitatory postsynaptic potential that contributes to firing the postsynaptic action potential). Hebbian plasticity that depends on temporal order has been observed in spike timing-dependent plasticity (see Chapter 67). The temporal asymmetry of this learning rule can lead to synaptic connectivities that are asymmetric, as opposed to those shown in Figure E–5.
Such asymmetry in the connectivity could be appropriate for the storage and recall of motor sequences, which are needed for skills such as playing a musical instrument and consist of temporally ordered steps. Motor sequences are presumably created by sequential activation of groups of neurons. One can imagine storing a sequence in a network by giving it extrinsic inputs that activate neurons in some order. Hebbian plasticity would then lead to a set of strengthened connections organized like the perceptron of Figure E–1. Later the sequence could be recalled by activating the first group of neurons, which would activate the second group, and so on; the network would generate the sequence that had been stored by extrinsic input. This would be another example of pattern completion, one in which the pattern is a temporal sequence rather than a stable state as in Figure E–5, as the sketch below illustrates.
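A minimal sketch of sequence storage and replay, assuming binary neurons, nonoverlapping groups, and a temporally asymmetric rule that strengthens the connection from neuron j to neuron i when j fires one time step before i (all sizes are illustrative):

```python
import numpy as np

# Sequence storage with a temporally asymmetric Hebbian rule:
# W[i, j] += x_i(t+1) * x_j(t), so connections point "forward" in time.

n, n_groups = 12, 4
groups = np.arange(n).reshape(n_groups, n // n_groups)  # groups of 3 neurons

# The sequence of activity patterns: group 0, then group 1, ...
seq = np.zeros((n_groups, n))
for t, g in enumerate(groups):
    seq[t, g] = 1

W = np.zeros((n, n))
for t in range(n_groups - 1):
    W += np.outer(seq[t + 1], seq[t])       # asymmetric: j-before-i only

def replay(cue, W, theta=0.5, steps=3):
    """Recall the stored sequence by activating its first group."""
    x = cue.copy()
    trajectory = [x]
    for _ in range(steps):
        x = (W @ x > theta).astype(float)   # activity flows forward only
        trajectory.append(x)
    return trajectory

for x in replay(seq[0], W):
    print(x.astype(int))                    # groups light up in stored order
```

Because no backward connections are ever strengthened, cueing the last group produces nothing, mirroring the difficulty of running a practiced sequence in reverse.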
In this hypothetical example the strong connections all point in the same direction, so that the interactions in the network are asymmetric. The network is unable to generate the same sequence in the opposite order because of the asymmetry. This is consistent with the fact that many well-practiced motor sequences are difficult to carry out in reverse order.
It seems plausible that symmetric and asymmetric connections could be important for storing different types of associations. The memory of a telephone number is sequential and asymmetric; remembering it forward is much easier than trying to remember it backward. But other types of associations are more symmetric: A face may evoke a name as easily as a name evokes a face.
In associative memory networks long-term memories are stored through modifications of synaptic strengths that last for long times. But such Hebbian-style long-term potentiation is just one type of modification of biological synapses (see Chapter 67). The strengths of biological synapses can also change more transiently. Diverse types of transient modification—short-term facilitation, short-term depression, augmentation, and so on—have been classified by their time scales and other properties. It is natural to speculate that these different forms of synaptic alteration could be used by the brain for a whole spectrum of memory processes with different time scales. In this view a firm distinction between short-term and long-term memory is too simplistic.
Previously we explained that a cell assembly supports both short-term and long-term memories. Does this mean that one can only maintain short-term memories of items that have already been stored as long-term memories? Everyday experience suggests that one can briefly maintain a short-term memory of a telephone number that has never been encountered before, for which no long-term memory exists. This issue could perhaps be solved if, as suggested above, the sharp distinction between short-term and long-term memory were replaced by a spectrum of memory processes with different time scales.
As noted in the introduction, the idea that persistent activity is maintained by cell assemblies, or more generally by synaptic loops, dates back to Hebb and Lorente de Nó. Many researchers have developed detailed and realistic simulations based on this idea, simulations that are more convincing than the simple one shown in Figure E–5. But demonstrating empirically that a specific example of persistent activity in the brain is caused by synaptic loops has been difficult. Persistent activity could also arise as an intrinsic property of the biophysics of single neurons, rather than an emergent property of networks. Hence the biological mechanisms of persistent activity are still controversial.