© Charles Chandler
Our understanding of the human brain has made great strides recently, thanks to studies of artificial neural nets. We now know that the brain uses massively parallel distributed processing (MPDP) to concurrently evaluate broadband inputs, and to generate multiple concurrent outputs. Artificial MPDP nets have been created that seem to process information in the same way as brains: once trained, they can generate correct responses to well-known inputs with a minimum of processing, and their performance degrades gracefully with novelty. The training is accomplished with error back-propagation algorithms, which compare the trial output to the desired output, and use the discrepancy between the two to re-weight each synapse, repeating the process until the discrepancies have been reduced to a negligible level.
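As a concrete illustration, here is a minimal sketch of error back-propagation on a tiny two-layer network with sigmoid units, trained on the XOR problem. The architecture, learning rate, and epoch count are illustrative choices, not anything specified above; the point is just the loop of comparing trial output to desired output and re-weighting each synapse.

```python
import numpy as np

# Minimal back-propagation sketch: a 2-2-1 network with sigmoid units,
# trained on XOR by gradient descent on squared error. All sizes and
# the learning rate are arbitrary illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # desired outputs

W1 = rng.normal(0, 1, (2, 2))   # input -> hidden synaptic weights
b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1))   # hidden -> output synaptic weights
b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)   # error before training

for _ in range(5000):
    h, out = forward(X)
    err = out - y                         # compare trial to desired output
    d_out = err * out * (1 - out)         # propagate the discrepancy back...
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out                # ...and re-weight each synapse
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)   # error after training
print(loss1 < loss0)               # training reduces the discrepancy
```

Note that nothing in this loop is "intelligent" except the externally supplied table of desired outputs, which is exactly the ingredient that the following paragraphs argue is missing in vivo.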
While error back-prop works nicely for training artificial nets, it doesn't transfer so easily to natural learning, because it requires something more intelligent than the neural net being trained to alter the synaptic weights and evaluate the results. For this to occur in vivo, the brain would have to already know the correct response to every input pattern, such that it could re-weight its own synapses in the interest of high-performance MPDP responses. But this would beg the question of how the brain learned the correct response in the first place.
Quite obviously, the environment can inform us of discrepancies between trial outputs and desired outputs, through the introduction of biologic insult. But the environment has no way of weighting specific synapses to train the net: the sensations that ultimately prove useful in recognizing the conditions that call for a particular response are frequently very different from the sensations that result from an incorrect response. For example, seeing an incoming volleyball calls for getting one's hands into the trajectory, to send the ball back the other way, but failure to do so might result in getting hit on the head. How does that teach one to raise one's hands? It's easy to assume the conclusion: that the pain of getting hit on the head punishes the player, while a successful volley is the reward that cements the learned response. But if we trace the neural pathways, we can't find the direct connections among the visual information, the pain, and the correct motor-control linkages. In fact, the painful stimuli should ingrain the wrong response: any pain should enhance the present activity pattern, which is the very pattern that brought on the pain.
It's possible that we should stop focusing on how the brain learns, and instead take a closer look at the circumstances in which it forgets. Selective information loss might then leave the learning in place by default, instead of on purpose.
Consider the well-known phenomenon that people who have been through a traumatic event, such as a car accident, frequently experience mild retrograde amnesia, and cannot remember details from just before the event. From a Functionalist standpoint we might consider this to be odd. The brain's job is to prevent injury to the body, so it naturally craves information related to injuries in the interest of avoiding them.
So is this amnesia an evolutionary oversight?
It's possible that it isn't an oversight, but just an unfortunate side-effect of a more important process. A traumatic event causes the release of adrenalin, which, in addition to facilitating enhanced muscular responses, also constricts blood capillaries, which reduces blood loss in the event of injury. Both of these effects are important in life-threatening events. But the constriction of blood capillaries might also have the unintended side-effect of depriving the brain of the nutrients that it needs to transform present neural activity patterns into long-term memory, hence the retrograde amnesia. So we lose the information that we could have (or should have?) retained.
Yet there is another interpretation that can be considered. It's possible that this is not an unintended side-effect, but rather, an integral part of the design of our brains. What if this is Mother Nature's way of weighting synapses such that we learn the right way to do things?
Back to the Functionalist view, if Mother Nature is teaching us, how is she doing it? We instantly answer that she rewards us when we are good, and punishes us when we are bad. This looks like it will be easy to translate into physiological terms. Rewards are pleasurable sensations, and punishments are painful ones, where "pain" is easy to define simply as an overload of one or more senses.
But the translation from physiological terms down to the neural instantiation is not so easy. When Mother Nature punishes us, how does she isolate the offending synapses? Overloading the active connections isn't going to work — that would make those connections stronger, and we would be more likely to display the same behavior pattern in the future. Mechanistically speaking, without there being any aspect of the brain that "knows what to do" when any particular type of information is sent into it, it's tough to define how we seek pleasure and avoid pain.
Now we should consider the possibility that retrograde amnesia is the instrument of learning. This is Mother Nature's way of saying, "That neural activity pattern led to injury, so just forget the whole thing." So the teacher doesn't punish students for getting wrong answers — she simply erases the answers, and tells the students to keep trying until they get it right. In other words, knowing how to wreck a car isn't useful information, so it's better if such knowledge is simply forgotten.
This establishes how we might fail to retain activity patterns that led to bad consequences. But how do we remember thoughts that were successful?
It's possible that we remember all thoughts by default, and it is merely those that were immediately succeeded by traumatic events that we forget.
In time, we "learn" to seek pleasure and avoid pain. But in the mechanical sense, we don't do this on purpose. Analogously, if we were to toss small objects onto the highway, we might observe that they seem to seek resting positions in the nearest ditch. But that's just because objects still on the highway continue to get run over, causing random shifts in position. Only if by chance they land in the ditch do they no longer get run over. Such objects don't seek the ditches on purpose — this is just a generalization of their behavior that inserts intentionality into a mechanical process.
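The highway analogy lends itself to a toy simulation, in which objects still on the road receive random shifts, and the ditches are simply the positions where no further shifts occur. The road width, number of objects, and step count below are arbitrary illustrative numbers.

```python
import random

# Toy version of the highway analogy: objects on the road keep getting
# "run over" (random shifts in position); an object that happens to land
# in a ditch is never disturbed again. No object seeks the ditch, yet
# all of them end up there. All parameters are arbitrary.

random.seed(1)
ROAD = range(1, 10)      # positions 1..9 are on the road; 0 and 10 are ditches
positions = [5] * 20     # 20 objects, all starting mid-road

for _ in range(10000):
    positions = [
        p + random.choice([-1, 1]) if p in ROAD else p   # only road objects move
        for p in positions
    ]

in_ditch = [p not in ROAD for p in positions]
print(all(in_ditch))
```

The ditch acts as what a mathematician would call an absorbing state: purely random perturbation plus selective retention produces what looks, from the outside, like purposeful seeking.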
It's also possible that the event doesn't actually have to be traumatic, to the point that adrenalin is released, in order to disrupt the transition of activity patterns to long-term memories.
A simple study of short-term memory once illustrated this point. People participating in the study were given sequences of numbers to memorize, and were asked to write down the numbers after 30 seconds. In 50% of the trials, a bell was rung after 15 seconds, and in those trials, successful recollection was substantially lower. Clearly the sound of the bell disrupted the activity pattern in short-term memory. But this might be more than just a quirk of memory: it might be the crucial capability of the brain to retain information that did not lead to unexpected consequences, and to forfeit anything that did, thereby "learning" the correct ways of doing things.
In the most basic sense, it is the job of the brain to protect the body from injury. For this purpose, the body has been wired up with all kinds of sensors to detect biological insult. The brain is then left to engage in whatever activity it wants, so long as it does not result in one of the sensors being triggered. If that happens, the current activity pattern is disrupted, and therefore is not migrated to long-term memory.
Learning and Forgetting
Let's consider how this mechanism might work in the real world. For example, consider a person walking through the woods. For the sake of simplicity, let's think of this person as having no existing knowledge of any kind, with a perfectly empty brain, but with a generic ability to learn, and with a hard-wired ambulatory behavior that has been activated.
It will only be a matter of time before the person runs into a tree. This will result in biologic insult to one or more parts of the body. This means that the activity pattern involving seeing a tree take up more and more of the field of view, combined with the proprioceptive information generated by continuing to walk towards it, will not transition to long-term memory.
After bouncing off of the tree and walking away in another direction, he/she will run into another tree, with the same effect. This process could be repeated an indeterminate number of times.
Eventually, by chance the person might approach another tree, but might slip on a rock and regain balance facing in a slightly different direction, in which case the tree will be spared the sudden impact of a body driven by an empty brain. Now this activity pattern will be retained, because biologic insult did not disrupt it.
The next time the person approaches a tree, he/she will then turn slightly, even without a rock to trip on, and thereby avoid hitting the tree. This is not because he/she "knows" that this is the best thing to do, or that pain would result if he/she did not. Rather, the person simply engages in the behavior dictated by the most robust stored activity pattern, given those stimuli.
Note that this constitutes one-trial learning. All that is required for learning to occur is for the pattern to somehow become active, and for biologic insult not to follow. In essence, the thesis is that all trials result in learning, except the ones that failed.
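The scenario above can be sketched as a toy simulation in which every active stimulus-response pattern is retained by default, unless biologic insult immediately follows. The names ("tree_ahead", "walk", "turn") and the probability of slipping on a rock are illustrative assumptions, not anything from the text.

```python
import random

# Sketch of the walk-in-the-woods scenario: all trials result in
# learning, except the ones that failed. Patterns followed by biologic
# insult are disrupted and never stored; everything else is retained
# in one trial. All names and probabilities are illustrative.

random.seed(4)
long_term_memory = {}    # retained stimulus -> response patterns
collisions = 0

def respond(stimulus):
    if stimulus in long_term_memory:      # most robust stored pattern wins
        return long_term_memory[stimulus]
    if random.random() < 0.1:             # chance event: slip on a rock
        return "turn"
    return "walk"                         # hard-wired ambulatory behavior

for encounter in range(200):
    stimulus = "tree_ahead"
    response = respond(stimulus)
    insult = (response == "walk")         # walking into the tree hurts
    if insult:
        collisions += 1                   # pattern disrupted: not stored
    else:
        long_term_memory[stimulus] = response   # one-trial learning by default

print(long_term_memory)   # the accidental successful response is retained
print(collisions)         # collisions stop after the first lucky slip
```

Note that the agent never evaluates pain or pleasure; the "correct" behavior emerges solely because the incorrect patterns fail to survive long enough to be stored.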
For this mechanism to be instantiated in vivo, it is merely necessary for there to be some sort of delay in the formation of long-term memories. In other words, whatever you're thinking right now, if you're still thinking it two minutes from now, your current thoughts can start the migration to long-term memory.
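This delay can be sketched as a simple consolidation rule: a thought migrates to long-term memory only if it persists, undisrupted, past some threshold. The two-minute figure comes from the text above; the one-tick-per-second granularity and the "DISRUPT" marker are illustrative assumptions.

```python
# Sketch of the "delay before consolidation" idea: a thought reaches
# long-term memory only after persisting, undisrupted, for a minimum
# time. Two minutes per the text; one tick per second is an assumption.

CONSOLIDATION_TICKS = 120

def consolidate(events):
    """events: a sequence of thoughts, with "DISRUPT" marking a
    disruptive event (a bell, or biologic insult). Returns the set of
    thoughts that made it into long-term memory."""
    long_term = set()
    current, age = None, 0
    for event in events:
        if event == "DISRUPT" or event != current:
            # a disruption or a change of thought resets the clock
            current, age = (None, 0) if event == "DISRUPT" else (event, 1)
        else:
            age += 1
        if current is not None and age >= CONSOLIDATION_TICKS:
            long_term.add(current)
    return long_term

# A thought held long enough is retained; one cut short by a
# disruption is lost, even one tick before the threshold.
kept = consolidate(["safe path"] * 120)
lost = consolidate(["risky path"] * 119 + ["DISRUPT"])
print(kept)   # {'safe path'}
print(lost)   # set()
```

Under this rule, forgetting requires no bookkeeping at all: disruption simply resets the clock, and whatever survives the delay is kept by default.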
There are any number of mechanisms that could do this, and many researchers believe that there are a number of different types of long-term memory, all with different biological instantiations, and all operating on different time scales. Dilation of blood capillaries, build-up of nutrients within the cell bodies because of demand from the axons, increased amounts of nutrients within the axons because of demand from the synapses, and increased capability for nitric oxide production have all been cited as mechanisms for instantiating long-term memory. All of these mechanisms can easily be seen as sensitive to the amount of time an activity pattern was present.