Models for Neural Spike Computation and Cognition
Venue
CreateSpace, Seattle, WA (2011), 142 pp.
Publication Year
2011
Authors
David H. Staelin, Carl H. Staelin
Abstract
This monograph addresses the intertwined mathematical, neurological, and cognitive
mysteries of the brain. It first evaluates the mathematical performance limits of
simple spiking neuron models that both learn and later recognize complex spike
excitation patterns in less than one second without using training signals unique
to each pattern. Simulations validate these models, while theoretical expressions
validate their simpler performance parameters. These single-neuron models are then
qualitatively related to the training and performance of multi-layer neural
networks that may have significant feedback. The advantages of feedback are then
qualitatively explained and related to a model for cognition. This model is then
compared to observed mild hallucinations that arguably include accelerated
time-reversed video memories.

The learning mechanism for these binary threshold-firing “cognon” neurons is spike-timing-dependent plasticity (STDP), which depends only on whether the spike excitation pattern presented to a given single “learning-ready” neuron within a period of milliseconds causes that neuron to fire, or “spike”. The “false-alarm” probability that a trained neuron will fire for a random unlearned pattern can be made almost arbitrarily low by reducing the number of patterns learned by each neuron. Models that do and do not use spike timing within patterns are evaluated.
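To make the mechanism concrete, here is a minimal Python sketch of such a neuron. All names and parameter values (the Cognon class, strong_gain, the two thresholds) are illustrative assumptions rather than the monograph's code; the sketch keeps only the behavior stated above: binary synapse strengths, threshold firing, one-shot strengthening of whichever synapses were active when a learning-ready neuron fired, and a firing threshold that is raised between learning and recall.

    import numpy as np

    rng = np.random.default_rng(0)

    class Cognon:
        """Toy binary threshold-firing neuron (hypothetical sketch).

        Synapse strengths are binary: weak (1.0) or strong (G). The
        neuron fires when the summed strength of the synapses excited
        by a pattern reaches its current threshold.
        """

        def __init__(self, n_synapses, strong_gain, h_learn, h_recall):
            self.w = np.ones(n_synapses)   # all synapses start weak
            self.G = strong_gain           # strong/weak strength ratio
            self.h_learn = h_learn         # threshold while learning-ready
            self.h_recall = h_recall       # higher threshold after training

        def excitation(self, active):
            return self.w[active].sum()

        def learn(self, active):
            # STDP-like rule: if the pattern fires the learning-ready
            # neuron, strengthen exactly the synapses that were active.
            # No training signal unique to the pattern is used.
            if self.excitation(active) >= self.h_learn:
                self.w[active] = self.G

        def recognizes(self, active):
            return self.excitation(active) >= self.h_recall

    k, n = 10, 400                          # active inputs per pattern, synapses
    neuron = Cognon(n, strong_gain=2.0, h_learn=k, h_recall=1.6 * k)

    patterns = [rng.choice(n, size=k, replace=False) for _ in range(8)]
    for p in patterns:
        neuron.learn(p)                     # single brief exposure

    hits = sum(neuron.recognizes(p) for p in patterns)
    novel = [rng.choice(n, size=k, replace=False) for _ in range(10_000)]
    fa_rate = sum(neuron.recognizes(p) for p in novel) / 10_000
    print(f"learned patterns recognized: {hits}/8")   # 8/8 here
    print(f"false-alarm rate: {fa_rate:.4f}")         # ~0.004 here

In this toy setting every learned pattern is recognized (all of its synapses became strong), while a novel pattern false-alarms only if it happens to excite many strengthened synapses; that rate falls as fewer patterns are learned, as stated above.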
A Shannon mutual information metric (recoverable bits/neuron) is derived for binary neuron models characterized only by their probability of learning a random input excitation pattern presented during learning readiness and by their false-alarm probability for random unlearned patterns. Based on simulations, the upper bound on recoverable information is ~0.1 bits per neuron for optimized neuron parameters and training. This metric assumes that: 1) each neural spike indicates only that the input excitation pattern responsible for it (a pattern lasts less than the time between consecutive patterns, say 30 milliseconds) had probably been seen earlier while that neuron was “learning ready”, and 2) information is stored in the binary synapse strengths. This focus on recallable learned information differs from most prior metrics, such as pattern-classification performance and metrics relying on pattern-specific training signals other than the normal input spikes. The metric also shows that neuron models can recall useful Shannon information only if their probability of firing randomly is lowered between learning and recall.
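The abstract does not give the metric's closed form, but since the model is characterized only by two probabilities, one plausible reading (assumed here, not taken from the book) treats each recall attempt as a binary channel from “this pattern was learned earlier” to “the neuron fires”, whose mutual information takes a few lines to compute. The per-presentation figure this sketch returns is not directly comparable to the ~0.1 bits per neuron above, which bounds what a neuron recovers across all the patterns it stores.

    from math import log2

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def bits_per_presentation(p_learned, p_hit, p_false_alarm):
        """Mutual information I(W; F) between W = 'this pattern was
        learned earlier' (prior p_learned) and F = 'the neuron fires',
        with P(F|W=1) = p_hit and P(F|W=0) = p_false_alarm."""
        p_fire = p_learned * p_hit + (1 - p_learned) * p_false_alarm
        cond = p_learned * h2(p_hit) + (1 - p_learned) * h2(p_false_alarm)
        return h2(p_fire) - cond

    # A trained toy cognon: always fires for learned patterns, rarely
    # for novel ones (values from the sketch above).
    print(bits_per_presentation(0.5, 1.0, 0.004))   # ≈ 0.98 bits

Lowering the false-alarm probability, which is what raising the firing threshold between learning and recall accomplishes, is exactly what pushes this quantity toward its maximum of one bit per presentation.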
Also discussed are: 1) how rich feedback might permit improved noise immunity, learning and recognition of pattern sequences, data compression, associative or content-addressable memory, and development of communication links through white matter; 2) extensions of cognon models that use spike timing, dendrite compartments, and new learning mechanisms in addition to STDP; 3) simulations showing how simple optimized neuron models can have optimum numbers of binary synapses between 200 and 10,000, depending on neural parameters; and 4) simulation results for parameters such as average bits/spike, bits/neuron/second, the maximum number of learnable patterns, optimum ratios between the strengths of weak and strong synapses, and false-alarm probabilities.
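Finally, for the toy model sketched earlier (again an illustrative assumption, not the monograph's derivation), the false-alarm probabilities of item 4 can be checked analytically: a random pattern fires a trained neuron only if it happens to hit enough strengthened synapses, a binomial tail event.

    from math import ceil, comb

    def false_alarm_prob(k, frac_strong, gain, h_recall):
        """P(random k-input pattern fires the trained toy neuron).
        Its excitation is k + (gain - 1) * S, where S ~ Binomial(k,
        frac_strong) counts the strong synapses the pattern hits."""
        s_min = max(0, ceil((h_recall - k) / (gain - 1)))
        return sum(
            comb(k, s) * frac_strong**s * (1 - frac_strong)**(k - s)
            for s in range(s_min, k + 1)
        )

    # Eight learned patterns of 10 inputs strengthen ~73 of 400
    # synapses (allowing for overlap), so frac_strong ≈ 0.18.
    print(false_alarm_prob(k=10, frac_strong=0.18, gain=2.0, h_recall=16))
    # ≈ 0.004, close to the simulated rate above

Reducing the number of patterns learned lowers frac_strong, which drives this tail probability down steeply, consistent with the claim that the false-alarm probability can be made almost arbitrarily low.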
