The role of single neurons in information processing

Christof Koch, Idan Segev
2000 Nature Neuroscience  
Neurons as point-like, linear threshold units

In 1943, McCulloch and Pitts 1 showed how a collection of simple, interconnected neuron-like units could process information. For reasons of analytical tractability, their view of neuronal processing is a stark, simple one. All synaptic inputs converge onto a single compartment ('point neuron'). Each synapse is modeled by a positive number, its synaptic weight. The activity of each presynaptic fiber (originally assumed to be either on or off) is multiplied by its associated synaptic weight and summed over all inputs. This sum is then compared against a threshold. If the threshold is exceeded, and if no inhibitory unit is active, the neuron generates a spike and sends it on to its postsynaptic targets. Otherwise, the cell remains quiet. McCulloch and Pitts proved that a sufficiently large number of these simple logical devices, wired together in an appropriate manner, are capable of universal computation. That is, a network of such 'linear threshold' units with the appropriate synaptic weights can perform any computation that a digital computer can, though not as rapidly or as conveniently.

Linear threshold model 'neurons' come in many flavors. The earliest originated in the early years of the 20th century, well before the biophysics of action potentials was understood, as 'integrate-and-fire' neurons. The state of the neuron is given by the voltage across a capacitance, with each synaptic input adding to or subtracting from the charge accumulating across the membrane (Fig. 1a). The voltage trajectory executes a random walk, depending on the nature of the synaptic input, until a fixed voltage threshold is reached. At this time, a unit pulse is generated and the voltage is reset; that is, all charge is instantaneously removed from the capacitance. The output of this integrate-and-fire neuron consists of a train of asynchronous pulses. In a 'leaky' integrate-and-fire unit, an ohmic resistance is added in parallel to the capacitance, accounting for the loss of synaptic charge through the resistor and, consequently, the decay of the synaptic input with time (Fig. 1b). In a 'rate neuron', the discrete output pulses are replaced by a continuous activation function, g(V), that increases monotonically as a function of the activity, V. The stronger the excitatory input, the higher the output rate, f, of the neuron (Fig. 1c).
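The leaky integrate-and-fire dynamics described above can be sketched in a few lines of code. This is an illustrative simulation, not from the paper; the parameter values (membrane time constant τ = 20 ms, threshold 15 mV, etc.) are assumptions chosen only to make the behavior visible. Between spikes the voltage follows dV/dt = (-V + R·I)/τ; crossing threshold emits a pulse and resets the charge.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=20e-3, r_m=1e7,
                 v_thresh=0.015, v_reset=0.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    input_current : array of synaptic current samples (amperes)
    dt            : time step (s); tau = R*C is the membrane time constant
    Returns the voltage trace and a list of spike times.
    """
    v = v_reset
    spikes, trace = [], []
    for t, i_syn in enumerate(input_current):
        # Forward-Euler step of dV/dt = (-V + R*I) / tau:
        # the -V term is the 'leak' through the ohmic resistance.
        v += dt * (-v + r_m * i_syn) / tau
        if v >= v_thresh:            # threshold crossing ...
            spikes.append(t * dt)    # ... emit a unit pulse
            v = v_reset              # ... and remove all charge
        trace.append(v)
    return np.array(trace), spikes
```

With a suprathreshold constant current (steady-state R·I above threshold) the unit fires a regular spike train; with a subthreshold current the voltage saturates below threshold and the cell stays quiet, exactly the behavior of Fig. 1b.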
The activation function g(V) is sometimes identified with the cell's frequency-current relationship (f-I curve). Conceptually, such a graded neuron encodes the inverse of the interspike interval of a population of spiking cells; that is, its activity represents the average firing frequency or rate of this group.

Common to these single-cell models and their close relatives studied by neural network researchers 2 is, first, linear preprocessing of synaptic inputs, implying that inputs do not interact with each other in any 'interesting' way, and, second, a threshold computation (Fig. 1d). The computational power of these networks resides in the nonlinearity provided by the threshold. This is related to a logical AND operation: the threshold can be adjusted such that the cell will fire only if two inputs are simultaneously active. Put enough such units together and anything that is computable can be computed by such a network. Networks containing hundreds or thousands of such units, which utterly neglect the geometry of real neurons, are commonly used in pattern recognition (for example, to predict credit card fraud) and at most brokerage houses today.

Passive dendritic trees enhance computational power

If neurons can be reduced to a single compartment, why aren't all neurons spherical? We still do not fully understand the diversity of dendritic morphology in terms of its functional consequences (Fig. 2). It is likely that the sizes of the axonal and dendritic trees relate to wiring economy, that is, the principle that because space is at a premium, wiring length must be kept to a minimum 3. Another constraint surely must be the cell's ability to receive input from specific recipient zones (for example, reaching all the way into superficial layers). Yet it is unclear to what extent these considerations explain the strikingly different dendritic morphologies (Fig. 2), once size is accounted for.
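The threshold-as-AND idea from the linear threshold discussion above can be made concrete in a few lines. This is a minimal sketch of a McCulloch-Pitts-style unit, with illustrative unit weights; only the weighted sum and threshold come from the model described in the text.

```python
def threshold_unit(inputs, weights, theta):
    """Linear threshold unit: linear preprocessing (weighted sum)
    followed by a threshold nonlinearity.  Fires (returns 1) iff
    the weighted input sum reaches the threshold theta."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0

def logical_and(x1, x2):
    # With two unit weights and theta = 2, the unit fires only when
    # both inputs are simultaneously active: a logical AND.
    return threshold_unit([x1, x2], [1, 1], theta=2)
```

Lowering the threshold to 1 turns the same unit into a logical OR; it is this adjustable nonlinearity, replicated across many units, that gives threshold networks their computational power.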
A lack of theoretical concepts as well as experimental tools for investigating dendrites led, with a few exceptions 4, to their relative neglect for most of the 1950s and 1960s. A new era of dendritic physiology was ushered in by the widespread adoption of intracellular recordings and brain slices 5 and by the development of the linear cable theory of dendrites by Rall 6. Linear cable theory treats dendrites as core-conductor cables, surrounded by a passive membrane modeled by an ohmic conductance in parallel with a capacitance. When synaptic input is applied, such a cable acts as a low-pass filter, removing high temporal frequencies from the voltage signal.

Neurons carry out the many operations that extract meaningful information from sensory receptor arrays at the organism's periphery and translate these into action, imagery and memory. Within today's dominant computational paradigm, these operations, involving synapses, membrane ionic channels and changes in membrane potential, are thought of as steps in an algorithm or as computations. The role of neurons in these computations has evolved conceptually from that of a simple integrator of synaptic inputs until a threshold is reached and an output pulse is initiated, to a much more sophisticated processor with mixed analog-digital logic and highly adaptive synaptic elements.
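The low-pass filtering that cable theory attributes to a passive membrane can be illustrated with a single RC compartment. This is a sketch under stated assumptions, not Rall's full cable model: one ohmic conductance in parallel with a capacitance, driven by a unit-amplitude sinusoidal input in normalized units, with an illustrative time constant τ = 20 ms. A first-order RC filter attenuates a sinusoid of frequency f by the factor 1/sqrt(1 + (2πfτ)²), so fast inputs are strongly damped while slow ones pass nearly unchanged.

```python
import numpy as np

def rc_response_amplitude(freq_hz, tau=20e-3, dt=1e-4, cycles=10):
    """Drive a passive RC membrane patch, dV/dt = (-V + I(t)) / tau
    (normalized units), with a unit-amplitude sine at freq_hz and
    return the steady-state output amplitude."""
    t = np.arange(0, cycles / freq_hz, dt)
    i_in = np.sin(2 * np.pi * freq_hz * t)
    v = 0.0
    out = np.empty_like(t)
    for k, i_k in enumerate(i_in):
        v += dt * (-v + i_k) / tau   # forward-Euler RC integration
        out[k] = v
    # Measure amplitude on the second half, after initial transients.
    return out[len(out) // 2:].max()
```

With τ = 20 ms, a 2 Hz input passes with ~97% of its amplitude, while a 200 Hz input is attenuated to a few percent: the membrane removes high temporal frequencies, just as the text describes.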
doi:10.1038/81444 pmid:11127834