There’s a veritable electrical storm going on inside your head: 100 billion brain cells firing electrical signals at one another are responsible for your every thought and action.
A neuron, a.k.a. nerve cell or brain cell, is a specialized cell that sends an electrical impulse out along fibers connecting it, in turn, to other neurons. These guys are the wires of your very own personal circuitry.
What follows is a simplistic description of the general features of nerve cells, whether they are found sending signals from your senses to your brain, from your brain to your muscles, or to and from other nerve cells. It’s this last class, the kind that people most likely mean when they say “neurons,” that we are most interested in here. (All nerve cells, however, share a common basic design.)
Note
Don’t for a second think that the general structure we’re describing here is the end of the story. The elegance and complexity of neuron design is staggering: a complex interplay of structure and noise; of electricity, chemistry, and biology; of spatial and dynamic interactions that result in the kind of information processing that cannot be defined using simple rules. 1 For just a glimpse at the complexity of neuron structure, you may want to start with the free chapter on nerve cells from the textbook Molecular Cell Biology by Harvey Lodish, Arnold Berk, Lawrence S. Zipursky, Paul Matsudaira, David Baltimore, and James Darnell, published by W. H. Freeman ( http://www.ncbi.nlm.nih.gov/books/bv.fcgi?call=bv.View..ShowSection&rid=mcb.chapter.6074 ), but any advanced cell biology or neuroscience textbook will give you an idea of what you’re missing here.
The neuron is made up of a cell body with long offshoots—these can be very long (the whole length of the neck, for some neurons in the giraffe, for example) or very short (reaching only to a neighboring cell, scant millimeters away). Signals pass only one way along a neuron. The offshoots receiving incoming transmissions are called dendrites. The outgoing end, which is typically longer, is called the axon. In most cases there’s only one long axon, which branches at the tip as it connects to other neurons—up to 10,000 of them. The junction where the axon of one cell meets the dendrites of another is called the synapse. Chemicals called neurotransmitters carry the signal across the synaptic gap. Each neuron releases only one kind of neurotransmitter, although it may have receptors for many different kinds. The arrival of the electrical signal at the end of the axon triggers the release of stores of the neurotransmitter, which move across the gap (it’s very small, after all) and bind to receptor sites on the other side—places on the neuron that are tuned to join with this specific type of chemical.
Whereas the signal between neurons uses neurotransmitters, internally it’s electrical. The electrical signal is sent along the neuron in the form of an action potential. 2 This is what we mean when we say impulses, signals, spikes, or refer, in brain imaging speak, to the firing or lighting up of brain areas (because this is what activity looks like on the pictures that are made). Action potentials are the fundamental unit of information in the brain, the universal currency of the neural market.
The two most important computational features of action potentials are as follows:
They are binary. A neuron either fires or doesn’t, and each time it fires, the signal is the same size (there’s more on this later). Binary signals stop the message from becoming diluted as neurons communicate with one another over distances that are massive compared to the molecular scale on which they operate.
Neurons encode information in the rate at which they send signals, not in the size of the signals they send. Because the signals are always the same size, information is carried by the frequency at which they are sent: a stronger signal is indicated by a higher frequency of spikes, not by larger individual spikes. This is called rate coding.
Together these two features mean that the real language of the brain is not just a matter of spikes (signals sent by neurons), but spikes in time.
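You can see both features in a toy simulation. The following is a minimal sketch of a leaky integrate-and-fire neuron (a standard simplification, not a biophysical model; the threshold, leak, and input values are illustrative): the membrane potential accumulates input and leaks away, and every spike is identical—only the spike count changes with input strength.

```python
def simulate_lif(input_current, steps=1000, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The potential accumulates input and leaks each time step. Crossing
    the threshold produces an all-or-nothing spike, then a reset.
    """
    potential = 0.0
    spikes = 0
    for _ in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spikes += 1       # every spike is the same "size"
            potential = 0.0   # reset after firing
    return spikes

weak = simulate_lif(0.05)    # sub-threshold: never fires at all
medium = simulate_lif(0.15)  # fires at some rate
strong = simulate_lif(0.3)   # fires more often, not "harder"
```

A stronger input produces more spikes, never bigger ones (rate coding), and an input too weak to reach threshold produces no spikes at all (the binary, all-or-nothing property).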
Whether or not a new spike, or impulse, is generated by the postsynaptic neuron (the one on the receiving side of the synapse) is affected by the following interwoven factors:
The amount of neurotransmitter released
The interaction with other neurotransmitters released by other neurons
How near the releasing neurons are to one another, and how closely together in time their signals arrive
In what order they release their neurotransmitters
All of this short-term information is affected by any previous history of interaction between these two neurons—times when one has caused the other to fire and times when both have fired simultaneously for independent reasons—which slightly adjusts the probability of the interaction happening again. 3
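The idea that shared firing history adjusts future interaction is often summarized as Hebbian learning ("cells that fire together wire together"). Here is a deliberately simple sketch of that idea—the function name, learning rate, and decay constant are all illustrative, and real synaptic plasticity is far more subtle:

```python
def hebbian_update(weight, pre_fired, post_fired,
                   learning_rate=0.05, decay=0.001):
    """Toy Hebbian rule: coincident firing strengthens the connection
    between two neurons; otherwise the connection slowly decays."""
    if pre_fired and post_fired:
        weight += learning_rate * (1.0 - weight)  # strengthen, capped at 1
    else:
        weight -= decay * weight                  # slow forgetting
    return weight

w = 0.5  # starting connection strength
for pre, post in [(True, True), (True, True), (True, False)]:
    w = hebbian_update(w, pre, post)
# Two coincident firings leave w above its starting value, making it
# slightly more likely the first neuron will trigger the second again.
```

The point of the sketch is only the direction of the effect: history of joint firing biases the probability of future interaction.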
Note
Spikes happen pretty often: up to once every 2 milliseconds at the maximum rate of the fastest-firing cells (in the auditory system; see Chapter 4 for more on that). Although the average rate of firing is responsive to the information being represented and transmitted in the brain, the actual timing of individual spikes is unpredictable. The brain seems to have evolved an internal communication system that has noise added to only one aspect of the information it transmits—the timing, but not the size of the signals transmitted. Noise is a property of any biological system, so it’s not surprising that it persists even in our most complex organ. It could very well also be the case that the noise [Hack #33] is playing some useful role in the information processing the brain does.
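This combination—a meaningful average rate with unpredictable individual spike times—is commonly modeled as a Poisson process. A minimal sketch (the function and its parameters are illustrative, not a claim about real neural statistics):

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=None):
    """Generate spike times with a fixed average rate but random
    timing: in each small time step, a spike occurs with probability
    rate_hz * dt, independently of all other steps."""
    rng = random.Random(seed)
    return [step * dt
            for step in range(int(duration_s / dt))
            if rng.random() < rate_hz * dt]

train_a = poisson_spike_train(100, 1.0, seed=1)  # ~100 spikes/second
train_b = poisson_spike_train(100, 1.0, seed=2)  # same rate, new timing
```

Both trains carry the same "message" (roughly 100 spikes per second), but the individual spike times differ from run to run—noise in the timing, never in the size, of the spikes.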
After the neurotransmitter has carried (or not carried, as the case may be) the signal across the synaptic gap, it’s then broken down by specialized enzymes and reabsorbed to be released again when the next signal comes along. Many drugs work by affecting the rate and quantity of particular neurotransmitters released and the speed at which they are broken down and reabsorbed.
Hacks such as [Hack #11] and [Hack #26] show some of the other consequences for psychology of the brain doing its work with neurons. Two good introductions to how neurons combine on a large scale can be found at http://www.foresight.gov.uk, a British government Department of Trade and Industry project that aimed to get neuroscientists and computer scientists collaborating to review recent advances in their fields and summarize the implications for the development of artificial cognitive systems.
Gurney, K. N. (2001). Information processing in dendrites II. Information theoretic complexity. Neural Networks, 14, 1005–1022.
You can start finding out details of the delicate electrochemical dance that allows the transmission of these binary electrical signals on the Neuroscience for Kids site ( http://faculty.washington.edu/chudler/ap.html ), and The Brain from Top to Bottom project ( http://www.thebrain.mcgill.ca/flash/a/a_01/a_01_m/a_01_m_fon/a_01_m_fon.html ).
How neurons are born, develop, and die is another interesting story and one that we’re not covering here. These notes from the National Institutes of Health are a good introduction: http://www.ninds.nih.gov/disorders/brain_basics/ninds_neuron.htm.
Neurons actually make up less than a tenth of the cells in the brain. The other 90–98%, by number, are glial cells, which are involved in development and maintenance—the sysadmins of the brain. Recent research also suggests that they play more of a role in information processing than was previously thought. You can read about this in the cover story from the April 2004 edition of Scientific American (volume 290 #4), “The Other Half of the Brain.”