Neurons battle to a draw
By Kimberly Patch, Technology Research News
October 3, 2001
Locusts -- those flying grasshoppers that
periodically cause farmers distress by collecting into swarms and bingeing
on whole fields of crops -- learn a lot about their environments from
the olfactory information gathered by their antennas.
A group of scientists who are trying to understand exactly how smells
picked up by locust antennas are translated into neural signals have taken
a step toward understanding how neurons can store as much information
as they do.
Nature's advantage comes from the way neurons interact. In their winnerless
competition, no neuron gains an advantage over any other, which keeps
the system as a whole coordinated in the face of random signals, or noise.
The work promises to increase by several orders of magnitude the amount
of information that artificial neural networks
can work with, and the potential number of smells artificial noses can
distinguish. Artificial neural networks underpin computer
vision systems that recognize objects and faces and pattern
recognition software that sorts large amounts of scientific and financial
data.
Neurons work together in much the same way that a small number of letters
can form many different words, said Mikhail Rabinovich, a research scientist
at the University of California at San Diego. Rather than one or a group
of neurons simply representing a single smell, a group of neurons can
represent many smells depending on the order in which they're fired, he
said. "The sequences of the activity of these neurons is important."
Label three neurons blue, red and green. Instead of each neuron representing
a different smell for a total of three possible smells, they can work
together to represent 12 different smells. Neurons firing in the order
blue, red, green, for instance, could represent a rose smell, and the
same neurons fired in another sequence could represent a salmon smell,
Rabinovich said. Here's the math for three-digit sequences of three
separate letters, or neurons: 3 x 2 x 1 = 6 orderings, then x 2 again to
take into account the sequences that repeat digits, for a total of 12.
The true advantage of such a system becomes apparent when you do the math
for larger strings of letters or neurons. As few as 10 different
neurons can represent 7.25 million unique 10-digit combinations. (Here's
the math: 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2, doubled again for repeats,
comes to 7,257,600, or about 7.25 million.)
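For readers who want to check the arithmetic, here is a minimal Python sketch of the counting rule used above; the function name is invented for illustration, and the doubling-for-repeats step is the article's rule of thumb rather than a formula from the researchers' paper.

```python
from math import factorial

# The article's counting rule: n neurons firing in different orders give
# n! distinct sequences, doubled to account for sequences that repeat a neuron.
def sequence_capacity(n):
    return factorial(n) * 2

print(sequence_capacity(3))   # 12 -- the rose-vs-salmon example
print(sequence_capacity(10))  # 7257600 -- about 7.25 million
```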
By using the timing, or order in which neurons switch, the neural network
gains "a huge capacity," Rabinovich said.
The system is also robust, meaning it cannot be easily thrown off, because
this structure is inherently self-correcting, said Rabinovich. Neurons
are constantly switching, or oscillating on and off at about 1 hertz,
or cycle per second. Random signals, or noise, can throw the system off,
but because the system inherently dissipates the random signals, it stays
on track.
Neural networks are able to do this through stimulus-dependent winnerless
competition, said Rabinovich.
The same winnerless competition principle works to balance animal populations
that are competing for a territory or food. For example, if species A
eats species B, species B eats species C, and species C eats species A,
the three populations tend to stay balanced. This is because if species
A gets ahead in eating species B, there will be fewer of species B around
to keep species C in check, so species C will multiply and pare down
species A, which will in turn allow species B to rebound.
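That balance is easy to see in a toy simulation. The sketch below uses Lotka-Volterra-style competition equations of the general family the researchers draw on, but every growth and competition rate in it is invented for illustration, not taken from the paper.

```python
# Toy rock-paper-scissors population balance: each species is suppressed
# strongly by its predator and weakly by its prey, so every surge hands
# the advantage to the next species in the loop. All rates are invented.

def step(pops, dt=0.01, growth=1.0, strong=1.6, weak=0.6):
    a, b, c = pops
    da = a * (growth - a - strong * c - weak * b)  # C eats A
    db = b * (growth - b - strong * a - weak * c)  # A eats B
    dc = c * (growth - c - strong * b - weak * a)  # B eats C
    return (a + dt * da, b + dt * db, c + dt * dc)

pops = (0.6, 0.3, 0.1)  # start far from balance
for t in range(30001):
    if t % 5000 == 0:
        print("t=%5d  A=%.3f  B=%.3f  C=%.3f" % ((t,) + pops))
    pops = step(pops)
# No species wins outright: the populations keep cycling around
# the balanced state instead of collapsing to a single winner.
```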
Neurons work the same way because when a neuron fires, it can inhibit
the firing of another neuron. This sets up a competition among neurons,
which works to keep their timing coordinated in a way that can represent
useful information, said Rabinovich. Though the principle of winnerless
competition is not new, showing that such dynamics can represent the
sensory input that drives the neurons' cycling is, he said.
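A similar sketch, with the same caveats, reads those equations as firing rates in a small inhibitory network. Note one simplification: in this toy the stimulus only shifts each neuron's drive, which mostly changes the pace of a single cycle, whereas in the researchers' stimulus-dependent model the input also reshapes the effective connections, so different smells can produce genuinely different sequences.

```python
# Toy winnerless competition among three "neurons": each neuron's firing
# rate is damped by the others through inhibitory weights, producing a
# repeating sequence of leaders. Weights and stimuli are invented.

def firing_sequence(stimulus, weights, steps=30000, dt=0.005):
    rates = [0.12, 0.10, 0.08]  # slightly unequal, to break symmetry
    order = []
    for _ in range(steps):
        new_rates = []
        for i in range(3):
            inhibition = sum(weights[i][j] * rates[j] for j in range(3))
            new_rates.append(rates[i] * (1 + dt * (stimulus[i] - inhibition)))
        rates = new_rates
        leader = rates.index(max(rates))
        if not order or order[-1] != leader:
            order.append(leader)  # record each change of leading neuron
    return order[-6:]             # the repeating part of the sequence

# Cyclic inhibition: neuron 2 suppresses 0, 0 suppresses 1, 1 suppresses 2.
W = [[1.0, 0.6, 1.6],
     [1.6, 1.0, 0.6],
     [0.6, 1.6, 1.0]]

print(firing_sequence([1.0, 1.0, 1.0], W))  # one stimulus, one rhythm
print(firing_sequence([1.1, 1.0, 0.9], W))  # another stimulus, another rhythm
```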
Eventually, scientists could use this information to build better artificial
neural networks, whose abilities have historically fallen far short of
those of the biological kind. "Maybe we can use this idea to build artificial
[systems] that demonstrate the same abilities -- robustness against noise,
huge capacity, reproducibility and sensitivity," said Rabinovich.
One possibility is building an olfactory sensor -- an artificial smart
nose "that is able to represent and recognize a huge number of different
stimuli," he said. It can also be used to "organize brains which are able
to control the behavior of robots in complex environments," said Rabinovich.
The work is "very impressive," said Sylvian Ray, a professor of computer
science and electrical and computer engineering at the University of Illinois.
"It appears to add another dimension to neural network architectures,
which potentially permit a huge increase in the number of states... representable
by a modest size collection of neurons," said Ray. What's new are the
specific equations to describe the kinetics, and the correlation between
the mathematical model and the neural activity in the locust antennal lobe,
he said.
The work may help solve the mystery of why biological neural networks
can do so much more than artificial networks that try to copy their structures,
Ray added. "There is a possibility that this idea is a clue to the way
in which biological networks can represent such an astounding number of
states per neuron," he said.
It will be at least five years before the work can be applied practically,
said Rabinovich.
Rabinovich's research colleagues were Aleksandr Volkovskii from the University
of California at San Diego, P. Lecanda from the Madrid Autonoma University
and the Madrid Institute of Materials Science, both in Spain, Ramon
Huerta from UC San Diego and the Madrid Autonoma University, Henry Abarbanel
from UC San Diego and the Scripps Institution of Oceanography, and Gilles
Laurent from the California Institute of Technology.
They published the research in the August 6, 2001 issue of the journal
Physical Review Letters. The research was funded by the Department of
Energy (DOE), the National Science Foundation (NSF) and the National Institutes
of Health (NIH).
Timeline: 5 years
Funding: Government
TRN Categories: Neural Networks
Story Type: News
Related Elements: Technical paper, "Dynamical Encoding by
Networks of Competing Neuron Groups: Winnerless Competition," Physical
Review Letters, August 6, 2001.