Robots learn soft touch

By Kimberly Patch, Technology Research News

Learning to bounce a ball on a tennis racket may seem easy, but when you dig into the details of the process in order to, say, build a robot that can learn to do it too, it becomes obvious that the task is fairly complicated.

The mathematics involved in bouncing a ball on a racket is nonlinear, meaning the problem cannot be solved simply. The bouncing ball is actually a chaotic system that has a lot in common with complicated systems like fluid flow and weather.

It doesn't take a mathematician to learn the task, however. Give a human instructions to bounce a ball on a racket and before long the human will be controlling the ball in an efficient, stable way without having to know why.

A team of researchers from Pennsylvania State University, the University of Southern California, and the University of São Paulo is looking into the principles that allow humans to control complicated systems so easily. These principles could eventually lead to more independent robots and better artificial limbs.

According to the researchers, the secret to the particular task of bouncing a ball is to slightly slow down the upward motion of the racket just before it hits the ball.

Humans do this instinctively even though it would be a little more efficient to hit the ball when the racket is moving upward at its greatest velocity, or speed. Slowing the racket down, however, makes the system more stable, which in turn makes it easier to control.

"The acceleration of the racket needs to be within a range of values in order to ensure dynamic stability that then does not require explicit correction," said Dagmar Sternad, an assistant professor of kinesiology at Pennsylvania State University.

Sternad likens the more stable system to a ball resting at the bottom of a funnel. If the ball is disturbed it will tend to fall back to the stable point at the bottom of the funnel.
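
As a rough illustration of that idea (a minimal sketch, not the researchers' model), the behavior can be simulated in a few lines of Python: a point ball bounces on a sinusoidally moving racket, and the program reports the racket's acceleration at recent impacts along with how much the bounce heights vary. The restitution coefficient, racket amplitude and frequency below are assumed values chosen only for illustration.

import numpy as np

G = 9.81                  # gravity (m/s^2)
ALPHA = 0.8               # coefficient of restitution -- an assumed value
A, FREQ = 0.03, 2.0       # racket amplitude (m) and frequency (Hz) -- assumed
OMEGA = 2 * np.pi * FREQ

def racket_pos(t):
    return A * np.sin(OMEGA * t)

def racket_vel(t):
    return A * OMEGA * np.cos(OMEGA * t)

def racket_acc(t):
    return -A * OMEGA ** 2 * np.sin(OMEGA * t)

def simulate(drop_height, n_bounces=60, dt=1e-4):
    """Drop the ball onto the oscillating racket; track apex heights and impacts."""
    t, y, v = 0.0, racket_pos(0.0) + drop_height, 0.0
    apices, impact_acc = [], []
    for _ in range(n_bounces):
        # free flight until the ball reaches the racket while approaching it
        while not (y <= racket_pos(t) and v < racket_vel(t)):
            v -= G * dt
            y += v * dt
            t += dt
        # impact: reflect the ball's velocity relative to the moving racket
        v = (1 + ALPHA) * racket_vel(t) - ALPHA * v
        y = racket_pos(t)
        impact_acc.append(racket_acc(t))
        apices.append(y + v ** 2 / (2 * G))      # apex of the next flight
    return np.array(apices), np.array(impact_acc)

# Two nearby starting heights: if the rhythm that emerges has impacts while the
# racket decelerates, the late apices should stay close without any correction.
for h0 in (0.30, 0.33):
    apices, acc = simulate(h0)
    print(f"drop {h0:.2f} m: mean racket accel at last 10 impacts {acc[-10:].mean():+.2f} m/s^2, "
          f"apex spread over last 10 bounces {apices[-10:].std():.4f} m")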

Although conventional wisdom says vision is our dominant sense, the research showed that people find the right way to bounce a ball on a tennis racket largely by feel, said Sternad. "Conventionally we always look at visual information and think that that is the dominant source of information. In this case it is not. Dynamic stability is better exploited when we have haptic, or kinesthetic information as opposed to visual information," she said.

Following the dynamic stability principle allows the human body to perform a task more efficiently than it could using feedback control, in which the brain and muscles communicate via the nervous system and the brain directs every muscle movement based on what happened the instant before.

The dynamic stability principle is a useful discovery, said Andy Ruina, professor of theoretical and applied mechanics, and mechanical and aerospace engineering at Cornell University. "[It] shows her that people do actually use these no-feedback mechanisms, even while they do use feedback. It seems from their experiments that people use a higher level feedback -- learning -- to find motions that require less explicit short-term feedback," he said.

Although the most common way to program a robot is to use feedback to control all its movements in two-millisecond increments, an approach growing in popularity and supported by the research is to "only provide control over things you care about and not worry about tracking things in time in detail," said Ruina. "Who cares, for example, where [the] racket is when it is not in contact with [the] ball," he said.
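
A rough sketch of that contrast (an illustration with made-up numbers, not the researchers' controllers) is given below: a conventional tracking controller computes a correction at every 2-millisecond tick, while an event-based strategy only picks the racket velocity needed at the next impact, using the standard restitution rule for a ball striking a moving surface, and leaves everything in between alone.

import numpy as np

DT = 0.002      # a 2-millisecond control tick, as mentioned above
G = 9.81

def pd_tracking_step(x, v, x_ref, v_ref, kp=400.0, kd=40.0):
    """Dense feedback: command an acceleration every tick to follow a reference."""
    return kp * (x_ref - x) + kd * (v_ref - v)

def event_based_command(ball_apex, target_apex, restitution=0.8):
    """Sparse control: choose one racket velocity for the next impact only.

    Uses the impact rule v_out = (1 + e) * u - e * v_in (e = restitution) and
    simple ballistic relations; the racket's path between impacts is not tracked.
    """
    v_in = -np.sqrt(2 * G * ball_apex)       # ball velocity arriving at the racket
    v_out = np.sqrt(2 * G * target_apex)     # launch speed needed for the desired apex
    return (v_out + restitution * v_in) / (1 + restitution)

# The dense controller produces a new command 500 times per second ...
print(pd_tracking_step(x=0.00, v=0.0, x_ref=0.01, v_ref=0.1))
# ... while the event-based one issues a single number per bounce.
print(event_based_command(ball_apex=0.30, target_apex=0.25))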

Sternad's research partner Stefan Schaal is taking the research in that direction. "Our key interest is really what algorithms the brain uses [to create] human motor control... and how this can help to make artificial systems more intelligent," said Schaal, an assistant professor of computer science and neuroscience at the University of Southern California. "People have looked at [bouncing a ball on a racket] before in robotics and they have... developed really complicated control systems which proved to be stable... but they were very inefficient in comparison to the one we found," he said.

This is because controlling a robot using feedback is a large task when the movement is nonlinear and the possibilities nearly endless. Using the dynamic stability principle opens up "a totally different way of programming. It turns out that in these types of systems the environment can drive your movements -- it becomes a little bit like a reflex," Schaal said.

Using the dynamic stability principle he and Sternad extracted from human behavior, Schaal has given his humanoid robots the ability to more efficiently "bounce balls, juggle balls [and] synchronize drumming to an external drummer," he said.

The researchers are ultimately looking to identify the underlying mechanisms that allow humans to find and use many basic movement principles, said Schaal. "We believe that human movements [in general] are very simple building blocks which can be described mathematically and if you put them together you can build very complicated movements," he said.

Schaal likened the process to a conductor directing every person in an orchestra, but relying on individual players to know the details of playing their instruments.
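
One way to picture the building-block idea (a toy construction, not the researchers' mathematical model) is to add up a few simple, smooth velocity pulses and integrate the result, which already yields a fairly complicated hand path:

import numpy as np

def primitive_velocity(t, start, duration, amplitude):
    """A single smooth building block: a bell-shaped velocity pulse."""
    s = (t - start) / duration
    return amplitude * np.where((s > 0) & (s < 1), np.sin(np.pi * s) ** 2, 0.0)

t = np.linspace(0.0, 2.0, 2001)
# Three simple blocks, staggered in time and scaled differently ...
v = (primitive_velocity(t, 0.0, 0.6, 0.3)
     + primitive_velocity(t, 0.5, 0.8, -0.2)
     + primitive_velocity(t, 1.1, 0.7, 0.4))
# ... integrated together they produce one more complicated position trajectory.
x = np.cumsum(v) * (t[1] - t[0])
print(f"end position: {x[-1]:.3f} m, peak speed: {np.abs(v).max():.3f} m/s")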

Understanding the way humans achieve complicated movements will eventually allow machines to "become more autonomous and create their own movements -- basically become more humanlike," he said.

The same principles could also eventually lead to better artificial limbs, said Schaal. "You might even be able to use it in neural prosthetics [where] an impaired arm might be revived by interfacing computers to the musculature and then creating natural movement based on this kind of theory," he said.

The researchers are currently setting up a virtual environment in order to study what happens when people get conflicting kinesthetic and visual information. That environment will also allow them to more closely examine how people deal with changes in the system, said Sternad. "We want to... see how we deal with perturbation and how does it correct itself," she said.

They are also looking into exactly how the human muscle system carries out the principles. This is a difficult problem because the muscle system can carry out a given task in many different ways, said Sternad. "We're looking at the organization of the arm -- that is, how do our arm, shoulder, elbow and wrist joints and [the] many muscles that move [them] get organized in their infinitely many possibilities in order to obtain this particular variable," she said.
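
The scale of that problem shows up even in a stripped-down model (an illustration with made-up link lengths, not the researchers' setup): a planar arm with just three joints can put the racket at the same height in hundreds of distinct ways found by random search alone.

import numpy as np

L1, L2, L3 = 0.30, 0.25, 0.15    # upper arm, forearm and hand lengths (m), assumed

def racket_height(shoulder, elbow, wrist):
    """Vertical endpoint position of a planar 3-joint arm (angles in radians)."""
    return (L1 * np.sin(shoulder)
            + L2 * np.sin(shoulder + elbow)
            + L3 * np.sin(shoulder + elbow + wrist))

target = 0.40                     # desired racket height in metres
rng = np.random.default_rng(0)
hits = 0
for _ in range(100_000):
    q = rng.uniform(-np.pi / 2, np.pi / 2, size=3)   # a random joint configuration
    if abs(racket_height(*q) - target) < 1e-3:       # reaches (nearly) the same height
        hits += 1
print(f"{hits} of 100,000 random joint configurations reach height {target} m")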

Sternad's and Schaal's research colleagues were Marcos Duarte of the University of São Paulo and Hiromu Katsumata of Pennsylvania State University. They published the research in the January, 2001 issue of Physical Review E. The research was funded by the National Science Foundation (NSF).

Timeline:   Now
Funding:   Government
TRN Categories:   Chaotic Systems, Fuzzy Logic and Probabilistic Reasoning; Robotics
Story Type:   News
Related Elements:  Technical paper, "Dynamics of a Bouncing Ball in Human Performance," Physical Review E, January, 2001.



