Physics tackles processor problem

By Kimberly Patch, Technology Research News
November 19/26, 2003

Knowing ahead of time how difficult a task will be is useful information that could change the way you focus your effort.

Researchers from Otto-von-Guericke University in Germany have devised a difficulty-prediction scheme that could lead to more efficient parallel computing. Parallel computers speed work on certain very large problems by breaking them into smaller tasks and parceling the pieces out to different computer processors.

Distributing and coordinating these smaller tasks so that the problem is solved as quickly as possible is the classic challenge known as the multiprocessor scheduling problem. It is one of a class of problems whose possible solutions grow exponentially in relation to the number of variables -- in this case the number of tasks.

In a practical sense, "this means that there is no way to find an optimum schedule for more than, say, 100 tasks," said Stephan Mertens, an assistant professor of theoretical physics at Otto-von-Guericke University.
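To make that blowup concrete, here is a minimal Python sketch (an illustration, not the researchers' code) that finds an optimal schedule by brute force; the number of task-to-processor assignments it must examine grows exponentially with the number of tasks:

    from itertools import product

    def optimal_makespan(task_times, n_processors):
        # Try every assignment of tasks to processors and keep the
        # smallest makespan (the finish time of the busiest processor).
        # There are n_processors ** len(task_times) assignments -- the
        # exponential growth that makes large instances intractable.
        best = float("inf")
        for assignment in product(range(n_processors), repeat=len(task_times)):
            loads = [0] * n_processors
            for duration, proc in zip(task_times, assignment):
                loads[proc] += duration
            best = min(best, max(loads))
        return best

    print(optimal_makespan([4, 7, 2, 5, 9, 3], 2))  # prints 15; only tiny instances are feasible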

The researchers used a classic theoretical physics technique to map the multiprocessor scheduling problem in order to pin down the precise critical boundary that separates easy from hard problems. This boundary marks a phase transition, similar to the temperature boundary between water and ice.

The central idea was to map the problem for N processors as a random walk on an N-dimensional lattice, said Mertens. A random walk is akin to distributing people across a grid of streets by having them randomly turn whenever they come to an intersection.
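The article leaves the mapping abstract; as a rough illustration of the idea in the simplest two-processor case (a simplification, not the paper's full N-dimensional construction), assigning each task to one processor or the other is like taking a step right or left whose length is the task's running time, and the walker's final distance from the origin is the load imbalance of that schedule:

    import random

    def walk_imbalance(task_times):
        # Two-processor slice of the random-walk picture: each task is
        # a step of length task_time, right or left depending on which
        # processor receives it.  The final distance from the origin is
        # the load imbalance of that random schedule.
        position = 0
        for duration in task_times:
            position += duration if random.random() < 0.5 else -duration
        return abs(position)

    tasks = [random.randint(1, 100) for _ in range(50)]
    print(walk_imbalance(tasks))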

Averages over random sets of problems can capture the typical, and therefore relevant, features of an optimization problem, said Mertens. The researchers generated scheduling problems whose task running times were drawn at random, and then looked at the problems' typical properties.

They found that two distinctly different phases characterized harder versus easier problems. It boiled down to volatility. The difference between the two types of problems is generally the variation in the running times of all the tasks, said Mertens. "If they vary a lot, you're in the hard phase. If they do not vary that much, you're in the easy phase." This makes sense because problems with comparable running times are easier to schedule than problems with more variable running times, he said.
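As a rough illustration of that criterion, the ratio of the spread of the running times to their mean gives a quick guess at the phase; the 0.5 cutoff below is an arbitrary stand-in, not the critical value the researchers computed:

    from statistics import mean, pstdev

    def likely_phase(task_times, cutoff=0.5):
        # Coefficient of variation of the task running times: a large
        # spread relative to the mean suggests the hard phase.  The
        # cutoff is illustrative only.
        cv = pstdev(task_times) / mean(task_times)
        return "hard" if cv > cutoff else "easy"

    print(likely_phase([10, 11, 9, 10, 12]))   # comparable times -> "easy"
    print(likely_phase([1, 200, 3, 7, 150]))   # wildly varying times -> "hard"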

Hard-phase problems usually required exponential running time, said Mertens. "The worst-case is predominant." In the easy phase, however, the running time usually grew at the same rate as the number of processors rather than exponentially, he said. In this phase the worst-case possibility of exponential running time growth "seems to be an exotic exception."

The method pinpoints the transition between the phases, making it easier to identify where to expect hard problems and where to expect easy ones, and most practical problems turn out to be easy ones, said Mertens. "Our method may prove useful [for finding] faster algorithms for the easy phase," he said. The method offers only a little practical help in finding an optimal schedule, however, he added.
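The article does not say which algorithms Mertens has in mind, but one classic polynomial-time heuristic that does well when running times are comparable is longest-processing-time-first scheduling, sketched here for comparison:

    import heapq

    def lpt_schedule(task_times, n_processors):
        # Longest-processing-time-first: hand out tasks in order of
        # decreasing running time, always to the currently least-loaded
        # processor.  Polynomial time, and typically near-optimal when
        # the running times are comparable (the easy phase).
        loads = [0] * n_processors   # a list of zeros is a valid min-heap
        for duration in sorted(task_times, reverse=True):
            heapq.heappush(loads, heapq.heappop(loads) + duration)
        return max(loads)            # the makespan of the greedy schedule

    print(lpt_schedule([4, 7, 2, 5, 9, 3], 2))  # prints 15, matching brute force on this instance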

In general, it is difficult to map computing problems onto physics-based models, said David Nicol, a professor of computer science at Dartmouth College. The difficult part is keeping the details intact, he said. Often "some important detail gets glossed over in the effort to make the mapping work," he said.

If the Otto-von-Guericke method turns out to be useful, it could be applied to practical systems within three years, said Mertens.

The researchers' ultimate goal is to completely characterize the hard phase in order to learn why these problems are so extremely difficult and why algorithms that use approximations work so badly on this type of problem, Mertens said.

Mertens' research colleagues were Heiko Bauke and Andreas Engel of Otto-von-Guericke University. The work appeared in the April 17, 2003 issue of Physical Review Letters. The research was funded by Otto-von-Guericke University and the German National Science Foundation.

Timeline:   < 3 years
Funding:   Government, University
TRN Categories:  Parallel Systems and Supercomputing; Physics
Story Type:   News
Related Elements:  Technical paper, "Phase Transition in Multiprocessor Scheduling," Physical Review Letters, April 17, 2003.



