Software makes data really sing

By Kimberly Patch, Technology Research News
July 19, 2000

Sound carries a lot of information. When you hear a loud, tinkling crash, you know a lot of energy has been dissipated in the form of glass breaking, and you probably have a vague idea of how much glass has broken and a good idea of the direction you should go to witness the aftermath.

Two mathematics researchers have teamed up with a music professor to develop software that exploits our auditory capabilities to increase the bandwidth of the human-computer interface.

The researchers' data sonification software translates data into 25 sound parameters, including loudness, pitch, location, panning and depth, to allow listeners to pick out patterns and changes in the data by ear. "Data sonification is the analogue of data visualization -- you use your ears instead of your eyes ... to hear fine details of data that come out of an experiment or a computer simulation," said Hans Kaper, senior mathematician in the mathematics and computer science division at Argonne National Laboratory.
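To make the approach concrete, here is a minimal parameter-mapping sketch. It is hypothetical -- the actual mappings in the researchers' software are not detailed here -- and simply scales each data value into three of the 25 parameters: a pitch, a loudness level and a stereo position. An outlier in the data then stands out as a sudden jump in pitch, loudness and apparent location.

    # Hypothetical parameter-mapping sketch; the names, ranges and choice of
    # three parameters are illustrative assumptions, not the Argonne/Illinois
    # software's actual mappings.

    def normalize(values):
        """Scale values linearly into [0, 1]."""
        lo, hi = min(values), max(values)
        span = hi - lo or 1.0
        return [(v - lo) / span for v in values]

    def map_to_sound(values):
        """Map each data point to (pitch_hz, loudness_db, pan)."""
        events = []
        for x in normalize(values):
            pitch = 220.0 * 2 ** (x * 3)   # 220 Hz up to 3 octaves higher
            loudness = 40.0 + 40.0 * x     # 40-80 dB
            pan = 2.0 * x - 1.0            # -1 (left) to +1 (right)
            events.append((pitch, loudness, pan))
        return events

    if __name__ == "__main__":
        data = [0.10, 0.12, 0.11, 0.90, 0.13]  # an outlier the ear should catch
        for pitch, db, pan in map_to_sound(data):
            print(f"pitch={pitch:7.1f} Hz  loudness={db:4.1f} dB  pan={pan:+.2f}")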

While sound is not a substitute for visualization, it is a complement that may allow users to absorb and process certain types of data more quickly. "Sound is excellent if you're looking for irregularities or transitions from irregularities to regularities," Kaper said.

The researchers have put the system to two scientific tests. In the first, they listened to data comparing the energy state of a molecule before and after a chemical reaction. "You could hear that in various locations things ... had changed. If you listened carefully you were able to pinpoint where this happened -- essentially what bond," said Kaper. In the second, the researchers were able to pick out by sound the locations of different types of microscopic structures in a superconductor, Kaper said. For example, a modulation, or wobble, in tone represented a defective, triangular structure, he said.
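That kind of wobble cue is simple to reproduce. The sketch below illustrates the general idea rather than the researchers' software: it renders a steady tone for a regular structure and the same tone with a slow amplitude modulation, or tremolo, for a defective one, and writes both to a WAV file.

    # Illustrative only: steady tone = regular site, wobbling tone = defect.
    import math, struct, wave

    RATE = 44100

    def tone(freq, seconds, wobble_hz=0.0):
        """Sine tone; if wobble_hz > 0, add a tremolo at that rate."""
        samples = []
        for n in range(int(RATE * seconds)):
            t = n / RATE
            amp = 0.5
            if wobble_hz:
                amp *= 0.6 + 0.4 * math.sin(2 * math.pi * wobble_hz * t)
            samples.append(amp * math.sin(2 * math.pi * freq * t))
        return samples

    def write_wav(path, samples):
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(b"".join(
                struct.pack("<h", int(s * 32767)) for s in samples))

    # A regular site, then a defective one: the 6 Hz wobble is easy to hear.
    write_wav("sites.wav", tone(440, 1.0) + tone(440, 1.0, wobble_hz=6.0))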

The software includes equal-loudness tables, which correct for the fact that the ear does not perceive loudness in proportion to a sound's amplitude, and hears combined sounds as less loud than their sum. It is well known that it takes more than two violins, for example, to sound twice as loud as one violin, said Kaper. In addition, the number of instruments it takes to sound twice as loud as one depends on pitch -- the lower the note, the more instruments you need. This made mapping loudness more complicated than, for instance, simply doubling the amplitude of a sound to represent a doubling of the temperature of a chemical reaction.
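The standard sone scale of psychoacoustics gives a rough sense of the correction involved, assuming (as this sketch does) a mid-frequency tone at moderate level: perceived loudness doubles for roughly every 10-decibel increase in level, so doubling a data value should add about 10 dB rather than doubling the amplitude.

    # Sone-scale approximation (an assumption; the researchers' equal-loudness
    # tables are more detailed and frequency-dependent).
    import math

    def db_for_loudness_ratio(ratio):
        """Decibel increase needed to multiply perceived loudness by `ratio`."""
        return 10.0 * math.log2(ratio)

    def amplitude_factor(db):
        """Linear amplitude multiplier corresponding to a level change in dB."""
        return 10.0 ** (db / 20.0)

    db = db_for_loudness_ratio(2.0)   # ~10 dB to sound twice as loud
    print(f"{db:.1f} dB -> amplitude x {amplitude_factor(db):.2f}")
    # prints: 10.0 dB -> amplitude x 3.16

The result, an amplitude factor of about 3.16 rather than 2, shows how far the perceptual scale departs from a naive linear mapping; the pitch dependence Kaper describes would refine this further for low and high notes.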

Although the researchers have proved the concept, the question is whether people are willing to listen.

Alex Waibel, a principal research scientist at Carnegie Mellon University, cautioned that sound is difficult for an untrained listener to interpret. "Sound schemes would have to be learned," he said.

The researchers agree. "As we found out, it's difficult to hear what's going on in a sound if you haven't been trained. People aren't used to using their ears," Kaper said. To help integrate the data with visuals, and to let people pinpoint a feature without replaying the sound over and over, the researchers added a visual representation of the sound to the software.

They used Cave Automatic Virtual Environment (CAVE) software to map the 25 audio attributes to 25 visual attributes of spheres, which represent sounds. For example, loudness could determine a sphere's size, reverberation its color, and modulation its rotation.
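A toy version of that sound-to-sphere mapping might look like the following; the attribute names, ranges and color scheme are illustrative assumptions rather than the CAVE software's actual interface.

    # Hypothetical mapping of three of the 25 audio attributes to a sphere:
    # loudness -> radius, reverberation -> color, modulation rate -> spin.
    from dataclasses import dataclass

    @dataclass
    class Sphere:
        radius: float           # driven by loudness
        color: tuple            # RGB in [0, 1], driven by reverberation
        spin_deg_per_s: float   # driven by modulation (wobble) rate

    def sphere_for_sound(loudness_db, reverb, wobble_hz):
        radius = 0.1 + 0.9 * (loudness_db - 40.0) / 40.0   # 40-80 dB -> 0.1-1.0
        color = (reverb, 0.2, 1.0 - reverb)                # dry = blue, wet = red
        return Sphere(radius, color, 60.0 * wobble_hz)

    print(sphere_for_sound(loudness_db=70.0, reverb=0.3, wobble_hz=6.0))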

Data sonification is likely to be useful in situations that involve large amounts of data, such as computational simulations or seismic data. "If you do numerical simulations, you get overwhelmed by the data. Any way that can help you detect subtle changes in your data [is] useful," said Kaper.

Eventually the technology could be applied to situations where a person's eyes are busy, like surgery, Kaper said. "If you can translate vital data into sound... the surgeon can hear the sound while he is operating."

The researchers will likely make the software freely available on the Web near the end of the year, said Kaper. "We demonstrated the principle and right now we're working on a better version" of the software, he said.

Kaper's research colleagues are Sever Tipei, professor of music theory and composition at the University of Illinois School of Music, and Elizabeth Weibel, currently a graduate student at the College of William and Mary.

The project is a collaboration between the Mathematics and Computer Science Division of Argonne National Laboratory and the University of Illinois at Urbana-Champaign's Computer Music Project, and is funded by both institutions.

Timeline:   < 1 year
Funding:   Government; University
TRN Categories:   Data Representation; Human-Computer Interaction
Story Type:   News
Related Elements:   Technical paper "Data Sonification and Sound Visualization" posted in the Computing Research Repository; Technical paper "Sonification Report: Status of the Field and Research Agenda" posted on the International Community for Auditory Display site; Sound clips



