Virtual mic carries concert hall sound over 'net

By Kimberly Patch, Technology Research News
November 29, 2000

Today's best audio systems use multiple channels to surround listeners with sound. It sure sounds great, but has some practical drawbacks.

First, in order to get true surround sound, rather than just the same sound coming at you from several directions, the original recording process must use a separate microphone for each channel. Second, if you want to stream all those channels over the Internet, you need a whole lot of bandwidth -- about a megabyte per second for each channel of uncompressed sound, or 200 kilobytes per second compressed.
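Those per-channel figures make the arithmetic easy to check. A quick back-of-envelope sketch using the article's rates and some illustrative channel counts (the variable names and counts here are ours, not the researchers'):

```python
# Rough bandwidth arithmetic using the article's per-channel figures.
UNCOMPRESSED_BYTES_PER_SEC = 1_000_000   # ~1 megabyte per second per channel
COMPRESSED_BYTES_PER_SEC = 200_000       # ~200 kilobytes per second per channel

for channels in (2, 6, 10):              # stereo, 5.1-style surround, a 10-mic hall map
    uncompressed = channels * UNCOMPRESSED_BYTES_PER_SEC / 1e6
    compressed = channels * COMPRESSED_BYTES_PER_SEC / 1e3
    print(f"{channels:2d} channels: {uncompressed:4.1f} MB/s uncompressed, "
          f"{compressed:5.0f} KB/s compressed")
```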

Researchers from the University of Southern California (USC) have developed a filtering system that addresses both problems and also allows older recordings to be recast as multichannel sound.

The Virtual Microphone technology allows the researchers to map a concert hall once, recording sound from 10 or 20 microphones set around the hall, and then adjust any recording to match what it would have sounded like if it had been recorded through those microphones.

"We've taken 1948 recordings and converted them and it's pretty amazing, the hall opens up around you," said Chris Kyriakakis, assistant professor of electrical engineering at USC.

The problem sounds easier than it is. "If you're trying to model what happens to the acoustics inside a large hall, the problem is that you can't do it for all possible frequencies and wavelengths, it's too complicated," said Kyriakakis.

Instead, the researchers used reference measurements, a signal processing algorithm and some careful listening by ear to find the differences between the sound picked up by a standard front mic and the sound picked up by the other mics scattered throughout the hall, then built filters that alter a recording accordingly.

Sound from a microphone in the back, for instance "has been modified by every surface in the room and every delay, and early reflections and late reflections. We don't know what they are, but they're in the signal. Our filter modifies the front signal to sound like the back signal, without having to solve all the equations," he said.

To make the filters, the researchers run each microphone through an algorithm based on adaptive filter theory. "The recording in front... is my reference, and a recording in the back of the same music is the goal I am trying to reach. And then I just let the filter iterate and say, 'keep changing this front thing until it sounds like the back thing,'" he said.
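The article doesn't name the exact algorithm beyond "adaptive filter theory," but a least-mean-squares (LMS) adaptive filter is the textbook form of the idea: keep nudging the filter's coefficients until the filtered front signal matches the back-mic reference. A minimal sketch under that assumption (the signals, tap count and step size are invented for illustration):

```python
import numpy as np

def lms_fit(front, back, num_taps=64, mu=0.01):
    """Adapt an FIR filter so that filtering `front` approximates `back` (LMS)."""
    w = np.zeros(num_taps)                        # filter coefficients, start at zero
    for n in range(num_taps - 1, len(front)):
        x = front[n - num_taps + 1:n + 1][::-1]   # current and recent input samples
        e = back[n] - w @ x                       # error against the back-mic "goal"
        w += mu * e * x                           # "keep changing this front thing..."
    return w

# Toy check: the "back" mic hears a quieter copy of the front signal, 10 samples later.
rng = np.random.default_rng(0)
front = rng.standard_normal(20_000)
back = 0.5 * np.concatenate([np.zeros(10), front[:-10]])
w = lms_fit(front, back)
print(int(np.argmax(np.abs(w))), round(float(w.max()), 2))   # about 10 and 0.5
```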

The process takes about eight hours on an 800 MHz computer. "When it's done we listen to it, and then we compare it to the real thing, and say 'it's not quite there,' and then we have to make some intelligent guesses as to why. Then we make some tweaks," to the algorithm's parameters, and run the sound through again. Each channel, or microphone, takes three or four cycles and several days before it is complete. Once all 10 or so microphones are done, the filters for that hall are finished and can be used on any recording.

The algorithm makes four major types of changes to the sound to essentially throw it to a virtual microphone. These are the cues the human brain uses to determine where sound is coming from: early reflections, reverberation, high-frequency attenuation and height.

One of the first clues to how the structure of the room affects sound is early reflections, which bounce off sidewalls and give us a sense of width, said Kyriakakis.
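As an illustration, the early-reflection cue can be mimicked by mixing a few delayed, attenuated copies of the direct sound into the signal, the way sidewall bounces would (the delays and gains below are invented, not measured from any hall):

```python
import numpy as np

def add_early_reflections(direct, reflections, sample_rate=44100):
    """Add delayed, attenuated copies of `direct`, one per (delay_seconds, gain) pair."""
    out = direct.copy()
    for delay_s, gain in reflections:
        d = int(delay_s * sample_rate)                 # delay in samples
        out[d:] += gain * direct[:len(direct) - d]     # the sidewall "echo"
    return out

# Hypothetical bounces arriving 12, 23 and 31 milliseconds after the direct sound.
impulse = np.zeros(44100); impulse[0] = 1.0
wet = add_early_reflections(impulse, [(0.012, 0.4), (0.023, 0.3), (0.031, 0.25)])
print(np.nonzero(wet)[0])   # the direct impulse plus three early reflections
```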

Next, we gain clues about the size of the room from reverberation, a phenomenon that happens after sounds bounce around for a few thousandths of a second, he said. "Psychoacoustically it gives our brain a sense of distance. The ratio between the direct sound and the reverberant sound is what you can fool with to give the apparent distance of a source."
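That ratio is easy to sketch: the same signal can be made to seem nearer or farther simply by changing how much reverberant energy is mixed in with the direct sound. A crude illustration, using a synthetic decaying-noise tail rather than a real hall's reverberation:

```python
import numpy as np

def mix_with_reverb(direct, direct_to_reverb_db, sample_rate=8000, rt60=1.5):
    """Mix `direct` with a synthetic reverb tail; a lower ratio sounds more distant."""
    rng = np.random.default_rng(1)
    t = np.arange(len(direct)) / sample_rate
    tail = rng.standard_normal(len(direct)) * np.exp(-6.91 * t / rt60)   # ~60 dB decay
    tail *= np.max(np.abs(direct)) / (np.max(np.abs(tail)) + 1e-12)      # match peak level
    reverb_gain = 10 ** (-direct_to_reverb_db / 20)                      # ratio set in decibels
    return direct + reverb_gain * tail

dry = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)   # one second of a 440 Hz tone
near = mix_with_reverb(dry, direct_to_reverb_db=12)      # mostly direct: seems close
far = mix_with_reverb(dry, direct_to_reverb_db=0)        # equal reverb: seems distant
```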

As microphones get farther away from the source, high frequencies fade faster than low frequencies because the air absorbs them at a higher rate. This effect becomes important in a large concert hall, said Kyriakakis.
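That rolloff can be approximated with a simple low-pass filter whose cutoff drops as the virtual microphone moves back; the cutoff-versus-distance rule below is invented, not a measured air-absorption model:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sample_rate=44100):
    """First-order low-pass filter: a crude stand-in for air absorption."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    y, acc = np.zeros_like(x), 0.0
    for n, sample in enumerate(x):
        acc += alpha * (sample - acc)        # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        y[n] = acc
    return y

def distant_mic(x, distance_m, sample_rate=44100):
    # Hypothetical rule: halve a 16 kHz cutoff for every extra 20 meters of distance.
    cutoff_hz = 16000.0 * 0.5 ** (distance_m / 20.0)
    return one_pole_lowpass(x, cutoff_hz, sample_rate)

muffled = distant_mic(np.random.default_rng(2).standard_normal(44100), distance_m=40)
```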

The sense of height is also very important, said Kyriakakis. "It sounds a little strange because of course no instruments are out there, but we placed microphones 70 feet above the orchestra, hanging from the ceiling. When we play that back -- appropriately filtered over... loudspeakers that are hanging from the ceiling pointed down at the listener -- if you close your eyes the sense of depth of the room increases dramatically. It gives you the sense that the stage is bigger," he said.

The filters are also useful for transmitting multichannel audio over a network, because only one channel needs to be transmitted along with the filters; the other channels can be recreated on the receiving end. In addition, the filters for any given concert hall only need to be sent once.
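On the receiving end, re-creating a channel is a matter of running the single transmitted channel through that channel's stored filter, a convolution. A minimal sketch of that step (the filters below are placeholder impulse responses, not the researchers' measured ones):

```python
import numpy as np

def reconstruct_channels(front_channel, hall_filters):
    """Re-create each virtual-mic channel by filtering the one transmitted channel.

    hall_filters maps a channel name to FIR filter coefficients, which only
    ever need to be sent once per concert hall.
    """
    return {name: np.convolve(front_channel, taps)[:len(front_channel)]
            for name, taps in hall_filters.items()}

# Placeholder filters: a short delay-and-attenuate impulse response per channel.
filters = {"rear_left":  np.r_[np.zeros(800), 0.6],
           "rear_right": np.r_[np.zeros(950), 0.55]}
front = np.random.default_rng(2).standard_normal(44100)   # the one transmitted channel
channels = reconstruct_channels(front, filters)
print({name: sig.shape for name, sig in channels.items()})
```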

Currently, creating the multiple channels takes about as long as the original recording itself. For instance, a 10-minute recording would require a 10-minute delay before all the channels were ready to play. The researchers are working on a more efficient, real-time version of the software that would allow for streaming multichannel audio. "We're not there yet, but the idea would be you could have it as a plug-in so... as the music comes in it goes through these filters before going through your speakers," said Kyriakakis.
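A streaming plug-in would most likely filter the audio in small blocks as it arrives rather than waiting for the whole file. The overlap-add sketch below shows that general approach; the block size and filter are arbitrary, and this is not the researchers' implementation:

```python
import numpy as np

def stream_filter(blocks, taps):
    """Filter an incoming stream block by block (overlap-add), yielding output blocks."""
    tail = np.zeros(len(taps) - 1)            # convolution tail carried between blocks
    for block in blocks:
        y = np.convolve(block, taps)          # full convolution of this block
        y[:len(tail)] += tail                 # add the spill-over from the last block
        tail = y[len(block):]                 # save the new spill-over
        yield y[:len(block)]                  # emit exactly one block's worth of output

# Toy stream: a one-second signal delivered in 1024-sample blocks through a 65-tap filter.
rng = np.random.default_rng(3)
signal, taps = rng.standard_normal(43 * 1024), rng.standard_normal(65) / 65
streamed = np.concatenate(list(stream_filter(np.split(signal, 43), taps)))
assert np.allclose(streamed, np.convolve(signal, taps)[:len(signal)])
```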

Further out, the researchers plan to tackle the problem of using the filters on a recording that wasn't originally recorded in that concert hall. "The difficulty is that it involves one additional, difficult step of completely removing the acoustics of the existing hall from the recording," said Kyriakakis.

"It's a nice application," said Angelos Katsaggelos, professor of electrical and computer engineering at Northwestern University. "It's a challenging problem just being able from one file to generate the sound as if it were coming from different directions or recorded from different mics," he said.

"From a practical point of view... there are products out in the market like audio DVD players that support multichannel recordings but [recording] with 10 and 16 mics... is hard and not done routinely," said Katsaggelos. In addition, "it is a contribution in sync with the direction of major developments in the area of multimedia processing and immersive reality," he said.

A professional version of the software for creating multichannel recordings could be technically feasible within six months, said Kyriakakis. It will take a year to 18 months to produce a real-time version of the filters, he said.

Kyriakakis' colleague in the research is Athanasios Mouchtaris. They presented their results at the International Conference on Multimedia in New York in July, 2000. The research is funded by the National Science Foundation (NSF).

Timeline:   6 months; 1-1 1/2 years
Funding:   Government
TRN Categories:   Signal Processing; Applied Computing
Story Type:   News
Related Elements:   Audio Clip 1; Audio Clip 2; Audio Clip 3; Technical paper, "Virtual Microphones for Multichannel Audio Applications," presented at the International Conference on Multimedia, July, 2000 in New York



