Does heavy volume smooth Net traffic?
By Kimberly Patch, Technology Research News
August 15, 2001
It's difficult to figure out the best way
to distribute traffic, whether you're talking about where to build a road
to alleviate rush-hour congestion or what type of Internet connection
is best. It is complicated enough that no one really knows precisely how
data packets flow through the deepest innards of the Internet.
Researchers from Lucent Technologies' Bell Laboratories have shown that
network traffic gets smoother when communications lines become fairly
full. These counterintuitive results mean that bigger is not necessarily
always better when it comes to network links.
An email message or Web page request is broken up into many packets of
data before traveling through a network like the Internet. The packets
are then reassembled when they arrive at their destination, such as the
server hosting the email account or Web page.
An Internet connection with a light to medium amount of packet traffic
is bursty, meaning packets of data tend to travel in bunches, said William
Cleveland, a mathematician and statistician at Bell Labs. Bursty traffic
on a road means clusters of cars interspersed with periods of little or
no traffic.
When a 45-megabits-per-second line has as many as 50 or 60 connections
per second, however, the traffic starts to smooth out, said Cleveland.
"The burstiness disappears and [packets] are more randomly distributed,"
he said.
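A plausible intuition for the effect, consistent with the finding, is that a busy link interleaves many independent connections at once, so the clumps from any one sender get broken up. The Python sketch below is a toy illustration under that assumption, with invented source parameters; it is not the researchers' model. It merges a growing number of on/off senders and reports how bursty the combined stream is.

```python
import random, statistics

def bursty_source(rate, duration, burst_len=10, rng=None):
    """One on/off sender: clumps of burst_len packets separated
    by long idle gaps, averaging `rate` packets per second."""
    rng = rng or random.Random()
    t, out = 0.0, []
    while t < duration:
        out.extend(t + i * 1e-4 for i in range(burst_len))  # tight clump
        t += rng.expovariate(rate / burst_len)              # idle gap
    return [x for x in out if x <= duration]

def burstiness(arrivals):
    """Coefficient of variation of inter-arrival times: about 1.0
    for a Poisson stream, much larger for clumped traffic."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

rng = random.Random(7)
for n_senders in (1, 5, 50):
    merged = sorted(t for _ in range(n_senders)
                    for t in bursty_source(20.0, 60.0, rng=rng))
    print(f"{n_senders:3d} senders: burstiness = {burstiness(merged):.2f}")
```

In this toy model the burstiness measure falls toward 1, the Poisson value, as more senders share the link.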
The researchers noticed a trend toward smoothness when they were measuring
connections on the Internet for other research. They came up with a mathematical
theory that explained why this might happen, then tested the theory by
measuring traffic at six large Internet links.
"The theory tells you that the arrival times for packets should head to
a Poisson process," said Cleveland. A Poisson process is essentially random,
said Cleveland. "The [packet] inter-arrival times are independent of one
another," he said.
The finding is significant because smoother traffic means fewer dropped
packets. "It's better in the sense that you can use more of the link's
capacity. [A link] that's bursty will drop more packets than one that
has this Poisson character to it," said Cleveland. In addition, bursty
traffic requires larger buffer sizes in equipment like routers that queue
up data packets as they travel around the Internet.
When a packet of data travels around the Internet, it hops from router
to router to get to its ultimate destination, very much like a car that
has to turn onto different roads during a trip. Bursty car traffic means
a greater chance of a longer wait at an intersection. When a data packet
is delayed, however, it must be stored in a router's buffer, and if the
buffer is full, packets get dropped, meaning they must be sent again from
the original server.
Smoother traffic means fewer dropped packets and lower buffer requirements.
"If you can even out the load, the queuing is less, and if the queuing
is less the buffers can be smaller," said Cleveland. "Overall, the characteristics
of the traffic are very important for network design because... you've
got to know the nature of the traffic to know how you should design the
devices to accommodate it," he said.
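Cleveland's point about queuing and buffer sizes can be made concrete with a toy simulation. The sketch below uses parameters invented purely for illustration, not the researchers' model: it feeds a single first-in, first-out buffer two packet streams with the same average rate, one Poisson and one bursty, and counts how many packets overflow.

```python
import random

def poisson_arrivals(rate, duration, seed=42):
    """Poisson source: independent, exponentially distributed gaps."""
    rng, t, out = random.Random(seed), 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            return out
        out.append(t)

def bursty_arrivals(rate, duration, burst_len=20, seed=1):
    """On/off source with the same average rate: tight clumps of
    burst_len packets separated by long idle gaps."""
    rng, t, out = random.Random(seed), 0.0, []
    while t < duration:
        out.extend(t + i * 1e-5 for i in range(burst_len))  # 10-us spacing
        t += rng.expovariate(rate / burst_len)              # gap to next burst
    return sorted(x for x in out if x <= duration)

def drop_fraction(arrivals, service_rate, buffer_size):
    """Fraction of packets dropped at a FIFO queue that drains
    service_rate packets per second and holds buffer_size packets."""
    backlog, last_t, drops = 0.0, 0.0, 0
    for t in arrivals:
        backlog = max(0.0, backlog - (t - last_t) * service_rate)  # drain
        last_t = t
        if backlog >= buffer_size:
            drops += 1                  # buffer full: packet is lost
        else:
            backlog += 1
    return drops / len(arrivals)

rate, duration, buffer_size = 1000.0, 10.0, 10   # pkts/s, seconds, packets
for name, arrivals in (("poisson", poisson_arrivals(rate, duration)),
                       ("bursty ", bursty_arrivals(rate, duration))):
    loss = drop_fraction(arrivals, service_rate=1200.0,
                         buffer_size=buffer_size)
    print(f"{name}: {loss:.1%} of packets dropped")
```

With the same offered load and the same buffer, the clumped stream overflows far more often, which is why burstier traffic demands larger buffers.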
The study is important work, but is not comprehensive enough to be conclusive,
said Ahmed Helmy, an assistant professor of computer networks at the University
of Southern California. "These kinds of studies are hard due to the dynamic
nature of the Internet traffic. We need [more] samples to get anywhere
near the big picture," he said.
Several previous studies of the Internet have shown bursty characteristics
even at high traffic levels, said Helmy. A comprehensive study of Internet
traffic would require a larger number of samples that use more protocols
over a long period of time, he said.
The Bell Labs studies looked at half a dozen links, but "there are potentially
hundreds of thousands of links in the Internet, with varying loads and
characteristics," Helmy said. A more representative sample of links would
be, for example, the 100 most important links that connect the biggest
ISPs, like AOL and MSN, to the biggest backbones, like Sprint, MCI and
AT&T, said Helmy. The trouble is, "it is difficult for researchers to
get such information on links," he said.
Network software has several layers. The media layer determines how electrical
or optical signals carry data. The routing layer controls how packets
of data get from point A to point B. The transport layer puts information
into small packets for sending and reassembles the information on the
receiving end. The application layer handles commands specific to programs,
like fetching a Web page or playing a music file.
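The division of labor between layers can be pictured as nested wrapping: each layer adds its own header around whatever the layer above hands it. The Python sketch below is purely illustrative; the header strings are made up, and real protocols such as TCP, IP and Ethernet define exact binary formats.

```python
def encapsulate(payload: bytes) -> bytes:
    """Wrap application data in transport, routing and media headers.

    Header contents here are invented for illustration; real
    protocols define precise binary layouts for each layer.
    """
    segment = b"TRANSPORT|" + payload     # transport layer: packetize
    packet = b"ROUTING|" + segment        # routing layer: point A to point B
    frame = b"MEDIA|" + packet            # media layer: signals on the wire
    return frame

request = b"GET /index.html HTTP/1.0"     # application layer: HTTP command
print(encapsulate(request))
```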
Several versions of these layers of software are used on the Internet.
Different portions of the Internet use Ethernet, Asynchronous Transfer
Mode (ATM), or Synchronous Optical Network (SONET) at the media layer.
The application layer includes many different protocols like Hypertext
Transfer Protocol (HTTP) for accessing Web pages, File Transfer Protocol
(FTP) for direct file transfers, and streaming audio for music files.
"The paper only looked at HTTP at the application layer, and four out
of six links traced were ATM networks," he said. ATM in particular may
sway the analysis because it transports data in fixed-size packets, he
said.
The most difficult part of getting a comprehensive picture of Internet
traffic is the time dimension, Helmy said. Long-term studies need to be
carried out continuously in order to arrive at meaningful conclusions,
he said. "For example, two years ago most of the traffic was... Web traffic.
However, recent killer applications, namely Napster [have spawned] huge
amounts of music files... and those files have very different characteristics.
In the next years, perhaps another killer app such as videoconferencing
or short messaging between wireless devices may alter the current characteristics,"
he said.
Although difficult to achieve, a better understanding of Internet traffic
has the potential to increase network efficiency in many ways. In addition
to dictating the amount of buffer memory to include in hardware like routers
and switches that route traffic, the way traffic flows influences the
design of software protocols that control Internet congestion and queuing,
Helmy said. "You can draw parallels with building a highway. The designer
needs to know [the] average and maximum weights of cars [the road needs]
to withstand, and that leads to traffic analysis," he said.
The researchers plan to use their increased knowledge of traffic characteristics
to better design traffic engineering equipment, said Cleveland. Lucent
will also use the information to better size links for its ISP customers,
he said. "We want to size networks so they don't have to get more than
what they need," he said.
The work could find its way into commercial products in one to two years,
according to Bell Labs spokesperson Patrick Reagan.
Cleveland's research colleagues were Jin Cao, Dong Lin and Don X. Sun
of Bell Labs. The research was funded by Lucent Technologies.
Timeline: 1-2 years
Funding: Corporate
TRN Categories: Internet
Story Type: News
Related Elements: Bell Labs technical report, "Internet
Traffic Tends to Poisson and Independent as the Load Increases."