Processor design tunes memory on the fly

By Kimberly Patch, Technology Research News

Researchers at the University of Rochester have found a way to make on-the-fly adjustments to the many memory devices contained in microprocessors. The plan promises to boost processing speed and lower energy use.

The researchers' Complexity-Adaptive Processing (CAP) scheme consists of hardware modifications to standard chips and software that monitors the way programs run. The software tracks how programs use the random access memory (RAM) devices on a chip and adjusts RAM size accordingly; the hardware modifications make those adjustments possible.

The CAP scheme should make chips more efficient in much the way a thermostat makes a home heating system more efficient, said David Albonesi, an assistant professor of electrical and computer engineering at the University of Rochester. A thermostat monitors the temperature in a house and varies the heat output based on that feedback, rather than just blasting the same amount of heat all the time. In a similar way, CAP is "a feedback and control system incorporated into the microprocessor," said Albonesi.

A typical computer chip is fairly rigid. It is made up of millions of components clustered in sections that are assigned specific tasks. Many of these sections, or hardware structures, include RAM: caches, branch predictors, translation lookaside buffers, and instruction queues. The amount of RAM each of these structures contains is fixed at design time, and those fixed sizes are necessarily compromises.

For example, computer chips typically have first-level and second-level memory caches, or buffers. Chip designers choose cache sizes based on what software programs need on average. Because there is a speed trade-off with cache size -- the smaller the cache, the faster it is -- it is useful to make the first-level cache as small as possible.
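The cache-size compromise can be sketched with a back-of-the-envelope average-access-time formula. The function and all its numbers below are illustrative assumptions, not figures from the article or the CAP design:

```python
# Illustrative sketch (not from the article): why first-level cache size
# is a compromise. Hit rates and latencies are made-up example numbers.

def avg_access_time(l1_hit_rate, l1_latency, l2_latency):
    """Average memory access time in cycles: either the access hits in the
    small, fast first-level cache, or it falls through to the larger,
    slower second-level cache."""
    return l1_hit_rate * l1_latency + (1 - l1_hit_rate) * l2_latency

# A smaller L1 is faster per hit but misses more often; a larger L1 is
# slower per hit but misses less. Which wins depends on the program.
small_l1 = avg_access_time(l1_hit_rate=0.90, l1_latency=1, l2_latency=10)  # about 1.9 cycles
large_l1 = avg_access_time(l1_hit_rate=0.97, l1_latency=2, l2_latency=10)  # about 2.24 cycles
print(small_l1, large_l1)
```

For this (hypothetical) workload the small cache wins; a program with a larger working set would flip the result, which is exactly the trade-off a fixed design cannot escape.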
If the cache turns out to be too small, the program goes to the much larger, and significantly slower, second-level cache. It's "a fundamental trade-off that designers struggle with... sometimes when you're running a certain program you want a small fast cache and other times you want a bigger slower cache," said Albonesi. In the end, designers must settle on a compromise that makes the most programs run the fastest.

The hardware modifications work like this: as computer chips have become more complicated, they have required longer and longer wires to connect components. The longer the wire, the longer a signal takes to reach the other side. Some of today's wires are long enough to create unacceptable signal delays, which is remedied by putting repeaters in the middle of the wires to speed the signal along. Today's complicated chips may have hundreds or thousands of wire repeaters, according to Albonesi. His scheme calls for replacing the repeaters with switches "so that you can turn things on or off."

Although RAM is set at certain sizes, it is segmented with wires. "A big RAM consists of several smaller RAMs, and that's done both for performance and power reasons," said Albonesi. The CAP system changes RAM size by using the switches to turn off some of the segments assigned to a cache, making the cache smaller.

The CAP software can determine an optimum size and make the switch on the order of tens of clock cycles, said Albonesi. On a 500-megahertz chip, where each cycle lasts two nanoseconds, 10 to 100 clock cycles works out to roughly a 50-millionth to a 5-millionth of a second. This allows the cache trade-off to be decided both by what program is running and by what a given program needs at a given moment. "The trade-off can change even when a single program is executing," said Albonesi. Optimizing the RAM size speeds computing, he said. In addition, turning off parts of the RAM that are not being used saves power.
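The feedback-and-control idea above can be sketched as a simple loop: measure the miss rate over an interval, then switch RAM segments on or off accordingly. This is our minimal illustration of the concept, with invented segment counts and thresholds, not the actual Rochester hardware policy:

```python
# Hedged sketch of a CAP-style feedback loop (an illustration, not the
# actual CAP design): monitor the cache miss rate over an interval and
# toggle RAM segments on or off to resize the cache.

NUM_SEGMENTS = 8                  # a big RAM built from smaller RAM segments
HIGH_MISS, LOW_MISS = 0.10, 0.02  # made-up miss-rate thresholds

def adjust_cache(active_segments, miss_rate):
    """Grow the cache when misses are high, shrink it (saving power)
    when misses are low; otherwise leave it alone."""
    if miss_rate > HIGH_MISS and active_segments < NUM_SEGMENTS:
        return active_segments + 1   # switch another segment on
    if miss_rate < LOW_MISS and active_segments > 1:
        return active_segments - 1   # switch a segment off
    return active_segments

# A program phase with many misses grows the cache...
segments = adjust_cache(4, miss_rate=0.15)           # -> 5
# ...and a phase that fits easily lets it shrink to save power.
segments = adjust_cache(segments, miss_rate=0.01)    # -> 4
```

In the real scheme the resize happens in hardware within tens of clock cycles; the sketch only shows the control decision, not the switching mechanism.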
Albonesi is quick to point out that his research does not compete with field-programmable gate arrays (FPGAs), chips made up of units that can be reconfigured in many ways, but simply uses one type of reconfigurability to make standard chips more efficient.

Although the idea of switching parts of a computer chip on and off to save power is not new, "making pieces smaller dynamically sounds like a clever idea if you're not worrying about area use and you're only worrying about performance and power," said John Wawrzynek, a professor of electrical engineering and computer science at the University of California at Berkeley. "It sounds like he has taken a different approach on the granularity of reconfigurability. The traditional reconfigurable computing approach uses fairly fine-grained elements that are reconfigurable and based on FPGA-type devices. ... [The CAP approach] is more on a coarse level," said Wawrzynek.

The researchers' next step is a three-year plan to design a prototype chip in collaboration with IBM Research. Their goal is to increase efficiency and decrease power use by a combined factor of five to ten, said Albonesi.

Albonesi's colleagues in the research are Sandhya Dwarkadas, Eby Friedman and Michael Scott, all of the University of Rochester. The work was funded by the National Science Foundation; their continuing work will be funded by DARPA. Commercial chips that incorporate the scheme are at least four or five years away, Albonesi said.

Timeline: > 4 years
Funding: NSF, DARPA
TRN Categories: Integrated Circuits
Story Type: News
Related Elements: Diagram
September 20, 2000
© Copyright Technology Research News, LLC 2000-2006. All rights reserved.