Purdue News
_____

July 1997

Research accelerates toward faster personal computers

WEST LAFAYETTE, Ind. -- The technology that makes the fastest computers so fast -- parallel processing -- is starting to wend its way from the research community into personal computers, and a Purdue University engineer is helping speed that delivery.

Like the manager of a large corporation, Purdue's Rudolf Eigenmann is finding ways to optimize the performance of his electronic "workers" -- coaxing more speed and efficiency from computers and computer programs.

Eigenmann, assistant professor of electrical and computer engineering, works primarily with high-performance computers used in academia and research -- the souped-up, super-fast models that are up to a million times faster than the average personal computer. More computing speed could lead to more accurate weather predictions, new computer-designed drugs, safer and more aerodynamic cars and planes, and sophisticated simulations for disaster prediction and recovery.

Eigenmann has developed a computer program, called a compiler, that automatically translates conventional computer programs so they can run on a parallel processing computer, which makes the program run faster. The compiler can't be used on PC programs yet, but Eigenmann says it may only be a matter of time. "The sort of parallelism that has been dealt with mostly in the research community over the past 10 to 20 years is slowly trickling into the PC market," he says.

Eigenmann will present research on parallel programming Aug. 7-9 at the Workshop on Language and Compilers for Parallel Computing in Minneapolis.

The main difference between a parallel processing machine and a typical personal computer is in the number of microprocessors, or chips. A personal computer usually has one microprocessor. As manufacturers have packed more tiny circuits on each chip, the chips have become faster and more powerful, able to perform more and more operations per second. But each chip still can do only one, or a few, operations at a time, and the faster the chip, the more expensive it is to make. Parallel computers are faster than PCs because they use tens or hundreds of chips that operate simultaneously.

"In parallel, the chips all work at the same time," Eigenmann explains. "We just split the problem -- the computer program -- into ten parts, or a hundred or a thousand parts. Each chip then takes one part and executes it in parallel with the other chips."

Eigenmann explains parallelism with an analogy of thousands of workers performing a job.

"If the job has enough independent subparts that you can assign to each worker, the process will go faster," he says. "The more workers you have, the faster it will go. But if the job requires people to communicate with each other, or if one has to wait for another to perform a task, then there is not much 'parallelism' in that job. In that case, the only way to make the system faster would be to increase the performance of the individual worker, or, in the case of a computer, the chip."

In one research project, Eigenmann plans to combine microprocessors of different speeds, like workers of different abilities, in an effort to build a parallel processing machine capable of a quadrillion floating point operations per second -- a "petaflop" computer, for short. Such a computer would be one thousand times faster than the fastest computers currently available.

"If cost were no object, I would put a million of the fastest individual chips available into a parallel processor, but no one could ever afford such a machine," Eigenmann says. "Using a combination of slow chips, which are less expensive, and a few fast chips, I think we can come up with an architecture for a truly usable petaflop computer that is not far beyond the cost of current high-performance computers."

Last fall, the National Science Foundation, in conjunction with NASA and the Department of Defense, funded eight research projects pursuing petaflop computing, including Eigenmann's.

The largest parallel machines cost millions or tens of millions of dollars, and even mid-sized parallel machines can cost several hundred thousand dollars. But the parallelism that was once found only in the research community is slowly finding its way into personal computers.

"There are machines available now that have two or four Pentium processors in them, and these can cost $10,000 to $30,000, relatively affordable for a research lab or a business," Eigenmann says. "With four Pentium processors, for example, these machines could do up to two billion calculations per second, which is enormously fast compared to the first supercomputers."

In another research project, Eigenmann developed the compiler, called POLARIS, which identifies parallelism in conventional programs, then automatically extracts those parts and assigns them to different processors in a parallel computer.

"This compiler is now used primarily for scientific and engineering applications, where it's easier to find parallelism than in other programs," Eigenmann says. "But as more two- and four-processor computers show up on our desks, if I can give you a compiler that automatically achieves a twofold or fourfold performance increase in your application, that's an immediate, very practical tool."

Currently, POLARIS works only on programs written in the FORTRAN computer language, but Eigenmann expects to extend it to programs written for PCs, which are typically written in C, C++ or Java.
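The kind of analysis involved can be sketched with two tiny C loops. This is a rough illustration of the general idea, not POLARIS itself or its output, and the names are invented: in the first loop every iteration is independent, so a parallelizing compiler can divide the iterations among processors; in the second, each iteration depends on the one before it, so there is little parallelism to extract.

    /* Rough illustration of the distinction a parallelizing compiler
       must make; array names and sizes are invented for the example. */
    #define N 1000

    void scale(double a[N], double b[N])
    {
        int i;
        /* Each iteration touches only its own a[i] and b[i], so the
           iterations are independent and can be handed to different chips. */
        for (i = 0; i < N; i++)
            a[i] = 2.0 * b[i];
    }

    void running_sum(double a[N])
    {
        int i;
        /* Each iteration needs the result of the previous one (a[i-1]),
           so the iterations must wait on each other -- little parallelism
           here for the compiler to extract. */
        for (i = 1; i < N; i++)
            a[i] = a[i] + a[i - 1];
    }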

Source: Rudolf Eigenmann, (765) 494-1741; e-mail, eigenman@ecn.purdue.edu
Writer: Amanda Siegfried, (765) 494-4709; e-mail, amanda_siegfried@purdue.edu
Purdue News Service: (765) 494-2096; e-mail, purduenews@purdue.edu

