U.S. Wins In Supercomputer Wars

IBM, Silicon Graphics systems at government labs take the top two spots on fastest supercomputer list.

November 12, 2004


Two years after losing their technical lead in the supercomputing race, U.S. manufacturers reclaimed preeminence in the field last week, as systems built by IBM and Silicon Graphics Inc. under government contracts took the top two spots on the list of the world's fastest machines.

IBM's Blue Gene/L, being installed at Lawrence Livermore National Laboratory in California, is now the world's fastest computer, capable of a staggering 70.72 trillion computations per second (70.72 teraflops). That's nearly double the capacity of the previous fastest system, the Japanese government's Earth Simulator, installed by NEC in Yokohama, Japan, in 2002 and capable of sustaining 35.86 trillion floating-point operations per second. The Earth Simulator, which stunned technologists in the U.S. government and industry when it claimed the mantle of world's fastest, slipped to No. 3 on a closely watched list of the world's 500 fastest supercomputers compiled by a group of computer scientists.

No. 2 on the new Top 500 list is SGI's Columbia system at NASA's Ames Research Center in Silicon Valley, which sustains 51.87 teraflops. The results were achieved on the Linpack benchmark, which measures how quickly a machine can solve a large, dense system of linear equations, and were announced at a supercomputing conference held in Pittsburgh last week.
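For readers unfamiliar with the benchmark, the sketch below shows in rough outline what a Linpack-style measurement does: it times the solution of a dense linear system by Gaussian elimination and divides the standard operation count by the elapsed time. This is an illustration only, not the High-Performance Linpack code used for Top 500 submissions, and the matrix size N here is an arbitrary choice.

/* Simplified Linpack-style measurement (illustrative sketch, not HPL):
 * solve A x = b by Gaussian elimination with partial pivoting and
 * report the floating-point rate. N is an arbitrary example size. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define N 1000

int main(void) {
    static double a[N][N], b[N], x[N];

    /* Fill A and b with pseudo-random values. */
    srand(42);
    for (int i = 0; i < N; i++) {
        b[i] = (double)rand() / RAND_MAX;
        for (int j = 0; j < N; j++)
            a[i][j] = (double)rand() / RAND_MAX;
    }

    clock_t start = clock();

    /* Forward elimination with partial pivoting. */
    for (int k = 0; k < N - 1; k++) {
        int piv = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[piv][k])) piv = i;
        if (piv != k) {
            for (int j = 0; j < N; j++) {
                double t = a[k][j]; a[k][j] = a[piv][j]; a[piv][j] = t;
            }
            double t = b[k]; b[k] = b[piv]; b[piv] = t;
        }
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j < N; j++)
                a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }

    /* Back substitution. */
    for (int i = N - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < N; j++)
            s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
    }

    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* Standard Linpack operation count: 2/3 n^3 + 2 n^2 flops. */
    double flops = 2.0 / 3.0 * (double)N * N * N + 2.0 * (double)N * N;
    printf("Solved %dx%d system in %.3f s: %.2f Gflop/s\n",
           N, N, seconds, flops / seconds / 1e9);
    return 0;
}

Real Top 500 runs use blocked, distributed-memory variants of this computation on far larger matrices spread across thousands of processors, but the principle is the same: measured floating-point operations per second on a dense solve.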

"The U.S. is No. 1 and No. 2 on the Top 500--that's clearly the top story," says Steve Wallach, a longtime supercomputer designer who is now a VP of technology at Chiaro Networks Ltd., which makes a high-speed optical router.

U.S. technology is progressing rapidly--IBM says it's on track to quadruple the size of the Blue Gene/L system it's assembling for Livermore to achieve a benchmark result of 360 teraflops by next year, 10 times as fast as the Earth Simulator. And U.S. manufacturers could reach a petaflop of performance--one quadrillion operations per second--by 2008 or sooner. "This is really certifying the vitality of the American computer manufacturers," says Thom Dunning, the incoming director of the National Center for Supercomputing Applications in Illinois. "It's important to note that it was done by two vendors using two very different paths." IBM's system uses more than 32,000 embedded processors designed for low power and fast, on-chip data movement, whereas SGI has built fast interconnections between more than 10,000 Intel Itanium processors. Both approaches could yield more affordable computing power.

But many computer scientists are concerned that U.S. supercomputing is in danger of slipping behind again because the government isn't investing enough in the field. A panel of computer scientists convened by the National Research Council this week released a report, prepared for the Energy Department, that called for the government to increase its funding for high-performance computing to $140 million per year, nearly $100 million more than the government is estimated to spend on the field today. In the report, called "Getting Up to Speed: The Future of Supercomputing," the panel warned that the government needs to make a long-term investment in high-performance computing technologies to give U.S. scientists the best tools for fields including nuclear weapons stockpile stewardship, intelligence, and climate research. "The pipeline is now drying up," says Wallach, a member of the committee that prepared the report.

Jack Dongarra, a computer science professor at the University of Tennessee who also served on the committee and helps assemble the twice-yearly Top 500 list, says the U.S. isn't investing enough in software programming tools and algorithms to keep pace with the huge hardware systems being assembled. "We're using a programming paradigm--Fortran and C--that's been around since the '70s," he says.
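To give a concrete sense of the paradigm Dongarra describes, the sketch below shows the style of code that typically runs on these machines: C with explicit message passing. The article names only the languages, not a toolkit; MPI is used here as a representative assumption, with each process summing part of a range and rank 0 collecting the result.

/* Illustrative sketch of the Fortran/C-era, message-passing style of
 * supercomputer programming (MPI assumed as the messaging layer).
 * Each process sums its slice of 1..n; rank 0 combines the results. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process handles every size-th number, starting at rank+1. */
    long long n = 1000000, local = 0, total = 0;
    for (long long i = rank + 1; i <= n; i += size)
        local += i;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum 1..%lld = %lld across %d processes\n", n, total, size);

    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper such as mpicc and launched with mpirun, the program makes the point behind Dongarra's complaint: the programmer, not the system, decides how work and data are split across thousands of processors.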

Supercomputers keep supersizing: 398 of the 500 fastest machines on the current list exceed 1 teraflop, and all 500 will within six months, according to Dongarra. Meanwhile, clusters are expanding as users gang together thousands of commodity processors to perform jobs that could be done by fewer, more expensive processors. "That may be cheaper," says Dongarra, "but you end up investing more in the programming. Some people say you pay more for the software than you do for the hardware, and that washes away the advantage."
