After almost a year-long run, the Jaguar supercomputer at Oak Ridge National Laboratory in Tennessee has relinquished its title as the world’s fastest computer. This honor now belongs to the Tianhe-1A supercomputer located in the National Supercomputing Center in Tianjin, China.
Tianhe-1A is expected to officially take the lead of the TOP500.org list of the world’s fastest supercomputers sometime in mid-November. It clocked an impressive 2.507 petaflops on the LINPACK benchmark, roughly the combined performance of supercomputers #6 through #10 on the Top 500 list, according to insideHPC. Jaguar, now the second most powerful supercomputer in the world, posted a LINPACK performance of about 1.75 petaflops.
Although Tianhe-1A may re-ignite the anxiety in the West that usually accompanies news of great achievements from East Asia, this is not the first time that America or Europe has lost the #1 place on the Top 500. In 2002, Japan captured the top spot with its Earth Simulator (ES) supercomputer, which remained the world’s fastest until September of 2004, when IBM’s Blue Gene/L system at Lawrence Livermore National Laboratory surpassed it. The quasi-geopolitical competition for computing power is far from over, but China’s ascendancy is actually one of the less interesting things about Tianhe-1A.
Tianhe-1A could usher in a new era of “personal supercomputing”. It is the first Top 500 leader to make extensive use of GPUs (Graphics Processing Units): it combines 7,168 NVIDIA Tesla M2050 GPUs with 14,336 Intel CPUs. By comparison, Jaguar has 37,376 AMD CPUs and no GPUs.
What is the significance of this? I think this may mark the tipping point at which GPUs become far more widespread in the supercomputing market than they already are. Because GPUs depart from the CPU’s traditional serial, one-instruction-stream approach to number crunching, instead running thousands of lightweight threads in parallel, they can execute certain highly parallel tasks much more efficiently than CPUs. Working together, GPUs and CPUs in a computing cluster (or even a single system) can deliver much greater performance for the same cost as a traditional CPU-only cluster.
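As a rough CPU-side analogy for this data-parallel style (an illustration, not Tianhe-1A’s actual code), NumPy applies one operation across an entire array at once, much as a GPU applies one kernel across thousands of elements in parallel, whereas a plain Python loop grinds through one element at a time:

```python
import numpy as np

n = 1_000_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

# Serial style: one element at a time, like a single CPU thread.
serial = [a[i] * b[i] + 1.0 for i in range(n)]

# Data-parallel style: one operation over the whole array,
# analogous to a GPU kernel launched over thousands of threads.
parallel = a * b + 1.0

# Both produce identical results; the data-parallel form is
# dramatically faster because the work is expressed all at once.
print(np.allclose(serial, parallel))
```

The same program expressed both ways gives the same answer; the difference is that the second form exposes all the parallelism to the hardware up front, which is exactly what makes GPUs effective on this kind of workload.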
The diagram below shows this principle applied to the Top 500 list of supercomputers. The blue line is the current performance of those clusters, while the green line shows how much faster those same systems would have run if they had been built, for the same cost, with a combination of CPUs and GPUs. Admittedly, this graph comes from NVIDIA, the leading vendor of GPUs. Nevertheless, the fact that Tianhe-1A delivers over 40% more petaflops than Jaguar (2.507 vs. 1.75) while using roughly 40% fewer processing units (21,504 vs. 37,376) also speaks to the advantages of a hybrid (CPU + GPU) cluster.
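That back-of-envelope comparison can be checked directly from the figures quoted earlier in this article:

```python
# Figures quoted earlier in the article.
tianhe_pflops = 2.507
jaguar_pflops = 1.75

tianhe_units = 7_168 + 14_336   # GPUs + CPUs
jaguar_units = 37_376           # CPUs only

speedup = tianhe_pflops / jaguar_pflops - 1.0
unit_reduction = 1.0 - tianhe_units / jaguar_units

print(f"Performance gain over Jaguar: {speedup:.0%}")       # 43%
print(f"Reduction in processing units: {unit_reduction:.0%}")  # 42%
```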
GPUs also make a high-performance computing (HPC) cluster more energy efficient. According to NVIDIA’s press release, Tianhe-1A is three times more energy efficient than a CPU-only system delivering the same 2.507 petaflops – “the difference in power consumption is enough to provide electricity to over 5,000 homes for a year”.
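The press release gives no absolute power figures, but the two quoted claims can be combined into a rough estimate. Assuming an average household draws on the order of 1.2 kW (an illustrative assumption, not a figure from the source), the “5,000 homes” savings implies roughly 6 MW saved; and since a 3x efficiency gain means a CPU-only equivalent would use 3x the power, the savings equal two-thirds of the CPU-only figure:

```python
# Back-of-envelope estimate from the two claims quoted above.
# ASSUMPTION (not from the article): average household draw ~1.2 kW.
avg_home_kw = 1.2
homes = 5_000

savings_mw = homes * avg_home_kw / 1_000   # power saved vs. a CPU-only build

# 3x the efficiency at the same petaflops means 1/3 the power,
# so the savings are 2/3 of the CPU-only power draw.
cpu_only_mw = savings_mw * 3 / 2
hybrid_mw = cpu_only_mw / 3

print(f"Estimated savings:   {savings_mw:.0f} MW")   # 6 MW
print(f"CPU-only equivalent: {cpu_only_mw:.0f} MW")  # 9 MW
print(f"Hybrid (actual):     {hybrid_mw:.0f} MW")    # 3 MW
```

Under that assumption, the hybrid machine’s draw lands in the low single-digit megawatts, which is the right order of magnitude for a system of this class.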
Yesterday, MathWorks announced GPU support for MATLAB, a leading mathematical application with over one million users. More and more of the top software packages in fields such as life science, oil and gas exploration, finance, and defense are following suit.
I think the main story of Tianhe-1A is not that China has overtaken the U.S. and Europe in the supercomputing race. Rather, it is that Tianhe-1A can popularize non-traditional computing technologies like GPUs and make the idea of a “personal supercomputer” (for all researchers, scientists, analysts, etc.) more of a reality. Tianhe-1A is not only the first supercomputer from China to lead the Top 500, but also an entirely different type of supercomputer.