Supercomputers have become a vital part of innovative projects undertaken by collaborative teams around the world. Server clusters can be found anywhere from the offices of small businesses to compartments in U.S. Navy submarines.
So which are the fastest supercomputers on earth? The usual measurement for high-performance computing (HPC) clusters is the TOP500 ranking, which is based on the High Performance LINPACK (HPL) benchmark. LINPACK stands for “linear equations software package”, and the benchmark measures how fast a supercomputer can solve a dense system of linear equations. The results are reported in units of billions of floating point operations per second (GFLOPS).
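To make the metric concrete, here is a toy sketch of how an HPL-style GFLOPS figure is derived. The function name, problem size, and use of NumPy are my own choices; real HPL is a heavily tuned distributed code, but the flop count it credits for an LU-based solve is the standard one used below:

```python
import time
import numpy as np

def linpack_style_gflops(n=2000, seed=0):
    """Time a dense linear solve and report GFLOPS, LINPACK-style.

    HPL credits roughly 2/3*n^3 + 2*n^2 floating point operations
    for solving an n x n system via LU factorization.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)  # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return flops / elapsed / 1e9  # GFLOPS

print(f"{linpack_style_gflops():.1f} GFLOPS")
```

Run on a laptop this reports a few GFLOPS at best; a TOP500 machine reports the same ratio, just with a vastly larger problem spread across thousands of nodes.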
The high-performance LINPACK metric has long been the established standard for measuring computing performance, with intense competition worldwide for the lead spot in the TOP500. But some scientists criticize the TOP500 ranking for painting an incomplete picture of performance. The risk, as Mark Anderson describes in an article in IEEE Spectrum magazine, is that the ranking motivates computer hardware manufacturers to develop less effective technologies.
So several groups have created alternatives to the TOP500 ranking. The latest of these is the Graph 500, whose first list was unveiled at the SC2010 conference. While the TOP500 favors systems that maximize processing power (whether through CPUs alone or with hybrid CPU-GPU systems), the Graph 500, in simple terms, is a measurement of how fast the processors can communicate with the system’s memory.
As the Graph 500 website points out, “Data intensive supercomputer applications are increasingly important HPC workloads, but are ill-suited for platforms designed for 3D physics simulations.” The Graph 500 focuses on the data-intensive applications, which are becoming extremely important for business analytics, finance, and any other field where vast data sets need to be evaluated.
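The Graph 500's reference kernel is a breadth-first search over a large synthetic graph, scored in traversed edges per second (TEPS) rather than FLOPS. A minimal single-machine sketch (the graph size, seed, and function name are my own illustrative choices) shows why this score stresses memory access rather than arithmetic — the inner loop does almost no math, just pointer-chasing through memory:

```python
import random
import time
from collections import defaultdict, deque

def bfs_teps(num_vertices=10_000, num_edges=100_000, seed=0):
    """Breadth-first search over a random graph, scored in TEPS.

    Toy version of the Graph 500 idea: the score counts traversed
    edges per second, so irregular memory access dominates.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for _ in range(num_edges):
        u, v = rng.randrange(num_vertices), rng.randrange(num_vertices)
        adj[u].append(v)
        adj[v].append(u)

    start = time.perf_counter()
    visited = {0}
    queue = deque([0])
    traversed = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            traversed += 1          # every edge inspection counts
            if v not in visited:
                visited.add(v)
                queue.append(v)
    elapsed = time.perf_counter() - start
    return traversed / elapsed      # traversed edges per second

print(f"{bfs_teps():.0f} TEPS")
```

The real benchmark runs this kind of search over graphs with billions of edges, where no amount of raw FLOPS helps if the memory system can't keep up.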
As Anderson points out in his article:
For the past 15 years . . . every thousandfold increase in flops has brought with it a tenfold decrease in the memory accessible to each processor in each clock cycle . . . This means bigger and bigger supercomputers actually take longer and longer to access their memory. And for a problem like sifting through whole genomes or simulating the cerebral cortex, that means newer computers aren’t always better.
Graph 500 is not trying to replace the TOP500, just complement it. Its organizers are also cooperating with the SPEC committee (see below) to have the Graph 500 included in SPEC's benchmarks. The new Graph 500 standard draws attention to an important fact: contemporary HPC applications vary widely in their requirements, and the TOP500 only encourages computer performance suited to a limited set of those applications.
The Green500 is a riff on the TOP500 rankings, but with an obvious twist: it takes into account the energy efficiency of each supercomputer in the TOP500. The formula for determining rankings in the Green500 is simple:
LINPACK benchmark score (MFLOPS) divided by power consumption (watts), reported in MFLOPS/watt
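With some hypothetical numbers, the arithmetic looks like this (the function name and figures are illustrative only, not taken from any actual Green500 entry):

```python
def green500_score(linpack_mflops, power_watts):
    """Green500 efficiency: LINPACK performance per watt of power."""
    return linpack_mflops / power_watts

# Hypothetical system: 1 PFLOPS (10^9 MFLOPS) drawing 2 megawatts
print(green500_score(1e9, 2e6))  # 500.0 MFLOPS/watt
```

Note that the denominator is power (watts), not energy: two systems with the same LINPACK score rank identically no matter how long the run took, as long as they draw the same wattage.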
Like the TOP500, the Green500 is weighted toward processor performance. Interestingly, according to its website, accelerator-based supercomputers (particularly systems with graphics processing units, or GPUs) hold 8 of the 10 highest spots on the Green500, including first place. Since this share is much higher than in the TOP500, it suggests that hybrid CPU-GPU systems are far more energy efficient than CPU-only HPC clusters.
Like the Graph 500 and memory access rates, the Green500 draws attention to a facet of supercomputing performance that the standard TOP500 measurement overlooks: energy savings.
HPC Challenge (HPCC)
The HPC Challenge benchmark was developed several years ago with U.S. federal funding, and is a popular complement to LINPACK. HPCC consists of seven different tests, the first of which is the high-performance LINPACK itself. The other six tests mostly measure network communication and memory accessibility.
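One of HPCC's memory-oriented components is a STREAM-style sustained-bandwidth test. A toy triad kernel (the array size and function name are my own choices; the real STREAM benchmark is a tuned C/Fortran code) gives the flavor of what it measures:

```python
import time
import numpy as np

def stream_triad_gbps(n=10_000_000, alpha=3.0):
    """STREAM-style triad a = b + alpha*c, scored in GB/s.

    Unlike LINPACK, the triad does one multiply-add per element,
    so sustained memory bandwidth, not arithmetic, sets the score.
    """
    b = np.ones(n)
    c = np.ones(n)
    start = time.perf_counter()
    a = b + alpha * c
    elapsed = time.perf_counter() - start
    # The triad moves three arrays of 8-byte doubles:
    # two streamed reads (b, c) and one streamed write (a).
    bytes_moved = 3 * n * 8
    return bytes_moved / elapsed / 1e9  # GB/s

print(f"{stream_triad_gbps():.1f} GB/s")
```

A machine can post a huge LINPACK number and still score modestly here, which is exactly the gap HPCC's extra tests are designed to expose.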
While the HPCC shares the Graph 500's focus on cluster memory, it differs in that the Graph 500 is specifically designed to test the performance of data-intensive applications. The HPCC is a very robust alternative to the TOP500, and its battery of tests is used by IT professionals who design computer clusters to evaluate the potential performance of different configurations.
Standard Performance Evaluation Corporation (SPEC)
SPEC is a non-profit organization focused on developing performance benchmarks for applications outside the high-end scientific and industrial computing fields. Its membership includes corporations such as Apple, Dell, IBM, and Sun Microsystems. One of its distinctive focuses is web server technology, and several of its tests involve running PHP, Java, and mail and file servers.
Nevertheless, some SPEC CPU tests measure the performance of a system running several standard scientific applications. SPEC's focus on application-specific testing is another indication that, for most IT professionals, the TOP500 standard for measuring computing performance does not translate in a practical way to every computing use case.
The emergence of new benchmarks for computing performance beyond the TOP500 is a result, it seems to me, of HPC growing beyond its traditional bounds of elite exclusivity. More and more labs and corporations are utilizing computer clusters for advanced applications that help them supercharge their experiments, simulations, or analytics.
The diversity of applications requires a different set of standards for measuring computing performance. Metrics such as the Graph 500 and HPCC expand the focus of the TOP500 to more memory-intensive tasks, the Green500 draws attention to the efficiency of computer clusters, and SPEC measures the performance of real-world software applications across different hardware systems. These benchmarks may not be as flashy as the TOP500, but they’re probably more practical and encourage manufacturers to develop computers that meet the challenges of tomorrow.