Server Talk Episode 2: What are GPUs?

As we wrap up a week of the GPU Technology Conference, GPUs are obviously a hot topic on everybody's mind. GPUs (graphics processing units) have been in production for years, utilized in everything from gaming and media to scientific and engineering computation. But do you know exactly what a GPU is and what it does?

What is a GPU?

"Server Talk" hosts Alexey and Mike are back with episode 2 to answer this very question! A GPU is a processor, much like a CPU: it executes tasks given to it by the computer. But while a CPU might have one, two, four, or eight cores, a GPU can have hundreds or even thousands. So why do we still use CPUs, you might ask? Although GPUs have far more processor cores than CPUs, each GPU core runs much slower than a CPU core. GPU cores also lack the features needed to run a modern operating system, so they are not suited to most everyday computing tasks.

CPUs are serial processors: they execute tasks one at a time in sequence, and they execute each task at a significantly faster rate than GPUs. GPUs are parallel processors: they execute many tasks at once, but each individual task runs significantly slower than it would on a CPU. In other words, the two architectures are suited to different types of applications. A GPU is great at performing relatively simple operations on large amounts of data spread across many streams, but it performs poorly when heavy or complex processing is confined to a single stream or just a few. A CPU, on the other hand, is far more powerful on a per-core basis (in terms of instructions per second) and can handle complex operations on one or a few streams of data, but it cannot efficiently handle many streams at once. To sum up, GPUs excel in environments where there are many smaller-scale calculations that would bog down the serial nature of the CPU.

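To make the serial-versus-parallel distinction concrete, here is a minimal CUDA sketch (the array size, kernel name, and scaling operation are illustrative, not from the episode). The CPU version walks the array one element at a time on a single core; the GPU version assigns one lightweight thread to each element, so the same simple operation runs across hundreds or thousands of cores at once.

    // Illustrative comparison of serial (CPU) vs. parallel (GPU) processing.
    // Names and sizes are hypothetical; compile with e.g.: nvcc scale.cu -o scale
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // GPU version: each thread handles exactly one element, so thousands of
    // simple operations execute at the same time across the GPU's many cores.
    __global__ void scaleKernel(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    // CPU version: one core walks the array one element at a time.
    void scaleSerial(float *data, float factor, int n)
    {
        for (int i = 0; i < n; ++i)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;                      // ~1 million elements
        float *host = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        scaleSerial(host, 2.0f, n);                 // serial pass on the CPU

        float *dev;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
        scaleKernel<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);   // parallel pass on the GPU
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("first element after both passes: %f\n", host[0]);  // expect 4.0
        cudaFree(dev);
        free(host);
        return 0;
    }
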
History of GPUs

Originally, the primary purpose of a GPU was rendering graphics, and nothing more. The history of graphics chips can be traced back to the 1980s, but the first consumer-level, "modern" equivalent of what we think of as a GPU was the NVIDIA® GeForce 256 (also called the NV10), released in 1999. NVIDIA® marketed it as "the world's first 'GPU'" and is generally credited with popularizing the term. As technology advanced, users began to see how the large number of cores in GPUs, relative to CPUs, could benefit computational capabilities. Following the launch of the GeForce 256, computer scientists and domain scientists from various fields started using GPUs to accelerate a range of scientific applications, since GPUs could process many parallel streams of data simultaneously. Thus, the idea of GPU computing emerged.

GPU Computing

GPU computing is the use of a GPU (graphics processing unit) together with a CPU to accelerate the performance of applications. This heterogeneous computing model allows GPUs to complement CPUs: the compute-intensive portions of an application are offloaded to the GPU, while the remainder of the code runs on the CPU. From a user's perspective, applications simply run significantly faster; early adopters reported unprecedented speedups (over 100x compared to CPU-only code in some cases). Today, GPU computing has a major presence that continues to grow, and, as a pioneer of GPU technology, NVIDIA® leads the industry with its GeForce, Quadro, and Tesla® GPU lines. Tesla® in particular is used extensively in high performance computing. Tesla® GPUs power some of the fastest supercomputers in the world, advancing scientific discovery and research in a wide range of fields: computational finance, data mining, medical imaging, molecular dynamics, weather and climate, and much more!
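
As a rough illustration of this offload model (the function names, data size, and math below are hypothetical, not from any particular application), the sketch runs the data-parallel, compute-intensive step on the GPU and leaves the sequential remainder, here a simple final reduction, on the CPU:

    // Sketch of the heterogeneous CPU+GPU model: the compute-intensive,
    // data-parallel step runs on the GPU; the sequential remainder stays on the CPU.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Offloaded portion: an expensive per-element computation, one thread per element.
    __global__ void heavyMath(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = sinf(in[i]) * sinf(in[i]) + cosf(in[i]) * cosf(in[i]); // ~1.0 per element
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *h_in = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h_in[i] = (float)i;

        float *d_in, *d_out;
        cudaMalloc(&d_in, bytes);
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        heavyMath<<<(n + 255) / 256, 256>>>(d_in, d_out, n);   // GPU: parallel heavy lifting
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

        double total = 0.0;                                    // CPU: sequential remainder
        for (int i = 0; i < n; ++i) total += h_out[i];
        printf("sum = %f (expected ~%d)\n", total, n);

        cudaFree(d_in); cudaFree(d_out); free(h_in); free(h_out);
        return 0;
    }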

Kepler - The Next Generation of GPU

NVIDIA® bills Kepler as "the world's fastest and most efficient high performance computing (HPC) architecture." Tesla® GPUs were designed from the ground up to accelerate scientific and technical computing workloads, and with the latest Kepler architecture in place they offer triple the performance of the previous architecture (Fermi). Kepler is built on a set of cutting-edge technology pillars that dramatically advance the programmability and efficiency of GPU computing. Three of the main features are:
  1. SMX (streaming multiprocessor) design, which delivers up to 3x more performance per watt compared to Fermi, as well as one petaflop of computing in just ten server racks
  2. Dynamic Parallelism, a unique technology that enables GPU threads to automatically spawn new threads, greatly simplifying parallel programming and enabling GPU acceleration of a broader set of popular algorithms (sketched in code just after this list)
  3. Hyper-Q, a feature that enables multiple CPU cores to simultaneously utilize the CUDA cores on a single Kepler GPU, dramatically raising GPU utilization while cutting CPU idle times
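
The following is a minimal, hypothetical sketch of what Dynamic Parallelism looks like in CUDA source code (the kernel names are made up for illustration): a parent kernel running on the GPU launches a child kernel directly, with no round trip to the CPU. It requires a GK110-class Kepler GPU (compute capability 3.5 or higher) and relocatable device code, e.g. nvcc -arch=sm_35 -rdc=true.

    // Sketch of Dynamic Parallelism: a GPU thread launches more GPU work itself.
    // Requires a Kepler GK110-class GPU (compute capability 3.5+) and
    // compilation with relocatable device code, e.g.: nvcc -arch=sm_35 -rdc=true
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void childKernel(int parentBlock)
    {
        // Each child thread reports which parent block spawned it.
        printf("child thread %d launched from parent block %d\n",
               (int)threadIdx.x, parentBlock);
    }

    __global__ void parentKernel()
    {
        // One thread per block decides, on the device, to spawn additional work,
        // without asking the CPU to launch it.
        if (threadIdx.x == 0)
            childKernel<<<1, 4>>>(blockIdx.x);
    }

    int main()
    {
        parentKernel<<<2, 32>>>();   // launched from the CPU as usual
        cudaDeviceSynchronize();     // parent grids finish only after their children do
        return 0;
    }
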
From official NVIDIA® material on the Tesla® K-series family of products:
Tesla® K10 GPU Accelerator: Optimized for single precision applications, the Tesla® K10 includes two ultra-efficient GK104 Kepler GPUs to deliver high throughput. It delivers up to 2x the single precision performance of the previous-generation Tesla® M2090 GPU in the same power envelope. With an aggregate performance of 4.58 teraflops peak single precision and 320 gigabytes per second of memory bandwidth across both GPUs, the Tesla® K10 is optimized for computations in seismic processing, signal and image processing, and video analytics.

Tesla® K20 GPU Accelerator: Designed to be the performance leader in double precision applications and the broader supercomputing market, the Tesla® K20 features a single GK110 Kepler GPU that includes the Dynamic Parallelism and Hyper-Q features. With more than one teraflop of peak double precision performance, this accelerator is ideal for the most aggressive high-performance computing workloads, including climate and weather modeling, CFD, CAE, computational physics, biochemistry simulations, and computational finance.
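
To picture the kind of workload the K20's Hyper-Q feature targets, here is a hypothetical CUDA sketch (the stream count, kernel name, and sizes are illustrative): several independent streams of small kernels are submitted to one GPU. On a GK110, Hyper-Q's 32 hardware work queues let these streams, standing in here for work from multiple CPU cores or processes, execute concurrently rather than being serialized behind a single queue.

    // Sketch of a Hyper-Q-friendly workload: many independent streams of small
    // kernels share one GPU. Stream count and sizes are illustrative only.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void smallTask(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 2.0f + 1.0f;   // a small piece of independent work
    }

    int main()
    {
        const int nStreams = 8, n = 1 << 16;
        cudaStream_t streams[nStreams];
        float *buffers[nStreams];

        for (int s = 0; s < nStreams; ++s) {
            cudaStreamCreate(&streams[s]);
            cudaMalloc(&buffers[s], n * sizeof(float));
            cudaMemsetAsync(buffers[s], 0, n * sizeof(float), streams[s]);
            // Each stream gets its own independent kernel; with Hyper-Q these
            // can overlap on the GPU instead of queueing one after another.
            smallTask<<<(n + 255) / 256, 256, 0, streams[s]>>>(buffers[s], n);
        }

        cudaDeviceSynchronize();               // wait for all streams to finish
        for (int s = 0; s < nStreams; ++s) {
            cudaFree(buffers[s]);
            cudaStreamDestroy(streams[s]);
        }
        printf("launched %d independent streams on one GPU\n", nStreams);
        return 0;
    }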

To learn more about how International Computer Concepts can help you leverage the benefits of graphics processing units, check out our GPU Solutions. To learn more about the 2013 GPU Technology Conference:
The GPU Technology Conference (GTC) advances global awareness of GPU computing, computer graphics, game development, mobile computing, and cloud computing. Through world-class education, including hundreds of hours of technical sessions, tutorials, panel discussions, and moderated roundtables, GTC brings together thought leaders from a wide range of fields.
