CUDA GPU Technology
ICC's solutions are powered by the NVIDIA® CUDA GPU architecture
At ICC, we integrate our top technology with the NVIDIA® CUDA GPU architecture, the simplest way for you to purchase, utilize, and manage a GPU-based cluster. GPU supercomputing has never been easier than with our NovaServ solutions, which provide optimal value by minimizing the time you spend dealing with the technology and allowing you to focus on what you do best.
If you are interested in learning more about GPU supercomputing with our CUDA-powered solutions, contact an ICC GPU System Sales Engineer and discover why ICC and NVIDIA® are right for you.
A History of the GPU and CUDA
The first GPU was introduced to the market by NVIDIA® back in 1999, although many parties contributed to the development of the technology beforehand. Researchers soon applied it to scientific computing, in fields such as medical imaging and electromagnetics, as well as computer science. The excellent floating point performance of GPUs delivered a huge performance boost for a range of scientific applications. From these new trends came GPGPU (General-Purpose computing on GPUs).
However, GPGPU required programming the GPU through graphics languages like OpenGL and Cg, which meant developers had to make their scientific applications look like graphics applications, mapping them onto programs that drew triangles and polygons. This limited the accessibility of GPUs for science.
NVIDIA® realized the potential to bring this performance to the larger scientific community and invested in making the GPU fully programmable for scientific applications, adding support for high-level languages like C, C++, and Fortran. This work led to the CUDA architecture for the GPU.
CUDA is a parallel computing platform and programming model invented by NVIDIA® that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). The CUDA architecture consists of hundreds of processor cores that operate together to crunch through an application's data set. With this architecture, NVIDIA revolutionized GPGPU and GPU technology, and now produces systems with teraflops (one trillion floating point calculations per second) of performance.
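To illustrate the programming model described above, the following is a minimal, generic CUDA C sketch (not ICC- or Tesla-specific code): each of the GPU's many cores runs the same kernel function on a different element of the data set in parallel.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread handles one element: the hundreds of cores described
// above each execute this function on a different index in parallel.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) data.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

Compiled with `nvcc`, the same source scales transparently from a single GPU to the many-GPU clusters described below, because the grid of thread blocks is scheduled across however many cores the hardware provides.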
Tesla GPU Line
NVIDIA's Tesla GPU solutions combine impressive parallel processing power with the capacity to cluster and scale as your needs grow larger and more complex. With configurations spanning 10 to 42 teraflops, they address a wide range of computational needs and significantly outperform CPU-only systems.
The Tesla 20-series GPU is based on the "Fermi" architecture, the latest CUDA architecture. Fermi is optimized for scientific applications with key features such as 500+ gigaflops of IEEE-standard double precision floating point hardware support, L1 and L2 caches, ECC memory error protection, local user-managed data caches in the form of shared memory dispersed throughout the GPU, coalesced memory accesses, and more.
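The user-managed shared memory mentioned above acts as a fast on-chip scratchpad. As a hedged sketch (generic CUDA, not tied to any particular Tesla model), a block-wide sum reduction might stage data in shared memory like this:

```cuda
// Sums up to 256 elements per block; each block writes one partial sum.
__global__ void blockSum(const float *in, float *out, int n) {
    // Per-block scratchpad in on-chip shared memory: the user-managed
    // data cache described above, much faster than external GDDR5.
    __shared__ float cache[256];

    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    // Consecutive threads read consecutive addresses, so these global
    // loads are coalesced into wide memory transactions.
    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();  // ensure all loads finish before reducing

    // Tree reduction within the block: halve the active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum back to global memory.
    if (tid == 0)
        out[blockIdx.x] = cache[0];
}
```

Staging the data in shared memory means each element is fetched from global memory only once, while the repeated reads and writes of the reduction happen entirely in the fast on-chip cache.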
Tesla is Built for Performance and Reliability
- Full double precision floating point performance
  - 515 gigaflops on Tesla C2050, M2050, and S2050 products
- ECC protection for uncompromised data reliability
  - Covers memories inside the GPU as well as the external GDDR5 memory
- Zero-error-tolerance stress testing
- Faster PCIe communication
  - The only NVIDIA® product with two DMA engines for bi-directional PCIe communication