
Just over the weekend, I discovered (through my RSS feeds) some fascinating new uses for GPU computing. I thought I would briefly describe one of them here for your interest (more to come later this week).

GPUs have their origin in computer gaming, where graphics cards were equipped with their own micro-processors to handle large quantities of simple calculations, taking that computational load off the CPU.
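To make the "large quantities of simple calculations" idea concrete, here is a minimal sketch (in plain Python, with hypothetical names) of the data-parallel pattern GPUs excel at: the same small, independent operation applied to every element. On a GPU, each of these per-pixel calculations could run on its own core simultaneously; here we just map it sequentially on the CPU to show the structure.

```python
# Each pixel gets the same small, independent calculation -- the
# data-parallel pattern a GPU accelerates by running thousands of
# these operations at once. This sequential version shows only the
# shape of the work, not the parallelism.

def brighten(pixel, amount=40):
    """Add `amount` to a grayscale pixel value, clamped to 0-255."""
    return min(255, pixel + amount)

frame = [10, 128, 250, 64]           # a tiny stand-in for an image
result = [brighten(p) for p in frame]
print(result)                        # [50, 168, 255, 104]
```

Because no pixel's result depends on any other pixel, the work can be split across as many processing cores as you have, which is exactly what a graphics card does.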

New Scientist has published an article (accessed via HPCwire.com) about how a scientist at MIT is using GPUs to discover how the human brain recognizes objects in images. In essence, he’s doing the reverse of what GPUs were used for in the gaming industry: instead of creating the virtual world of a computer game, he is using GPUs to analyze the “real world” inside a human mind.

An astonishing facet of this research is the hardware that Nicolas Pinto, the MIT researcher, used:

Last year, for less than $3000, [Pinto] built a 16-GPU “monster” desktop supercomputer to generate and test over 7000 possible variations of an object-recognition algorithm on video clips.

[. . .] He says this kind of work would previously have only been possible with a fully fledged supercomputer.

“If we weren’t newcomers in this field and could apply for multi-million dollar grants, then yes, we could probably get one of these massive computers from IBM,” he says. “But if money is an issue, or you are a newcomer, that is too expensive. It’s very cheap to buy a GPU and explore.”

As Mr. Pinto’s example shows, GPUs are making supercomputing available to just about everyone, regardless of budget. Also, if you’re interested, check out his flickr.com photos of the 16-GPU system he built – very cool.