Next time you feel like bemoaning the state of today’s big-thumbed, small-minded, video-game generation, think about the social benefit all those gaming man-hours are contributing to medical research.
A team from UC San Diego created an algorithm for CT scan image reconstruction using an NVIDIA GPU, discovering in the process that it was far more effective at creating targeted images of tumor cells. This improvement over traditional medical scans reduces the total radiation cancer patients need to endure. Current technology requires repeated scans to produce an image detailed enough for doctors to identify potential tumors. Using GPUs and gaming hardware, scientists were able to reduce the amount of radiation by a factor of as much as ten.
“In my mind, the most interesting and compelling possibilities of this technique are beyond cancer radiotherapy,” Steve Jiang, senior author of the study and a UCSD associate professor of radiation oncology, said in a statement. “CT dose has become a major concern of the medical community. For each year’s use of today’s scanning technology, the resulting cancers could cause about 14,500 deaths. Our work, when extended from cancer radiotherapy to general diagnostic imaging, may provide a unique solution to solve this problem by reducing the CT dose per scan by a factor of 10 or more.”
CT scanning is widely used and extremely useful in the field of computerized imaging. A scanner snaps a series of X-ray pictures whilst rotating around the subject’s body. The pictures are assembled to create a cross section of the body, and these cross sections are then combined to generate a 3D image. Throw in a large number of Fourier transforms, where data about neighboring points is used to improve information on each individual point, mix in a few B-spline interpolations (a mathematical technique that accurately fits smooth curves to data points; I am not making this stuff up), and you wind up with the sort of computational dynamic that benefits tremendously from parallel processing.
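The B-spline idea can be sketched in a few lines. This illustrative snippet is not from the UCSD study; it simply uses SciPy’s `make_interp_spline` to fit a smooth cubic curve through a handful of sample points, the way a reconstruction algorithm fits smooth curves to measured data:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Coarse sample points (a stand-in for measured intensity values).
x = np.linspace(0.0, 10.0, 11)
y = np.sin(x)

# Fit a cubic (k=3) B-spline through the points.
spline = make_interp_spline(x, y, k=3)

# Evaluate the resulting smooth curve on a much finer grid.
x_fine = np.linspace(0.0, 10.0, 1000)
y_fine = spline(x_fine)

# An interpolating spline passes exactly through the input points.
print(np.allclose(spline(x), y))  # True
```

Each fine-grid evaluation is independent of the others, which is exactly the property that makes this kind of work a good fit for parallel hardware.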
Where CPUs perform interpolations one data point at a time, GPUs can take multiple points and interpolate them in parallel. The high resolution (8000×8000 pixels) of the images and large file sizes make this the sort of computational problem ideally suited to parallel processing. This translates to a 3.6X speedup of segmentation time, compared with CPU-only processing on an Intel quad-core Nehalem-class processor. More recent tests point to a speedup of up to 15X. Klaus Mueller at the State University of New York-Stony Brook found that using GPU processing could reduce the time needed for a CT scan reconstruction from 135 seconds to less than seven seconds.
Software has also made it much easier to put GPUs to work on these tasks. Solving non-graphics problems on the GPU once meant disguising the data as vertices or pixels and manipulating it through complicated graphics APIs. Now, with platforms such as NVIDIA’s CUDA, someone with a background in ‘C’ can write a program that sidesteps that machinery entirely.
Escaping into the virtual reality of video games appeals to a growing segment of the population. Providing them with the best user experience drives sales.
Now it also drives medical innovation.