The challenge of programming for multicore processors

Multicore microprocessors, which emerged after CPU manufacturers hit the power wall and could not sufficiently cool the ever-shrinking transistors on single-core processors, hold a lot of promise for the future of computing. But one major obstacle prevents them from revolutionizing CPU speeds: the challenge of programming for parallel and multicore computing.

In a fascinating IEEE Spectrum article called "The Trouble with Multicore," David Patterson describes the history of how processor production shifted from single-core to multicore and the unfulfilled promises of the latter technology. Patterson refers to the advent of multicore processors as a "Hail Mary pass" on the part of integrated circuit manufacturers, thrown when Moore's Law (the observation that transistor counts on a chip double roughly every two years, which for decades translated into CPU speeds doubling about every 18 months) hit the power wall described above.

At that point, Intel and AMD began building processors with multiple cores, and Moore's Law, at least the part about the number of transistors in a single processor, continued to hold true. A multicore processor is, in rough terms, several processors bundled into one. So instead of measuring the progress of CPU technology solely in terms of transistor counts and clock rates, core counts have now become equally important for increasing computing performance.

But their promise is limited by software, not hardware. Programming for parallel computing, as Patterson describes, has been one of the greatest challenges in computer science for decades. No programming language yet created can effectively handle a diverse range of parallel applications. Patterson writes, "It's much easier to parallelize programs that deal with lots of users doing pretty much the same thing rather than a single user doing something very complicated. That's because you can readily take advantage of the inherent task-level parallelism of the problem at hand." So programming for parallel computing (a requirement for fully utilizing multicore processors) has succeeded only in a few heavily funded applications, such as bank ATM software, online airline ticketing, computer graphics (GPUs), and scientific computing. No general language or method has been found that applies to them all.
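Patterson's point about task-level parallelism can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not anything from the article: `handle_request` stands in for the kind of independent per-user work (an ATM lookup, a ticket query) that parallelizes easily, because no request depends on any other.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Stand-in for real per-user work (e.g., fetching an account balance).
    # Each call is independent of every other call.
    return user_id * 2

def handle_all(user_ids):
    # Task-level parallelism: hand each independent request to a worker.
    # No locks or coordination are needed because the tasks share nothing.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(handle_request, user_ids))

print(handle_all(range(8)))
```

The parallel version produces the same results as a serial loop, in the same order; contrast this with "a single user doing something very complicated," where the steps depend on one another and cannot be farmed out so cleanly.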

The moral of the story is that software is now one of the major bottlenecks in improving CPU performance. As Patterson forcefully notes after contrasting programming for single-core (which was relatively simple, given that increased transistor counts guaranteed programs ran faster no matter how they were written) and multicore processors, "The La-Z-Boy era of program performance is now officially over, so programmers who care about performance must get up off their recliners and start making their programs parallel."