The problem of parallel computing is occupying the minds of a growing number of researchers. Why is this age-old concept so “hot” today? In the first post in this series on the Computing Community Consortium (CCC) blog, David Patterson, Professor of Computer Science at UC Berkeley, gave his thoughts and the rationale for increased government funding to solve the multicore challenge. In the second, Andrew Chien of Intel gave his perspective on the issue, with a particular focus on the challenges facing us in education and funding. In this piece, the third in the series, Microsoft’s Dan Reed gives us his views on some of the potential benefits of progress in this research area.
“For over thirty years, we have watched the great cycle of innovation defined by the commodity hardware/software ecosystem: faster processors enable software with new features and capabilities that in turn require faster processors, which beget new software. The great wheel has turned, but it turns no more, as power constraints and device physics now limit the performance achievable with a single processor. Multicore chips, those with multiple lower-power processors per chip, are now the norm. Moreover, current multicore chips (those with 4-8 cores per chip) are but the beginning. We can expect hundreds of cores per chip in the future, with diverse functionality (graphics, packet protocol processing, DSP, cryptography, and more).
The software research challenge is clear: developing effective programming abstractions and tools that hide the diversity of multicore chips and features while exploiting their performance for important applications. Hence, we need a vibrant community of researchers exploring diverse approaches to parallel programming (languages, libraries, compilers, tools) and their applicability to multiple application domains.”
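To make Reed’s point concrete, here is a minimal sketch in C++ (our illustration, not from the post) of the kind of library abstraction he describes: a parallel reduction that adapts to however many cores the hardware reports, so the caller never mentions a core count. The function name parallel_sum and the chunking scheme are our own assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum `data` using one worker per available hardware core. The caller
// never names a core count; the library discovers it at run time.
long long parallel_sum(const std::vector<int>& data) {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(n_threads, 0);   // one slot per worker
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / n_threads + 1;
    for (unsigned t = 0; t < n_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(begin + chunk, data.size());
            for (std::size_t i = begin; i < end; ++i)
                partial[t] += data[i];   // each worker writes only its own slot
        });
    }
    for (auto& w : workers) w.join();

    // Fold the per-core partial sums serially.
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(1000000, 1);
    std::cout << parallel_sum(data) << '\n';   // prints 1000000 on any core count
}
```

The point of the sketch is the interface rather than the implementation: a caller writes parallel_sum(data), and the mapping of that work onto 4, 8, or someday hundreds of cores stays hidden behind the abstraction, which is exactly the property Reed argues research must deliver.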