The problem of parallel computing is occupying the minds of a growing number of researchers. Why is this age-old concept so "hot" today? In the first installment of this series on the Computing Community Consortium (CCC) blog, David Patterson, Professor of Computer Science at UC Berkeley, gave his thoughts and his rationale for increased government funding to solve the multicore challenge. In this article, the second in the series of opinion pieces, Andrew Chien, Vice President and Corporate Technology Group Director for Intel Research, gives his perspective on the issue, with a particular focus on the challenges facing us in education and funding.
Multicore (parallelism) represents a fundamental challenge, and a fundamental change, for all of computing and computer science. The constraints of physics (nature loves parallelism) are surfacing and colliding with some basic tenets of our field. We have built our theory of computation and complexity primarily on sequence, in both control and state. Physics, and consequently circuits and architecture, now makes parallelism fundamentally cheaper than sequential execution, and that challenges us to broaden the foundations of computing so that parallelism becomes a first-class element. I believe this is a first-order challenge for the research community, one demanding a response in nearly all disciplines of computer science.
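To make "parallelism as a first-class element" concrete, here is a minimal sketch (my illustration, not from the article), using C with OpenMP as an assumed example of a programming model where parallel execution is expressed directly in the source rather than reconstructed from a sequential chain of steps:

```c
/* A minimal sketch assuming a compiler with OpenMP support,
 * e.g., gcc -fopenmp sum.c. All names here are illustrative. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Fill with known values so the result is easy to check. */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Parallelism stated as a first-class annotation: the iterations
     * are declared independent, and the reduction clause tells the
     * runtime how to combine the per-core partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

The point of the sketch is the contrast: a purely sequential formulation fixes one total order of operations, while here the programmer states independence and a combining rule, leaving the mapping onto cores to the runtime.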
Now let me turn to research funding in parallelism, which is a critical need in every area, from architecture and runtime systems through compilers and programming languages to algorithms and theory. We need major increases in funding and research activity in all of these areas. Governments must take the primary role in funding information technology research for the sake of long-term economic development and societal well-being. We would like to see aggressive, large-scale funding of long-range research in parallelism, and for the fruits of that research to be made broadly available for commercialization.