by Kelvin Nilsen, Atego, EETimes
In this first part of a three-part series on the use of the Java programming language, either as the main method of multicore software development or as an adjunct to traditional sequential C and C++ methodologies, Atego’s Kelvin Nilsen surveys some of the special issues that must be addressed when writing software for multiprocessor hardware and explains how Java can ease the transition.
As semiconductor manufacturers continue to shrink silicon circuit sizes, computer engineers are able to pack more sophisticated logic and larger caches onto each chip. Over the years, these transistors have been used to increase typical data path sizes from 4 bits to 64 bits; to add math coprocessors and specialized math instructions; and to implement multiple instruction dispatch, dynamic instruction scheduling, deep pipelines, superscalar operation, speculative execution, and branch prediction. They have also been used to expand the sizes of memory caches. Most of these hardware optimizations have served to automatically identify and exploit opportunities for parallel execution inherent in a single application’s instruction stream.