|Parallelizing C/C++ code email@example.com (Raphael Fonte Boa) (2010-04-29)|
|Re: Parallelizing C/C++ code firstname.lastname@example.org (russell kym horsell) (2010-05-01)|
|Re: Parallelizing C/C++ code email@example.com (Raphael Fonte Boa) (2010-05-03)|
|Re: Parallelizing C/C++ code firstname.lastname@example.org (russell kym horsell) (2010-05-06)|
|Re: Parallelizing C/C++ code email@example.com (George Neuner) (2010-05-07)|
|Re: Parallelizing C/C++ code firstname.lastname@example.org (Randy Crawford) (2010-05-14)|
|Re: Parallelizing C/C++ code email@example.com (kamal) (2010-05-27)|
|From:||George Neuner <firstname.lastname@example.org>|
|Date:||Fri, 07 May 2010 03:02:52 -0400|
|Organization:||A noiseless patient Spider|
|Keywords:||parallel, C , bibliography|
|Posted-Date:||09 May 2010 12:21:26 EDT|
On Thu, 29 Apr 2010 07:17:06 -0700 (PDT), Raphael Fonte Boa wrote:
>I have been searching for information about Compiler approaches to
>parallelization of code. I haven't found much information about it. I have
>looked into SUIF, CIL and Daedalus but I don't know if they have
>enough results to be used as an actual parallelization platform. Will
>the approach be dependent on the MoC like in Daedalus?
I assume that you are asking about function and task level parallelism
because compilers do already exploit parallelism at the instruction
and trace level, most now support some array data parallelism by
vectorization (using SIMD registers), and some can split array ops
across multiple CPUs by threading (also called "striping").
Information on instruction scheduling and on loop and array parallel
optimizations is pretty easy to find.
[btw: I'm using "task" very broadly to refer to any computation that
might be done concurrently. In particular a "task" need not be a
visible source level construct.]
>I am new to the area
But not new to compilers I hope.
>and would like to have some feedback from you about what is actually
>useful and what has already been dropped by the community.
I'm not aware of any approaches that have been abandoned ... but apart
from annotation driven strategies (e.g., OpenMP), most approaches to
task parallelism have innate language dependencies.
Functional, dataflow and constraint logic languages all have features
that make parallelism either apparent or easy to identify. In FP
languages you find things like single-assignment binding variables,
immutable data, 1st
class closures, garbage collection, etc. Logic programs are
collections of conditional rules over an associative memory, the rules
can be executed whenever their pre-conditions are met. Dataflow
programs are (conceptually at least) a collection of asynchronous
threads which communicate by rendezvous message passing, but dataflow
languages try to hide the threading and synchronization from the
programmer. Some logic and dataflow languages also provide immutable
data.
In general it is easier to extract parallelism from programs in these
kinds of languages than from general procedural languages like C and
C++.
[Most existing dataflow languages are either procedural or
procedural/functional, but their different syntax makes identifying
parallelism easier than in C.]
>The intention is to get some kind of parallelism analysis out of
I'll assume you can transliterate ideas about Fortran to ideas about
C. There has been much more work on parallelization of Fortran.
A few things to get you started:
http://www.ece.lsu.edu/jxr/pluto/index.html - describes Pluto, an open
source, source->source C parallelization tool. Includes links to the
tool and to technical papers about its operation.
"Detecting Parallelism in C Programs with Recursive Data Structures"
"On extracting coarse-grained function parallelism from C programs"
"How Much Parallelism is There in Irregular Applications?"
"Finding synchronization-free parallelism represented
with trees of dependent operations"
http://cnx.org/content/m32782/latest/ - this is a simple but good
primer on loop dependencies.
A CS lecture on statement level parallelism. There's a whole course
here but I haven't looked at most of it.
You'd also do well to get a good book on optimizing compilers for the
basics of parallel optimization. I happen to like:
Randy Allen & Ken Kennedy, "Optimizing Compilers for Modern
Architectures: A Dependence-based Approach", 2002, Morgan Kaufmann.