From: email@example.com (James Salsman)
Date: 6 Aug 89 22:12:50 GMT
In article <1989Aug6.firstname.lastname@example.org> email@example.com (Eric S. Raymond) writes:
> The software end is nowhere near this well-evolved. There are formalisms
> like CSP that allow us to discuss the behavior of parallel systems in a
> rigorous way, but no one has even come up with a convincing general attack on
> the higher-level problem -- automatic parallelization of algorithms expressed
> for a uniprocessor abstract machine in real languages (i.e. in the presence
> of multiple assignment, side-effects and data aliasing).
> There is the really *hard* problem!
Automatic parallelization under the conditions you describe is SO
difficult that the resulting code usually isn't sped up enough to justify
the added cost of the parallel hardware and the time spent doing the
compile.
At CMU, the Ergo project is working on compiling languages into
combinator graphs, the idea being that combinator graphs may be
reduced in parallel.
Combinators are related to the type-free lambda calculus, so in practice
the combinator graph reduction engine must suffer the efficiency blow of
some sort of type-tag system. In addition, combinator graphs must not
have local state (assignment or destructive data structure operations) if
they are to be parallelized. This is a *BIG* lose.
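To make the graph-reduction idea above concrete, here is a minimal sketch of sequential S/K combinator graph reduction in Python. The term representation (`App` nodes over atoms) and function names are illustrative, not anything from the Ergo project; a parallel reducer would fire independent redexes concurrently, which this toy deliberately omits.

```python
# Toy normal-order reduction of S/K combinator terms.
# Atoms are plain strings; 'S' and 'K' are the combinators.

class App:
    """Application node in the combinator graph."""
    def __init__(self, fun, arg):
        self.fun = fun
        self.arg = arg

def rebuild(head, rest):
    """Re-apply leftover arguments to a reduced head."""
    for a in rest:
        head = App(head, a)
    return head

def reduce_term(t):
    """Reduce the outermost redex repeatedly until none remains."""
    while isinstance(t, App):
        # Unwind the application spine to find the head and its arguments.
        spine, head = [], t
        while isinstance(head, App):
            spine.append(head)
            head = head.fun
        args = [node.arg for node in reversed(spine)]
        if head == 'K' and len(args) >= 2:
            # K x y -> x
            t = rebuild(args[0], args[2:])
        elif head == 'S' and len(args) >= 3:
            # S f g x -> (f x) (g x); note x is shared, not copied
            f, g, x = args[0], args[1], args[2]
            t = rebuild(App(App(f, x), App(g, x)), args[3:])
        else:
            return t  # no redex at the head; we are done
    return t

# S K K behaves as the identity: S K K x -> x
term = App(App(App('S', 'K'), 'K'), 'x')
print(reduce_term(term))  # -> 'x'
```

Note that the S rule shares the argument `x` between the two new applications rather than copying it; that sharing is exactly what makes the structure a graph rather than a tree, and it is also why destructive update of shared nodes wrecks parallel evaluation.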
I think the real solution is to begin widespread instruction in
parallel programming languages such as Occam, Linda, and *Lisp.
Disclaimer: I have nothing to do with the Ergo project.
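For readers unfamiliar with Linda, mentioned above: its model is a shared "tuple space" that processes coordinate through, using operations conventionally called out (deposit), rd (read), and in (withdraw). Below is a minimal single-process sketch of that model in Python; the class and its list-backed store are my own illustration, and a real Linda implementation runs these operations across concurrent processes with blocking matches.

```python
# Toy tuple space illustrating Linda's coordination primitives.
# None in a pattern acts as a wildcard ("formal") field.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern):
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup)
            ):
                return tup
        return None

    def rd(self, *pattern):
        """Read a matching tuple without removing it."""
        return self._match(pattern)

    def in_(self, *pattern):
        """Withdraw a matching tuple (in_ because `in` is a Python keyword)."""
        tup = self._match(pattern)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out("task", 1)
ts.out("task", 2)
print(ts.in_("task", None))  # -> ('task', 1)
print(ts.rd("task", None))   # -> ('task', 2)
```

The point of the model for parallelism is that workers never share variables, only tuples: a worker withdraws a task tuple, computes, and deposits a result tuple, so there is no aliased mutable state for a compiler to untangle.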
:James P. Salsman (jps@CAT.CMU.EDU)