Reference Works wanted: compiler techniques for parallel hardware? email@example.com (Ray Dillinger) (2005-07-02)
Re: Reference Works wanted: compiler techniques for parallel hardware? firstname.lastname@example.org (George Neuner) (2005-07-05)
Re: Reference Works wanted: compiler techniques for parallel hardware? email@example.com (dz) (2005-07-11)

From: Ray Dillinger <firstname.lastname@example.org>
Date: 2 Jul 2005 20:21:52 -0400
Are there any well-developed strategies for developing languages that
take advantage of parallel hardware?
When people want to take advantage of DSPs or graphics coprocessors
that do a lot of parallel and/or dataflow work, it seems like they
almost always wind up going down to the bare wires and coding in
assembler. IOW, the compiler writers have so far been failing these
people. And it's hard to imagine language constructs that allow such
hardware to be used in a general and efficient way.
So, I'm looking for reference works: have any books been written about
the design of languages to support parallel and dataflow operations on
massively concurrent hardware like DSPs, or about how compilers for
them are implemented? It seems like code generation would become a
[It is my impression that DSPs and graphics processors are quirky enough
that it's difficult to come up with code generation strategies that
produce code you'd actually want to use. For conventional vector and
parallel machines, there's plenty of work that compiles Fortran and C,
perhaps with some hints in the source code. -John]