From: Peter Gammie <peteg42@gmail.com>
Newsgroups: comp.compilers
Date: 19 Jun 2005 11:07:43 -0400
Organization: Compilers Central
References: 05-06-081 05-06-093
Keywords: dataflow, parallel
Posted-Date: 19 Jun 2005 11:07:43 EDT
On 19/06/2005, at 1:20 PM, George Neuner wrote:
> [..]
> For implementation ideas relevant to dataflow I guess I would look at
> things like Linda, Occam, MultiLisp and Mul-T, OPS-5, Prolog, Haskell
> and Erlang. None of these are dataflow languages per se, but they are
> easily adaptable to the model and you might get some clues from
> looking at them.
Just to add two pointers to George's list:
- There is a body of work building on Wadge's Lucid:
http://i.csc.uvic.ca/home/hei/hei.ise%3Ctop:lucid%3E
including Lustre and Lucid Synchrone. My understanding is that these
languages are intended to be compiled into sequential code, i.e.
existing implementations resolve all concurrency at compile-time. If
you simplify these models by throwing out the clock calculus, you end
up with Claessen / Sheeran / Singh's Lava hardware description
language, which I think is a good place to start.
- There was a lot of work done by people at MIT in the 80s on
dataflow hardware and what language support should look like. Google
for Arvind or "MIT dataflow". Here the goal is to recover the
parallelism implicit in a program and exploit locality to avoid the
von Neumann bottleneck. I half-remember hearing their ideas made it
into one of the Alpha chips (but someone correct me if they know the
story, please).
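To give a flavour of the stream-based style Lucid pioneered, here is a toy Python sketch. The names `fby` and `nats` follow Lucid's usual presentation, but the code itself is my own illustration, not taken from any of the languages above:

```python
from itertools import islice

def fby(x, xs):
    """Lucid's 'followed by': the stream whose first value is x,
    and whose remaining values come from the stream xs."""
    yield x
    yield from xs

def nats():
    """The Lucid definition  nats = 0 fby (nats + 1),
    written as a recursive generator."""
    yield from fby(0, (n + 1 for n in nats()))

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```

The point of the sketch is that a "variable" denotes an entire stream of values over time, and operators like `fby` are temporal; a Lustre or Lucid Synchrone compiler would turn such definitions into a sequential loop rather than evaluating them lazily as Python does here.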
You may also care to look at what Apple's doing with CoreAudio,
CoreVideo and Core<somethingelse>. In brief: all these are pipelined
(== dataflow-oriented) rendering libraries that use some tricks to
speed things up, such as demand-driven computation (aka lazy
evaluation).
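The demand-driven idea is easy to sketch with generators; this toy Python example has nothing to do with Apple's actual APIs (the stage names are invented), it just shows how a pipeline can defer work until a consumer pulls a value:

```python
# A toy demand-driven (lazy) pipeline: generators stand in for
# rendering stages, and no stage runs until a value is demanded.

def source(n):
    """Produce n 'frames' (plain integers here)."""
    for i in range(n):
        yield i

def scale(frames, factor):
    """A transform stage that scales each frame, lazily."""
    for f in frames:
        yield f * factor

pipeline = scale(source(1_000_000), 2)

# Demand-driven: only the three frames actually pulled are computed,
# even though the source nominally has a million of them.
first_three = [next(pipeline) for _ in range(3)]
print(first_three)  # [0, 2, 4]
```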
cheers
peter