Designing a language for dataflow/parallelism


From: Peter Gammie <peteg42@gmail.com>
Newsgroups: comp.compilers
Date: 26 Jun 2005 11:21:30 -0400
Organization: Compilers Central
References: 05-06-081 05-06-093 05-06-096 05-06-098
Keywords: parallel
Posted-Date: 26 Jun 2005 11:21:30 EDT

George,


On 21 Jun 2005 13:51:51 -0400, George Neuner <gneuner2@comcast.net>
wrote:


> Lucid looks quite interesting, but it seems to be based on a systolic
> processing model - the clocking, though implicit, is visible to the
> programmer and if you remove it, everything falls apart.


I'm not too familiar with Lucid, but all the recent synchronous
dataflow languages have explicit clocks. The idea is to provide nice
support for programming reactive systems, which need to be
finite-state in order to keep reacting. The clocks ensure that buffers
have statically bounded sizes and that the network cannot deadlock, so
everything can be scheduled at compile time and run quite efficiently.
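To illustrate the flavour of it, here is a minimal sketch in Haskell
(my own toy names - Stream, sCount, sWhen - not taken from Lucid,
Lustre or any real synchronous language): a stream is a lazy list
whose n-th element is the value at clock tick n, and sampling on a
slower boolean clock gives streams with statically known relative
rates, which is what lets a compiler bound the buffers and schedule
the network.

-- A synchronous stream: one value per tick of the base clock.
type Stream a = [a]

-- A counter node: 0, 1, 2, ...
sCount :: Stream Int
sCount = 0 : map (+ 1) sCount

-- Sampling on a slower clock: keep a value only when the boolean
-- clock stream is True. The result runs at a statically known
-- fraction of the base clock rate.
sWhen :: Stream Bool -> Stream a -> Stream a
sWhen cs xs = [x | (c, x) <- zip cs xs, c]

-- Every second tick of the base clock.
everyOther :: Stream Bool
everyOther = cycle [True, False]

main :: IO ()
main = print (take 5 (sWhen everyOther sCount))  -- [0,2,4,6,8]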


> As I understood it (a long time ago, mind you) the dataflow model was
> based on asynchronous logic with registers, conceived originally as a
> method to aggressively exploit inherent parallelism in programs and to
> transparently scale programs to the expected [massively] parallel
> hardware. It was a low level, run time model meant to be suitable as
> a common base for language implementation.


This was my second reference, to the work done at MIT. I am (even)
less familiar with it.


Just for the record (or Google), some people at Berkeley talk about
"dataflow process networks", which are perhaps closer to what you have
in mind.
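Roughly (and this is only a sketch of the model in Haskell, not of any
Berkeley tool such as Ptolemy): each process is a deterministic
function from input streams to output streams, channels are unbounded
FIFOs (modelled here as lazy lists), and there is no global clock, so
data flows whenever it becomes available.

-- A channel is an unbounded FIFO of tokens.
type Channel a = [a]

-- Process 1: a source emitting the naturals.
source :: Channel Int
source = [1 ..]

-- Process 2: a running sum over its input channel.
runningSum :: Channel Int -> Channel Int
runningSum = scanl1 (+)

-- Process 3: a consumer that scales its input.
scale :: Int -> Channel Int -> Channel Int
scale k = map (k *)

-- Wiring the network: composition of processes over channels.
network :: Channel Int
network = scale 10 (runningSum source)

main :: IO ()
main = print (take 5 network)  -- [10,30,60,100,150]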


One small nit:


> Though I'm not aware of any modern language whose run time model
> embodies all of the above, many employ significant parts of it. The
> FPLs as a group are the closest, most being based on graph
> traversal/reduction and templated generics and many supporting pattern
> matched execution - although they reverse the control, driving
> execution from the graph rather than from pattern matching.


What drives computation in Haskell (the purportedly canonical lazy
FPL) is the need to know what something evaluates to, and that need
arises from having to decide which way to branch in a "case"
(pattern-matching) expression... so you could say execution is driven
by pattern matching. [*] I might be misunderstanding you, though.
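A small example of what I mean (names are my own): the case expression
has to decide which branch to take, and that decision is exactly what
forces the scrutinee to be evaluated, and only as far as its outermost
constructor (weak head normal form).

firstOrZero :: [Int] -> Int
firstOrZero xs =
  case xs of        -- inspecting xs forces it just far enough to see
    []      -> 0    -- whether the outermost constructor is [] or (:)
    (y : _) -> y    -- the tail is never demanded here

main :: IO ()
main = do
  -- The undefined tail is never matched on, so this prints 1.
  print (firstOrZero (1 : undefined))
  print (firstOrZero [])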


I don't think strict functional languages need to go to the trouble of
using graph reduction, but perhaps someone else should comment on
that.


cheers,
Peter.


[*] This is not explicitly in the standard as far as I know, but is
implied in various places.

