Re: Designing a language for dataflow/parallelism

George Neuner <gneuner2@comcast.net>
21 Jun 2005 13:51:51 -0400


Related articles
Designing a language for dataflow/parallelism tony@tonyRobinson.com (2005-06-13)
Re: Designing a language for dataflow/parallelism torbenm@diku.dk (2005-06-16)
Re: Designing a language for dataflow/parallelism nmm1@cus.cam.ac.uk (2005-06-16)
Re: Designing a language for dataflow/parallelism mailbox@dmitry-kazakov.de (Dmitry A. Kazakov) (2005-06-16)
Re: Designing a language for dataflow/parallelism rand@rice.edu (Randy) (2005-06-18)
Re: Designing a language for dataflow/parallelism gneuner2@comcast.net (George Neuner) (2005-06-18)
Re: Designing a language for dataflow/parallelism peteg42@gmail.com (Peter Gammie) (2005-06-19)
Re: Designing a language for dataflow/parallelism gneuner2@comcast.net (George Neuner) (2005-06-21)
Designing a language for dataflow/parallelism peteg42@gmail.com (Peter Gammie) (2005-06-26)

From: George Neuner <gneuner2@comcast.net>
Newsgroups: comp.compilers
Date: 21 Jun 2005 13:51:51 -0400
Organization: Compilers Central
References: 05-06-081 05-06-093 05-06-096
Keywords: parallel
Posted-Date: 21 Jun 2005 13:51:51 EDT

On 19 Jun 2005 11:07:43 -0400, Peter Gammie <peteg42@gmail.com> wrote:


>- There is a body of work building on Wadge's Lucid:
>
>http://i.csc.uvic.ca/home/hei/hei.ise%3Ctop:lucid%3E
>
>including Lustre and Lucid Synchrone. My understanding is that these
>languages are intended to be compiled into sequential code, i.e.
>existing implementations resolve all concurrency at compile-time. If
>you simplify these models by throwing out the clock calculus, you end
>up with Claessan / Sheeran / Singh's Lava hardware description
>language, which I think is a good place to start.


Lucid looks quite interesting, but it seems to be based on a systolic
processing model: the clocking, though implicit, is visible to the
programmer, and if you remove it, everything falls apart.


As I understood it (a long time ago, mind you), the dataflow model was
based on asynchronous logic with registers, conceived originally as a
method to aggressively exploit inherent parallelism in programs and to
transparently scale programs to the expected [massively] parallel
hardware. It was a low-level, run-time model meant to be suitable as
a common base for language implementation.




Dataflow was supposed to exploit available parallelism wherever
possible in a program. At the run time level, variables were "pipes"
or "wires" linking functions and all functions were conceptually
executed concurrently, blocking if input was not available. Keep in
mind that "functions" in the dataflow execution model were intended to
be *low-level* operations directly executable in hardware - not user
written functions.
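The pipes-and-concurrent-functions model described above can be
sketched in a few lines of Go, where channels stand in for the
single-use linkage variables and goroutines for the concurrently
executing operations. This is only an illustration of the model, not
any real dataflow system's run time:

```go
package main

import "fmt"

// A minimal sketch of the execution model described above: each
// "function" is a low-level operation run concurrently, variables are
// single-use pipes (modeled here as Go channels), and an operation
// blocks until its inputs are available.

// addNode is one conceptual dataflow function: it consumes one value
// from each input pipe and produces their sum on its output pipe.
func addNode(a, b <-chan int, out chan<- int) {
	out <- <-a + <-b // blocks until both inputs have arrived
}

func main() {
	a, b, sum := make(chan int), make(chan int), make(chan int)

	go addNode(a, b, sum) // the function "instance" runs concurrently

	// Producers fire in any order; the adder waits for both.
	go func() { a <- 3 }()
	go func() { b <- 4 }()

	fmt.Println(<-sum) // prints 7
}
```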


As I said in a previous post, the dominant execution model was
bag-resolution, usually combined with a graph walker. In one
experimental system I am aware of, the compiler produced a graph of
function templates. The run time system walked the graph generating
descriptors for linkage variables (the pipes between functions) and
"instanced" functions bound to specific linkage variables. These
descriptors and static data, also tagged with descriptors, were placed
in a "bag" to be munched on by a resolution engine. The engine
matched up descriptors and data, dispatching functions for execution
when their inputs were available and advancing the program along any
path possible until either all paths blocked or the program
terminated. Linkage variables were single use temporaries, consumed
by the functions that referenced them. Data produced as output was
tagged with a descriptor and thrown back in the bag. I don't
recall exactly how loops were handled but I think the resolution
engine recognized loops and the graph walker could generate new
instances of the loop's functions, along with new linkage variables,
on demand.
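The resolution loop described above can be sketched roughly as
follows. All names here are invented for illustration; the point is
only the mechanism: tokens carry descriptors naming the linkage
variable they feed, and the engine repeatedly dispatches any instanced
function whose inputs are all in the bag, consuming the inputs and
throwing the tagged result back in, until all paths block:

```go
package main

import "fmt"

// token is a datum tagged with a descriptor naming the linkage
// variable (pipe) it belongs to.
type token struct {
	dest  string
	value int
}

// instance is an instanced two-input function bound to specific
// linkage variables, as generated by the graph walker.
type instance struct {
	left, right string // linkage variables this instance consumes
	out         string // linkage variable it produces
	op          func(a, b int) int
}

// resolve is the resolution engine: it matches descriptors and data,
// dispatching functions when their inputs are available and advancing
// until no instance can fire or the program terminates.
func resolve(bag []token, funcs []instance) map[string]int {
	values := map[string]int{}
	for _, t := range bag {
		values[t.dest] = t.value
	}
	progress := true
	for progress {
		progress = false
		for _, f := range funcs {
			a, okA := values[f.left]
			b, okB := values[f.right]
			_, done := values[f.out]
			if okA && okB && !done {
				// Fire: inputs are single-use temporaries,
				// consumed here; the result is tagged with
				// its descriptor and thrown back in the bag.
				values[f.out] = f.op(a, b)
				delete(values, f.left)
				delete(values, f.right)
				progress = true
			}
		}
	}
	return values
}

func main() {
	bag := []token{{"x", 3}, {"y", 4}, {"z", 5}}
	funcs := []instance{
		{"x", "y", "t", func(a, b int) int { return a + b }},
		{"t", "z", "r", func(a, b int) int { return a * b }},
	}
	fmt.Println(resolve(bag, funcs)["r"]) // (3+4)*5 = 35
}
```

Note that the engine above polls sequentially for simplicity; a real
resolution engine would dispatch matched functions in parallel.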


Anyway, that is what I think of when I hear "dataflow". It may be out
of date or totally wrong ... but there it is.


Though I'm not aware of any modern language whose run-time model
embodies all of the above, many employ significant parts of it. The
FPLs as a group are the closest, most being based on graph
traversal/reduction and templated generics, and many supporting
pattern-matched execution - although they reverse the control, driving
execution from the graph rather than from pattern matching.


However, grokking the guts of an established FPL, e.g., Haskell, is
not very easy, and the seeds are also present in many other places.
For instance, Occam is a relatively simple procedural language, but it
offers parallel processes and allows variables to be used as typed,
unidirectional pipes between processes. Linda and its offspring are
coordination languages that use a [now virtual] bag and pattern
matching as their primary interprocess communication method - they
offer prototypical examples of asynchronous operation and of using
data to section programs for parallel execution. OPS-5 and Prolog use
pattern matching as their primary method of advancing execution.
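The Linda-style bag-plus-pattern-matching coordination mentioned above
can be sketched as a toy tuple space. The names (`out`, `in`) follow
Linda's conventional operation names, but this is a hand-rolled
illustration, not any real Linda implementation's API; nil fields in a
pattern act as wildcards:

```go
package main

import (
	"fmt"
	"sync"
)

// space is a toy tuple space: a shared bag of tuples that processes
// coordinate through, matching by pattern rather than by address.
type space struct {
	mu     sync.Mutex
	cond   *sync.Cond
	tuples [][]any
}

func newSpace() *space {
	s := &space{}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// out drops a tuple into the bag (asynchronous, never blocks).
func (s *space) out(t []any) {
	s.mu.Lock()
	s.tuples = append(s.tuples, t)
	s.mu.Unlock()
	s.cond.Broadcast()
}

// matches reports whether tuple t fits the pattern; nil is a wildcard.
func matches(pattern, t []any) bool {
	if len(pattern) != len(t) {
		return false
	}
	for i, p := range pattern {
		if p != nil && p != t[i] {
			return false
		}
	}
	return true
}

// in blocks until a tuple matching the pattern appears, then removes
// and returns it - the pattern-matched, consuming receive.
func (s *space) in(pattern []any) []any {
	s.mu.Lock()
	defer s.mu.Unlock()
	for {
		for i, t := range s.tuples {
			if matches(pattern, t) {
				s.tuples = append(s.tuples[:i], s.tuples[i+1:]...)
				return t
			}
		}
		s.cond.Wait() // block until some process does an out
	}
}

func main() {
	ts := newSpace()
	go ts.out([]any{"result", 42})   // a worker deposits a result
	t := ts.in([]any{"result", nil}) // the master takes any "result" tuple
	fmt.Println(t[1])                // prints 42
}
```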




George

