Course Announcement: Dataflow Architectures and Languages at MIT
From: firstname.lastname@example.org (Shail Aditya Gupta)
Date: Thu, 7 Jun 90 01:13:23 GMT
Organization: MIT Lab for Computer Science, Cambridge, Mass.
Enclosed is an announcement for a 1-week course on Dataflow
Architectures and Languages that we will be teaching at MIT this
summer. It may be of interest to you or to your colleagues. Thank you.

Arvind and Rishiyur S. Nikhil, MIT
---------------- COURSE ANNOUNCEMENT: PLEASE POST ----------------
Parallel Computing: Dataflow Architectures and Languages
(with Programming Laboratory)
Monday, July 23 through Friday, July 27, 1990
Massachusetts Institute of Technology
Summer Session Program 6.83s
Parallel computing is faced with both a programming crisis and an
architectural crisis, although the latter is not widely recognized. This
course presents approaches to both issues--- the former via Functional
Languages and the latter via Dataflow Architectures.
In any highly parallel machine, large memory latencies and frequent
synchronization events are unavoidable. To avoid idling during a long-latency
request, it is necessary for the processor to rapidly switch to another thread
of control. This, in turn, implies that: (a) each processor should have a
sufficient supply of threads to keep it busy, (b) thread-switching should be
extremely cheap, and (c) there should be a fast association of memory
responses with waiting threads (responses may not arrive in order). These are
in fact also exactly the capabilities necessary for fast, cheap
synchronization, and are not addressed satisfactorily in most current and
proposed architectures (the HEP and Horizon are notable exceptions).
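The three capabilities above can be made concrete with a small sketch. The following Python fragment (an illustration of the general idea, not any particular machine's implementation; all names are made up) tags each outstanding long-latency request with a thread descriptor, so that responses arriving in any order can still resume the correct waiting thread:

```python
# Illustrative sketch: hiding memory latency by tagging each outstanding
# request with a thread descriptor, so responses may arrive out of order
# and still be matched with the thread waiting for them.
import random

def run(requests):
    # waiting maps a thread descriptor (tag) to the continuation to resume.
    waiting = {}
    results = {}
    for tag, continuation in requests:
        waiting[tag] = continuation            # issue the request, then move on
    # Responses come back in arbitrary order, each carrying its tag.
    tags = list(waiting)
    random.shuffle(tags)
    for tag in tags:
        value = tag * 10                       # stand-in for the memory response
        results[tag] = waiting.pop(tag)(value) # resume the waiting thread
    return results

res = run([(1, lambda v: v + 1), (2, lambda v: v + 2)])
```

Because each response carries its descriptor, the matching step is a cheap table lookup regardless of arrival order.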
Dataflow machines address these issues directly. Instruction-level forks and
joins (so machine code is a dataflow graph) make it possible for each
processor to have thousands of threads ready to execute at each instant.
Thread-scheduling is data-driven--- an instruction is never attempted until
its data are known to be available. All long-latency requests (including
remote memory reads) are ``split-phase'' transactions: a request carries with
it a thread descriptor which is returned to the processor along with the
response. Thus, out-of-order responses are not a problem, and threads may be
freely detained at memory locations where the data are not yet ready. These
architectural principles are universal--- they are not specific to any
particular language model.
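A toy data-driven scheduler makes the firing rule concrete. This is a sketch of the general principle only, not Id's or any MIT machine's actual runtime; the graph encoding is invented for illustration. An instruction fires exactly when all of its operands are available:

```python
# Toy data-driven execution: each instruction fires only once all of its
# operand slots are filled, so the program behaves as a dataflow graph.
def execute(graph, inputs):
    # graph: name -> (operation, operand names); inputs: name -> value
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        fired = False
        for name, (op, args) in list(pending.items()):
            if all(a in values for a in args):      # operands available?
                values[name] = op(*(values[a] for a in args))
                del pending[name]
                fired = True
        if not fired:
            raise RuntimeError("deadlock: no instruction enabled")
    return values

# (a+b) * (a-b): the + and - instructions are enabled concurrently
# as soon as a and b arrive; * fires once both partial results exist.
g = {
    "sum":  (lambda x, y: x + y, ("a", "b")),
    "diff": (lambda x, y: x - y, ("a", "b")),
    "prod": (lambda x, y: x * y, ("sum", "diff")),
}
out = execute(g, {"a": 5, "b": 3})
```

The forks and joins here are per-instruction, which is what lets a real dataflow processor keep thousands of threads enabled at once.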
Several dataflow machines are being built today. These architectures
borrow much from traditional architectures, but they take the most
aggressive approach to multi-threading.
How can we utilize a highly parallel machine effectively? Automatic discovery
of sufficient parallelism in sequential programs does not appear to be viable.
A major problem with many parallel languages is non-determinacy--- the user
must insert adequate synchronization to avoid read-write races. Further, it
can be quite impractical to partition, schedule and synchronize the numerous
small processes necessary in a machine with hundreds or thousands of
processors.
A more ambitious approach is to write programs in a functional language, where
all parallelism is implicit. All arguments of a function may be evaluated
concurrently (in fact, even concurrently with the function body), and
producers and consumers of data structures may be executed concurrently. When
systematically exploited, this level of parallelism is practically impossible
to obtain in any other system.
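In Id this parallelism is implicit; the sketch below spells out just one piece of it (concurrent argument evaluation) explicitly in Python, using a thread pool. It is an illustration of the idea, not how a dataflow implementation actually works:

```python
# Sketch: evaluating all of a function's argument expressions concurrently,
# then applying the function. In a dataflow language this is implicit.
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(f, *thunks):
    # Each thunk is an unevaluated argument expression.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in thunks]
        return f(*(fut.result() for fut in futures))

result = parallel_apply(lambda x, y: x + y,
                        lambda: sum(range(100)),   # argument 1
                        lambda: sum(range(50)))    # argument 2
```

Note that explicit thunks and pools are exactly the bookkeeping a functional language with implicit parallelism spares the programmer.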
Two major criticisms of functional languages have centered around the lack of
efficient array manipulation and the inability to express non-deterministic
access to shared resources. The language Id, at MIT, addresses these issues
using two innovative concepts called I-structures and Managers, respectively.
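An I-structure cell can be approximated as a write-once location where a read issued before the write is deferred until the value arrives. The class below is a minimal sketch of that semantics in Python (an analogy, not MIT's hardware or Id's actual construct):

```python
# Minimal I-structure-like cell: write-once, with reads that block
# (are deferred) until the producer has written the value.
import threading

class IStructureCell:
    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def write(self, value):
        if self._ready.is_set():
            raise RuntimeError("I-structure cells are write-once")
        self._value = value
        self._ready.set()          # release any deferred readers

    def read(self):
        self._ready.wait()         # defer until the value is available
        return self._value

cell = IStructureCell()
out = []
reader = threading.Thread(target=lambda: out.append(cell.read()))
reader.start()                     # read issued before the write: it waits
cell.write(42)
reader.join()
```

This is what lets a producer and a consumer of an array run concurrently: each element becomes readable the moment it is written, with no race.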
Outline of the Course:
Languages:
Programming with higher-order functions and non-strict data
structures; Rewrite rules and reduction; Algebraic and abstract data
types; Arrays and I-structures; Resource management programs.
Architectures:
Fundamental issues in high-performance parallel architectures;
Dataflow instruction execution mechanism; Static and dynamic dataflow
machines; I-structure memory; Multi-threaded architectures; Hybrid von
Neumann/dataflow architectures.
Compilation:
Dataflow program graphs; Translation to dataflow graphs;
Lambda-lifting and supercombinators; Loop, array and procedure call
optimization; Data-driven and demand-driven evaluation.
Resource management and performance:
Resource managers; Experimental results on MIT dataflow machines.
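A resource manager in the sense used above serializes non-deterministic access to a shared resource by funnelling all requests through a single process. The sketch below shows the shape of the idea in Python (the queue-based encoding is an assumption for illustration, not Id's Manager construct):

```python
# Sketch of a manager process: one thread owns the shared resource and
# consumes a stream of requests, serializing all access to it.
import queue
import threading

def counter_manager(requests, replies):
    count = 0                      # the shared resource, owned by the manager
    while True:
        req = requests.get()
        if req is None:            # shutdown sentinel
            break
        count += req               # only the manager ever touches count
        replies.put(count)

requests, replies = queue.Queue(), queue.Queue()
mgr = threading.Thread(target=counter_manager, args=(requests, replies))
mgr.start()
for i in (1, 2, 3):
    requests.put(i)
requests.put(None)
mgr.join()
totals = [replies.get() for _ in range(3)]
```

Clients may send requests in any order and concurrently; the manager alone decides the order in which they take effect, which confines the non-determinism to one place.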
Morning and afternoon lecture sessions will be followed by late-afternoon
laboratory sessions in writing, debugging, running and analyzing the
performance of Id programs on a real dataflow machine and on software
simulators.
The Target Audience:
Understanding dataflow principles can benefit users and designers of
all parallel systems, i.e., parallel languages, architectures,
compilers and resource managers. In addition to computer scientists
and electrical engineers, the course is also useful for people working
in scientific programming, signal processing and real-time computing.
The tuition charge ($1,500) covers all program materials including
notes, reprints of important publications, and lecture transparencies.
Academic credit is not offered.
The program will be taught by Professors Arvind and R. S. Nikhil of
the MIT Department of Electrical Engineering and Computer Science.
Experienced assistants will be available in the laboratory.
FOR MORE INFORMATION:
For a more detailed brochure (including application forms and
information about housing and fees), please contact:
Professor Arvind or Professor R.S.Nikhil,
MIT Laboratory for Computer Science
545 Technology Square, Cambridge, MA 02139, USA
email@example.com Tel: (617)-253-6090
firstname.lastname@example.org Tel: (617)-253-0237