Parallel Processing Computer Architectures - course announcement



Newsgroups: comp.parallel,comp.compilers
From: anne@hing.lcs.mit.edu (Anne McCarthy)
Summary: MIT Summer Session Course 6.48s
Keywords: architecture, parallel, courses
Organization: MIT Laboratory for Computer Science
Date: Fri, 7 May 1993 18:02:25 GMT

Parallel Processing Computer Architectures
or
How to Program, Design, and Evaluate
Parallel Supercomputers


Monday, July 26 through Friday, July 30, 1993
M.I.T. - Summer session course 6.48s


-------------------------------------
Course Abstract


Parallel supercomputers have recently begun to gain acceptance as a
cost-effective alternative to sequential computers when tackling large
compute-intensive problems, and parallel processing has become the current
hot topic in computer research. The Supercomputing '91 and '92 conferences
saw the unveiling of a host of parallel supercomputers from Cray, KSR,
Thinking Machines, Alliant, Intel, Motorola, and others. Smaller parallel
computers from Silicon Graphics, Encore, Sequent, DEC, and IBM have been in
the marketplace for many years. As we gain experience with these machines,
it is becoming increasingly clear that it takes much more than a purchase
order to make use of their potential power. This year, based on
feedback from previous participants, the course will include detailed
discussions and comparisons of commercial parallel machines.


Using examples from commercially available supercomputers, such as the
KSR1, the Thinking Machines CM-5, and the Intel Delta, this course
addresses fundamental issues raised at all levels in using and designing
parallel supercomputers, including software and hardware. Software
components discussed include applications programming, languages,
compilation, and the runtime system. Hardware components discussed
include interconnection networks, memory systems, and processors. Each of
these components is complicated by new issues raised in the multiprocessor
environment.


The above issues will be discussed using cost-performance and programming
effort as the fundamental metrics. We believe that raw MIPS is an
inadequate measure of computer performance, and that programming effort is
as important as program running time.


The two major themes of the course are managing communication locality and
designing balanced systems. The fundamental performance-limiting issue in
large multiprocessors is communication. The way to alleviate the
communications problem is to make use of LOCALITY, both in space and in
time. Attention to locality must be paid at all levels of the system; a
non-local algorithm won't run well on any parallel machine, but even the
most local algorithm won't run well if the language obscures its locality,
or if the compiler, the runtime system, or the underlying hardware cannot
exploit that locality.
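

As a small illustration of spatial locality (a sketch written for this
announcement, not taken from the course materials), consider two C loops
that sum the same matrix. C stores rows contiguously, so the row-major
loop touches consecutive addresses, while the column-major loop strides
through memory and defeats the cache:

#include <stdio.h>

#define N 1024
static double a[N][N];

/* Row-major traversal: consecutive addresses, good spatial locality. */
double sum_by_rows(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: a stride of N doubles per access, poor locality. */
double sum_by_cols(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%g %g\n", sum_by_rows(), sum_by_cols());
    return 0;
}

On a multiprocessor the same principle extends to communication: arranging
data so that each processor mostly touches memory on its own node plays
the role that cache-friendly traversal plays in this single-node sketch.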


In any good piece of engineering, the subcomponents of a large system are
well matched to each other. For example, improving processor performance
is useless if the communications network doesn't provide the bandwidth
necessary to keep the processors busy. Similarly, improving network
bandwidth (without improving best-case latency) won't help if processors
can't issue network requests fast enough to use the increased bandwidth
effectively. Therefore, this course focuses on system-wide design
tradeoffs between all major subsystems including both hardware and
software.
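

To make the balance argument concrete, here is a back-of-the-envelope
check (all numbers are illustrative assumptions, not figures for any
machine named in this announcement) that compares the bandwidth the
processors can demand with the bandwidth the network can supply:

#include <stdio.h>

int main(void) {
    /* Illustrative assumptions only. */
    double processors    = 256;     /* nodes in the machine                  */
    double remote_refs   = 2.0e6;   /* remote memory requests per node per s */
    double bytes_per_ref = 32;      /* a cache-line-sized message            */
    double net_bandwidth = 2.0e10;  /* aggregate network bandwidth, bytes/s  */

    double offered = processors * remote_refs * bytes_per_ref;

    printf("offered load:      %.2e bytes/s\n", offered);
    printf("network bandwidth: %.2e bytes/s\n", net_bandwidth);
    if (offered <= net_bandwidth)
        printf("balanced: the network can keep the processors busy\n");
    else
        printf("unbalanced: processors will stall waiting on the network\n");
    return 0;
}

Doubling processor speed in this sketch doubles the offered load; unless
network bandwidth grows with it, the extra processor performance is
wasted, which is exactly the mismatch described above.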


------------------------------------------------------------------------


Target audience


As the title suggests, this course is for engineers and scientists
interested in programming, designing, implementing, or evaluating parallel
machines. Due to the emphasis this year on commercial parallel machines,
this course would be particularly valuable to managers who are interested
in buying a commercial machine. The course will cover a wide range of
topics in parallel applications, computer architecture, and programming.
Familiarity with the basic concepts of computer architecture and knowledge
of a programming language such as C or FORTRAN are expected.
Although not required, experience with parallel machines or applications
would be an asset.


From our past experience, we expect an audience with diverse backgrounds.
At offerings of this course at MIT, and at its previous offerings as a
Summer Session course in 1990, 1991, and 1992, the audience was a
mix from a wide range of disciplines, including electrical engineering and
computer science, mechanical engineering, ocean engineering, physics, and
structural engineering. Participants' interests ranged from parallel
programming, operating systems, and compilers, to hardware, circuits, and
packaging.


---------------------------------------------------------------------


Course Outline


The desire to efficiently solve important problems drives parallel
computer design. Consequently, after a course introduction and overview,
we will discuss several typical applications classes, and focus on one or
more of them to concretely illustrate the tradeoffs in programming or
evaluating various machines. We will then look at programming models that
parallel machines typically support and investigate their suitability for
the various applications. We will then take a quick detour into
instrumentation, covering several methods for evaluating the performance
of parallel machines, before delving into design.


The latter part of the course will explore design issues for various
parallel machine components. After reviewing currently available
technology and its impact on architectural choices, we will discuss
interconnection networks, caches and memory systems, processors, runtime
software, and compilers. We will wrap up with a case study of the ALEWIFE
multiprocessor, highlighting practical issues and design tradeoffs.


Introduction
        - What you should know if you're involved with parallel processing
        - Driving forces
                - Applications provide wish lists
                - Technology imposes constraints on what can be achieved
        - The importance of the "systems approach"


What's Important - the Basic Issues
        - Communications
        - Synchronization
        - How various computing styles arise
        - Computing styles of commercial machines


Applications and Algorithms
        - Continuum, particle, graph
        - Parallelizing algorithms


Computational Models and Commercial Machines
        - What's in a computing model
        - Shared memory (KSR1)
        - Message passing (Intel Delta)
        - Data parallel (CM-5)
        - Dataflow


Performance Analysis Methods
        - Getting data, address tracing
        - Analytical modeling
        - Simulation


Designing Parallel Machines
        - Whirlwind tour of technology
        - Interconnection Networks
                - Direct mesh and cube networks
                - Indirect Omega networks
                - Bandwidth and latency
        - Memory Systems
                - Scalable memory organizations
                - Caches and the coherence problem
                - Snooping, directories
        - Processor architectures
        - Compilation Technology
                - Partitioning
                - Placement
        - Runtime System
                - Scheduling
        - Putting it All Together -- A Case Study
                - The MIT ALEWIFE multiprocessor system


---------------------------------------------------------------------


Laboratory


This course will include a substantial laboratory. Course participants
will conduct experiments with multiprocessor simulators that we will
provide. Attendees will also write programs in parallel C. We are
hoping that a Thinking Machines CM-5 will become available for hands-on
experience with a real multiprocessor.
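

This announcement does not specify which dialect of parallel C the
laboratory will use, so purely to give a flavor of the kind of warm-up
exercise involved, here is a minimal sketch in plain C with POSIX threads
that sums an array in parallel, one contiguous slice per thread:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];
static double partial[NTHREADS];

static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS);
    long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)   /* each thread owns a contiguous slice */
        s += data[i];
    partial[id] = s;                 /* one slot per thread, so no locking  */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1.0;
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);     /* expect 1000000.0 */
    return 0;
}

Each thread writes only its own element of the partial-sum array, and the
main thread combines the results after joining, so no synchronization is
needed beyond thread creation and join.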


---------------------------------------------------------------------


Instructors


The course will be taught by Professor Anant Agarwal from the MIT
Laboratory for Computer Science in the Department of Electrical
Engineering and Computer Science. Professor Agarwal leads the ALEWIFE
project, which is designing and building a large-scale multiprocessor.
For the laboratory projects, we will have the
assistance of graduate students.


---------------------------------------------------------------------


FOR MORE INFORMATION:


For a more detailed brochure (including application forms and information
about housing and fees), please contact:


Anne McCarthy
Room 624
MIT Laboratory for Computer Science
545 Technology Square, Cambridge, MA 02139, USA


E-mail: anne@lcs.mit.edu
Tel: (617) 253-2629
Fax: (617) 253-7359