Advance Program: Massively Parallel Computation, Frontiers '95


Newsgroups: comp.compilers
From: havlak@cs.umd.edu (Paul Havlak)
Keywords: conference, parallel, Fortran
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Date: Thu, 10 Nov 1994 01:51:01 GMT

Of particular interest to the comp.compilers community are the sessions on
compilers and the minisymposium on High Performance Fortran vendor
perspectives. --Paul




ADVANCE PROGRAM




FRONTIERS 95


The Fifth Symposium On
The Frontiers Of Massively Parallel Computation




February 6-9, 1995
McLean Hilton
McLean, VA




Sponsored by
IEEE Computer Society Technical Committee on Computer Architecture


In Cooperation with
NASA/Goddard Space Flight Center
University of Maryland Institute for Advanced Computer Studies
George Mason University




Frontiers '95 is the fifth in a series of meetings on massively parallel
computation, focusing on research related to systems scalable to many
hundreds of processors. The conference will include 56 original research
papers on the design, analysis, development, and use of massively parallel
computers, along with papers on parallel compilers and operating systems.
Other highlights include five panels, a minisymposium on High Performance
Fortran, invited speakers, six pre-conference tutorials, three
pre-conference workshops, parallel machine exhibits, a poster-session
reception, and a conference banquet.


Complete information about the symposium is available on the World-Wide Web at:


ftp://ftp.cs.umd.edu/pub/hpsl/frontiers95/front95.html


or on anonymous FTP at ftp.cs.umd.edu, file


pub/hpsl/frontiers95/advance.text


Table of Contents (search on capitalized phrases):
--------------------------------------------------


OUTLINE OF SCHEDULE


TECHNICAL PROGRAM (papers, panels, speakers)


TUTORIALS


WORKSHOPS


HOTEL INFORMATION


REGISTRATION FORM


ORGANIZING COMMITTEE




OUTLINE OF SCHEDULE
-------------------
        Sunday, Feb. 5       8:30 am -- 4:30 pm   Workshops


        Monday, Feb. 6       8:30 am -- 5:00 pm   Tutorials
                             8:30 am -- 4:30 pm   Workshops
                             4:30 pm -- 5:30 pm   Plenary Workshop Session
                             5:30 pm -- 7:00 pm   Welcoming Reception


        Tuesday, Feb. 7      8:45 am -- 5:30 pm   Technical Program
                             6:00 pm              Poster Session and Reception


        Wednesday, Feb. 8    9:00 am -- 6:30 pm   Technical Program
                             7:00 pm              Banquet and Speaker


        Thursday, Feb. 9     9:00 am -- 3:00 pm   Technical Program




TECHNICAL PROGRAM
-----------------


Tuesday, February 7


8:45-9:00 am Session
OPENING REMARKS: Joel Saltz, University of Maryland


9:00-10:00 am Session
SESSION 1: Invited Speaker: Ken Kennedy, Rice University


10:00-10:30 am Break


10:30 am-12 noon Concurrent Sessions
SESSION 2A: Algorithms I


o Efficient Parallelizations of a Competitive Learning Algorithm for Text
    Retrieval on the MasPar, I. Syu, S. D. Lang, K. Hua, University of Central
    Florida


o Parallelization of Two Breadth-First Search-Based Applications Using
    Different Message-Passing Paradigms: An Experimental Evaluation, S. Bae,
    S. Ranka, Syracuse University


o Many-to-many Personalized Communication with Bounded Traffic, S. Ranka,
    R. Shankar, K. Alsabti, Syracuse University


o A Data Parallel Algorithm for Boolean Function Manipulation, S. Gai, M.
    Rebaudengo, M. Sonza Reorda, Politecnico di Torino


SESSION 2B: Minisymposium: HPF Vendor Perspectives
Chair: J. Saltz, University of Maryland
Portland Group
Applied Parallel Research
DEC
Cray Research
Others TBA


12 noon-1:30 pm Conference Luncheon


1:30-3:00 pm Concurrent Sessions
SESSION 3A: Caches


o The Performance Impact of False Subpage Sharing in KSR-1, B. Cukic, F.
    Bastani, University of Houston


o A Multi-Cache Coherence Scheme for Shuffle-Exchange Network Based Shared
    Memory Multiprocessors, R. Omran, D.-L. Lee, York University


o MICA: A Mapped Interconnection-Cached Architecture, Y.-D. Lyuu, E.
    Schenfeld, NEC Research Institute


o Performance Analysis and Optimal System Configuration of Hierarchical
    Two-Level COMA Multiprocessors, T.-L. Hu, F. N. Sibai, University of Akron


SESSION 3B: Panel: Scalable I/O
Chair: P. Messina, Caltech/JPL
K. Kennedy, Rice University
J. Pool, Caltech
M. Snir, IBM
D. Watson, LLNL
Others TBA


3:00-3:30 pm Break


3:30-5:30 pm Concurrent Sessions
SESSION 4A: I/O Related


o Compilation of I/O Communications for HPF, F. Coelho, Ecole des mines de
    Paris


o Compiler Support for Out-of-Core Data Structures on Parallel Machines,
    K. Kennedy, C. Koelbel, M. Paleczny, Rice University


o A Data Management Approach for Handling Large Compressed Arrays in High
    Performance Computing, K. Seamons, M. Winslett, University of Illinois


o Parallel I/O Scalability from the User's Perspective, J. Gotwals, S.
    Srinivas, S. Yang, Indiana University


SESSION 4B: Computational Science I


o A High Performance Sparse Cholesky Factorization Algorithm for Scalable
    Parallel Computers, G. Karypis, V. Kumar, University of Minnesota


o Parallelization and Performance of Three-Dimensional Plasma Simulation,
    V. Gopinath, T. Grotjohn, Y.-K. Chu, D. Rover, Michigan State University




o Parallel Molecular Dynamics: Communication Requirements for Future MPPs,
    V. Taylor, Northwestern University, R. Stevens, K. Arnold, Argonne
    National Laboratory


o Dataparallel Semi-Lagrangian Numerical Weather Forecasting, L. Wolters,
    Leiden University, G. Cats, Royal Netherlands Meteorological Institute, N.
    Gustafsson, T. Wilhelmsson, Swedish Meteorological and Hydrological
    Institute


o On Mapping Data and Computation for Parallel Sparse Cholesky
    Factorization, K. Eswar, C.-H. Huang, P. Sadayappan, The Ohio State
    University


SESSION 4C: Panel: How Can Massively Parallel Computers Help in the
Prediction of Protein Structures?
Chair: R. Martino, National Institutes of Health
Panel members TBA


6:00 pm Poster Session and Reception




Wednesday, February 8


9:00-10:00 am Session
SESSION 5: Invited Speaker: Burton Smith, Tera Computer Company


10:00-10:30 am Break


10:30 am-12 noon Concurrent Sessions
SESSION 6A: Data Parallel


o Work-Efficient Nested Data-Parallelism, D. Palmer, J. Prins, University
    of North Carolina, S. Westfold, Kestrel Institute


o A Data Parallel C and its Platforms, M. Gokhale, J. Schlesinger,
    Supercomputing Research Center


o An Object-Oriented Approach to Nested Data Parallelism, T. Sheffler, S.
    Chatterjee, NASA Ames Research Center


o DPM: Integrating Task and Data Parallelism, E. West, A. Grimshaw,
    University of Virginia


SESSION 6B: Computational Science II


o Optimizing Irregular Computations on a SIMD Machine: A Case Study, J.
    Conery, M. Lynch, T. Hovland, University of Oregon


o Homologous Sequence Searching in Large Genetic Databases Using Parallel
    Computing Methods, T. Yap, National Institutes of Health, O. Frieder,
    George Mason University, R. Martino, National Institutes of Health


o An Optimal Parallel Algorithm for Volume Ray Casting, V. Goel, A.
    Mukherjee, University of Central Florida


o The Performance Impact of Data Placement For Wavelet Decomposition of
    Two Dimensional Image Data on SIMD Machines, A. K. Chan, C. Chui, J.
    LeMoigne, H. J. Lee, J. C. Liu, T. A. El-Ghazawi, Texas A&M University


12 noon-1:30 pm Lunch (on your own)


1:30-2:30 pm Session
SESSION 7: Invited Speaker: Phil Colella, University of California at Berkeley


2:30-3:00 pm Break


3:00-4:30 pm Concurrent Sessions
SESSION 8A: Panel: Parallel C++
Chair: C. Kesselman, California Institute of Technology
Panel Members TBA


SESSION 8B: Architectural Structures


o The Practicality of SIMD for Scientific Computing, J. Fischer, L. Hamet,
    C. Mobarry, J. Pedelty, J. Cohen, R. de Fainchtein, B. Fryxell, P.
    MacNeice, K. Olson, T. Sterling, NASA/Goddard Space Flight Center


o Characteristics of the MasPar Parallel I/O System, T. El-Ghazawi, The
    George Washington University


o Efficient Matrix Operations in a Reconfigurable Array with Spanning
    Optical Buses, C. Qiao, State University of New York at Buffalo


o Introducing MGAP-2, F. Zhou, T. Kelliher, R. Owens, M. J. Irwin, The
    Pennsylvania State University


4:30-5:00 pm Break


5:00-6:30 pm Concurrent Sessions
SESSION 9A: Networks


o Analysis of Communication Costs of Parallel Machines with Various
    Communication Mechanisms, C.-C. Lin, V. Prasanna, University of Southern
    California at Los Angeles


o Spectrum Analysis of Communication Networks on Parallel Computers, Z. G.
    Mou, Brandeis University


o Analysis of Finite Buffered Multistage Interconnection Networks under
    First-Blocked-First-Unblock Conflict Resolution, C. Evequoz, Ecole
    Polytechnique de Montreal


o Periodically Regular Chordal Ring Networks for Massively Parallel
    Architectures, B. Parhami, University of California at Santa Barbara


SESSION 9B: Compilers


o Aligning Parallel Arrays to Reduce Communication, T. Sheffler, R.
    Schreiber, NASA Ames Research Center, J. Gilbert, Xerox Palo Alto Research
    Center, S. Chatterjee, NASA Ames Research Center


o Code Generation for Multiple Mappings, W. Kelly, W. Pugh, E. Rosser,
    University of Maryland


o Automatic Generation of Efficient Array Redistribution Routines for
    Distributed Memory Multicomputers, S. Ramaswamy, P. Banerjee, University
    of Illinois


o Automatic Synchronization Elimination in Synchronous FORALLs, M.
    Philippsen, E. Heinz, University of Karlsruhe


SESSION 9C: Partitioning and Mapping


o A Parallel Graph Partitioner on a Distributed Memory Multiprocessor, P.
    Buch, J. Sanghavi, A. Sangiovanni-Vincentelli, University of California at
    Berkeley


o Parallel Remapping Algorithms for Adaptive Problems, C.-W. Ou, S. Ranka,
    Syracuse University


o On the Influence of the Partitioning Schemes on the Efficiency of
    Overlapping Domain Decomposition Methods, P. Ciarlet, F. Lamour,
    University of California at Los Angeles, B. F. Smith, Argonne National
    Laboratory


o Exploitation of Control Parallelism in Data Parallel Algorithms, V.
    Garg, D. Schimmel, Georgia Institute of Technology


7:00 pm Banquet
    Howard Richmond, Gartner Group


o A leading supercomputing industry analyst will survey the rapidly
    growing commercial use of parallel architectures


Thursday, February 9


9:00-10:00 am Session
SESSION 10: Invited Speaker: John Nickolls, MasPar Computer Corp.


10:00-10:30 am Break


10:30 am-12 noon Concurrent Sessions
SESSION 11A: Tools I


o PERFSIM: A Tool for Automatic Performance Analysis of Data Parallel
    Fortran Programs, S. Toledo, Massachusetts Institute of Technology


o Performance Debugging Based on Scalability Analysis, T. Suzuoka, J.
    Subhlok, T. Gross, Carnegie Mellon University


o ProcSimity: An Experimental Tool for Processor Allocation and Scheduling
    in Highly Parallel Systems, K. Windisch, J. Valenti Miller, V. Lo,
    University of Oregon


o Falcon: On-line Monitoring and Steering of Large-Scale Parallel
    Programs, W. Gu, G. Eisenhauer, E. Kraemer, K. Schwan, J. Stasko, J.
    Vetter, N. Mallavarupu, Georgia Institute of Technology


SESSION 11B: Runtime Systems


o Runtime Support for Data Parallel Tasks, M. Haines, B. Hess, P.
    Mehrotra, J. Van Rosendale, NASA Langley Research Center, H. Zima,
    University of Vienna


o Runtime Support for Execution of Fine Grain Parallel Code on Coarse
    Grain Multiprocessors, R. Neves, R. Schnabel, University of Colorado


o Runtime Support for User-Level Ultra Lightweight Threads on Massively
    Parallel Distributed Memory Machines, W. Shu, State University of New York
    at Buffalo


o Runtime Incremental Parallel Scheduling (RIPS) for Large-Scale Parallel
    Computers, W. Shu and M.-Y. Wu, State University of New York at Buffalo


SESSION 11C: Panel: SIMD Machines: Do They Have a Significant Future?
Chair: H. J. Siegel, Purdue University
Bruce Alper, Cambridge Parallel Processing (DAP)
Ken Batcher, Kent State
Timothy Bridges, Data Parallel Systems, Inc.
Ken Iobst, SRC/Cray Computer Corp.
John Nickolls, MasPar Computer Corp.
Chip Weems, University of Massachusetts


12 noon-1:30 pm Lunch (on your own)


1:30-3:00 pm Concurrent Sessions
SESSION 12A: Tools II


o A Scalable, Visual Interface for Debugging with Event-Based Behavioral
    Abstraction, J. Kundu, University of Massachusetts, J. Cuny, University of
    Oregon


o Visualizing Distributed Data Structures, S. Srinivas, Indiana University


o Migrating from PVM to MPI, part I: The Unify System, P. Vaughan, A.
    Skjellum, D. Reese, F.-C. Cheng, Mississippi State University


o Implementing Multidisciplinary and Multizonal Applications Using MPI, S.
    Fineberg, NASA Ames Research Center


SESSION 12B: Algorithms II


o Time- and VLSI-Optimal Convex Hull Computation on Meshes with Multiple
    Broadcasting, V. Bokka, H. Gurla, S. Olariu, J. L. Schwing, Old Dominion
    University


o Algorithm for Constructing Fault-Tolerant Solutions of the Circulant
    Graph Configuration, A. Farrag, Dalhousie University


o Design and Analysis of Product Networks, A. Youssef, The George
    Washington University


o A Broadcast Algorithm for All-Port Wormhole-Routed Torus Networks, Y.-j.
    Tsai, P. McKinley, Michigan State University


SESSION 12C: Panel: Embedded Systems
Chair: D. Schaefer, George Mason University
Bruce Alper, Cambridge Parallel Processing
Eugene Cloud, Martin Marietta - Orlando
Robert Graybill, Martin Marietta - Baltimore
Rudy Faiss, Loral Defense Systems - Akron
John Nickolls, MasPar Computer Corporation
Bill Wren, Honeywell Technology Center




TUTORIALS
---------


Monday, February 6


Morning Tutorials
8:30 am-12 noon


Tutorial 1A
Introduction to Parallel Computing
Vipin Kumar, University of Minnesota


Parallel computers containing thousands of processors are now available
commercially. Powerful parallel computers can also be constructed by
interconnecting state-of-the-art workstations via off-the-shelf switches.
These computers provide several orders of magnitude more raw computing
power than traditional supercomputers at much less cost. They open up new
frontiers in applications of computers, as many previously unsolvable
problems can be solved if the raw computation power of these machines can
be used effectively. This has created a number of challenges for
programmers, for example: How should these machines be programmed? What
algorithms and data structures should be used for these machines? How
should one analyze the quality of the algorithms designed for these
parallel computers? This tutorial will provide a general overview of
parallel architectures (SIMD/MIMD, shared versus distributed memory,
interconnection networks), routing (store-and-forward vs. wormhole
routing), examples of currently available MPPs and workstation clusters,
basic communication operations (such as 1-to-all broadcast, all-to-all
broadcast, scan), basic metrics for performance and scalability (speedup,
efficiency, isoefficiency), some example parallel algorithms (dense and
sparse matrix algorithms, FFT, graph algorithms) and parallel programming
paradigms. This tutorial would be useful for anyone interested in solving
problems on parallel computers.
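
For reference, a standard formulation of the scalability metrics named
above (a summary of textbook definitions on my part, not an excerpt from
the tutorial materials) is:

    S(p) = \frac{T_1}{T_p}, \qquad
    E(p) = \frac{S(p)}{p} = \frac{T_1}{p \, T_p}

where T_1 is the best serial execution time and T_p the parallel time on p
processors; the isoefficiency function describes how fast the problem size
must grow with p to hold E(p) constant, so a slowly growing isoefficiency
function indicates a highly scalable algorithm/architecture combination.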


Tutorial 2A
Parallel Algorithm Design
H. J. Siegel, Purdue University


Parallel machines with 64 to 64,000 processors are commercially
available -- the challenge is to transform a given task into a parallel
algorithm that executes effectively. A variety of techniques for mapping
tasks onto large-scale parallel machines are explained and demonstrated
through the use of parallel algorithm case studies. Models of SIMD
(synchronous), MIMD (asynchronous), and reconfigurable mixed-mode parallel
systems are described and contrasted. This tutorial focuses on the design
of data-parallel algorithms that can be executed using any of these three
modes of parallelism. Issues addressed include choices for data
distribution, effect on execution time of increasing the number of
processors used (scalability), influence of network topology, use of SIMD
vs. MIMD vs. mixed-mode parallelism, the impact of partitioning the system
for subtask parallelism, and the difficulty of automatic parallelization
of serial algorithms. The tasks used for the case studies include
window-based image processing, recursive doubling, parallel prefix, global
histogramming, 2-D FFT, and sorting. The tutorial concludes with a
discussion of some of the alligators (problems) that make the design and
use of large-scale parallel processing systems difficult.
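
To give a flavor of one of the case-study kernels listed above, here is a
minimal sketch (my own illustration, not material from the tutorial) of an
inclusive prefix sum computed by recursive doubling, written in Python; on
a SIMD or MIMD machine each pass of the inner loop would be a single
data-parallel step over distributed elements:

    def prefix_sum(values):
        """Inclusive prefix sum by recursive doubling: ceil(log2 n) passes,
        each adding in the partial sum from 'stride' positions to the left."""
        x = list(values)
        n = len(x)
        stride = 1
        while stride < n:
            prev = list(x)                    # all reads see the old values
            for i in range(stride, n):        # one data-parallel step
                x[i] = prev[i] + prev[i - stride]
            stride *= 2
        return x

    print(prefix_sum([1, 2, 3, 4, 5]))        # [1, 3, 6, 10, 15]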


Tutorial 3A
Multigranular Computing
Umakishore Ramachandran, Georgia Institute of Technology


This tutorial will cover the granularity spectrum -- from fine-grained to
coarse-grained -- of parallel computation. The topics to be covered include
architectural examples from discrete points in this granularity spectrum
(including MasPar MP-2, KSR-1, and CM-5); application domains that could
benefit from such a multigranular environment; analysis of kernels of
applications to identify types of parallelism; determining the match
between algorithms and architectures; performance metrics for making the
right architectural choice for algorithms; and several case studies drawn
from reconfigurable architectures to networked parallel computers of
different granularities. Who should attend: Practitioners who are
designing and developing parallel architectures and software tools for
parallel machines; and university researchers (both architects and
application developers) working at specific points of the granularity
spectrum. The assumed background is a basic understanding of computer
architecture, a rudimentary knowledge of parallel programming, and an
appreciation for algorithm design.




Afternoon Tutorials
1:30-5:00 pm


Tutorial 1B
Compilers and Runtime Support for
Distributed Memory Machines
J. Ramanujam, Louisiana State University and
Alok Choudhary, Syracuse University


For high-performance computing to be successful, significant advances in
programming tools, especially compilers and runtime systems, are critical.
This tutorial is designed to cover issues that are central to compiler and
runtime optimizations of scientific codes written in languages like HPF
for distributed memory and massively parallel machines. Specifically, this
tutorial will address three main issues: general principles, analysis and
optimization for parallel machines; specific issues and optimizations
related to compiling programs in a portable manner on many distributed
memory machines; and the required runtime support and optimizations for
such compilers. This tutorial is intended for compiler writers for
languages like High Performance Fortran (HPF) for high performance
architectures, architects for parallel computers, researchers in high
performance computing, graduate students, and application developers.
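
A recurring low-level issue for such compilers and runtime systems is
translating global array indices into (owner, local index) pairs for the
distributions HPF supports. As a rough illustration only (a Python sketch
of the standard BLOCK and CYCLIC mappings, not code from the tutorial):

    def block_owner(i, n, p):
        """Owner and local index of global element i when n elements are
        BLOCK-distributed over p processors (block size = ceil(n/p))."""
        b = -(-n // p)                        # ceiling division
        return i // b, i % b

    def cyclic_owner(i, p):
        """Owner and local index of global element i under a CYCLIC
        (round-robin) distribution over p processors."""
        return i % p, i // p

    # Element 10 of a 16-element array on 4 processors:
    print(block_owner(10, 16, 4))             # (2, 2)
    print(cyclic_owner(10, 4))                # (2, 2)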




Tutorial 2B
Teraflop 101: Introduction to
Massively Parallel Computation
William O'Connell, AT&T Bell Laboratories; Phil Hatcher, University of New
Hampshire; David Levine, Argonne National Laboratory; Thomas W.
Christopher, Illinois Institute of Technology; and George K. Thiruvathukal,
R. R. Donnelley and Sons Company


The course features a variety of techniques for mapping problems onto
massively parallel processing (MPP) machines. We begin with an overview of
the trends driving the evolution of MPP computing, followed by a
discussion of high-performance computing architectures and design
alternatives. The course then focuses on demonstrating and studying
algorithm case studies for a variety of programming models, including
static/dynamic partitioning (replicated workers), data-parallel,
dataflow/macro-dataflow, functional, and pattern (event) driven
techniques. We will discuss how and when to apply these techniques to a
variety of problems.
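
As a concrete (and deliberately simplified) picture of the replicated
workers model mentioned above -- my sketch, not course material -- identical
workers repeatedly pull tasks from a shared pool, which balances load when
task costs vary; an MPP version would replace the shared queue with a
message-passing task pool:

    import queue
    import threading

    def replicated_workers(tasks, work_fn, num_workers=4):
        pool = queue.Queue()
        for t in tasks:
            pool.put(t)
        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    t = pool.get_nowait()     # grab the next available task
                except queue.Empty:
                    return                    # pool exhausted: worker exits
                r = work_fn(t)
                with lock:
                    results.append(r)

        threads = [threading.Thread(target=worker)
                   for _ in range(num_workers)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results

    print(sorted(replicated_workers(range(10), lambda x: x * x)))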




Tutorial 3B
Improving Performance with
Message Driven Execution
L.V. Kale, University of Illinois at Urbana-Champaign


Although machines with extremely high raw performance are available today,
on most real applications, one falls short of achieving even a reasonable
fraction of their peak performance. The problem becomes worse as one
attacks increasingly irregular problems. Message driven execution, as
distinct from message passing, has emerged as a promising technique that
can help improve performance in many such situations. This technique
involves an adaptive scheduling strategy based on the availability of
messages. As a result, message driven programs can often adapt to
variations in runtime conditions easily, and tolerate communication
latencies effectively. Also, they can naturally interleave execution of
multiple independent subcomputations. This tutorial presents the message
driven strategy and its practical application to real-life problems. After
a few examples to motivate the need for a message driven
strategy, we will present various ways in which such strategies can be
expressed. These include multi-threading, asynchronous messages, active
messages and interrupts, and object oriented languages such as Charm++. We
will then concentrate on specific techniques used to develop message
driven programs, and discuss how to identify opportunities for introducing
message driven scheduling, how to modify the algorithm to create such
opportunities, and how to design message driven algorithms. Message driven
program components can be composed with relative ease and without
performance loss. We will discuss how this compositionality can be
exploited to effectively reuse parallel modules. The tutorial includes
several examples as well as case studies.
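
The contrast with receive-in-a-fixed-order message passing can be sketched
in a few lines (a toy of my own, not Charm++ itself): handlers are
registered per message type, and a scheduler dispatches whichever message
arrives next, so the computation adapts to runtime message timing:

    import queue

    handlers = {}

    def on(tag):                              # register a handler for a tag
        def register(fn):
            handlers[tag] = fn
            return fn
        return register

    def scheduler(inbox):
        while True:                           # run whatever work is ready
            tag, data = inbox.get()
            if tag == "quit":
                break
            handlers[tag](data)

    @on("partial_sum")
    def handle_partial_sum(value):
        print("accumulating", value)

    @on("boundary_row")
    def handle_boundary(row):
        print("updating boundary with", row)

    inbox = queue.Queue()
    inbox.put(("boundary_row", [0.0, 1.0]))   # arrival order drives execution
    inbox.put(("partial_sum", 42))
    inbox.put(("quit", None))
    scheduler(inbox)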






WORKSHOPS
---------


Frontiers '95 features three workshops in areas of rapidly growing interest
to the high-performance computing community. The topics of the workshops
are 1) Scalable I/O, 2) Peta-FLOPS Computing, and 3) Templates for
Parallel Applications. The workshops will provide an excellent
opportunity to exchange experience and inspire research directions through
presentations and extensive discussions in an informal setting. A one hour
combined session for all workshops will be held in order to correlate and
discuss the findings of all workshops. For more information contact a
workshop organizer, as appropriate, or the workshops chair.


Sunday, February 5


8:30 am-4:30 pm
Workshop A: Scalable Input/Output for
High-Performance Computers
Alok Choudhary, Syracuse University, (choudhar@npac.syr.edu); Paul
Messina, Caltech, (messina@ccsf.caltech.edu); and Terry Pratt,
USRA/CESDIS, (pratt@cesdis1.gsfc.nasa.gov)


Scalable I/O subsystems are critical to the effective use of parallel
computers for production workloads. This workshop
will address several issues from architecture to software for scalable I/O
on high-performance systems. Of special interest are application
requirements and characterization, language support, runtime libraries,
file systems, network I/O support, etc. This workshop will provide a look
at recent advances and new research initiatives in hardware and software
for these areas. The workshop will consist of invited and submitted
contributions as well as substantial open discussions.




Monday, February 6


8:30 am-4:30 pm
Workshop B: The Peta-FLOPS Frontier
Thomas Sterling, USRA/CESDIS, (tron@chesapeake.gsfc.nasa.gov);
John Dorband, NASA GSFC, (dorband@nibbles.gsfc.nasa.gov); and
Michele O'Connell, USRA/CESDIS, (michele@usra.edu)


Petaflops is a measure of computer performance equal to a million billion
operations per second. That is more than ten times the total networked
computing capability in America, and ten thousand times the speed of the
world's most powerful massively parallel computer. A petaflops computer is
so far beyond anything within contemporary experience that its
architecture, technology, and programming methods may require entirely new
paradigms before such systems can be used effectively.
New research directions in enabling technologies required to address these
challenges are the driving motivation for exploring this advanced field.
Investigation in these directions needs to be initiated in the immediate
future to have timely impact. A one day workshop, "The Petaflops
Frontier," is being organized to explore this new domain. The objectives
of the workshop are 1) to determine the opportunities and demands of
application programs at this scale, and 2) to identify the diverse and
possibly speculative technologies and computing structures that might
contribute toward the advancement of this exciting goal. The Petaflops
Frontier Workshop is being organized to complement the findings of the
Pasadena Workshop on Enabling Technologies for Petaflops Computing
conducted one year ago and is structured to address some of the
recommendations that came out of that earlier meeting. Contributions
are being solicited on the topics of a) Application Scaling to Petaflops,
and b) Advanced Technologies and Architectures for Petaflops Computers.
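
For concreteness, the unit in question is

    1~\mathrm{petaflops} = 10^{15}~\mathrm{FLOPS}
                         = 10^{6} \times 10^{9}~\mathrm{FLOPS}

that is, a million billion floating-point operations per second.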


8:30 am-12 noon
Workshop C:
Templates: Building Blocks for Portable Parallel Applications
Geoffrey Fox, Syracuse University, (gcf@npac.syr.edu);
Jack Dongarra, University of Tennessee, (dongarra@cs.utk.edu); and Robert
Ferraro, NASA JPL, (ferraro@zion.jpl.nasa.gov)


The term "Templates" has recently gained notoriety in the parallel
processing community, yet is an evolving concept with several definitions.
The primary motivation for constructing templates is to rapidly infuse
into common usage state-of-the-art algorithms in a form which can be
adapted to specific application requirements. This implies that the
template retains the desired numerical properties but is cast in a form
which is independent of parallel architecture, data layout, and
programming language. Many users would like to see templates go beyond
pseudo-code, which can be found in textbooks and research papers, to
become objects which are directly compilable on multiple architectures.
This workshop will explore the issues involved in constructing an
algorithm template which is portable, scalable, and adaptable to
application requirements, yet retains the numerical properties which make
the algorithm desirable in the first place.
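
As one illustration of what such a template might look like (a sketch under
my own assumptions, not a workshop deliverable), here the conjugate
gradient iteration is fixed while the matrix-vector product and dot
product -- the operations whose implementations depend on data layout and
architecture -- are supplied by the caller:

    def conjugate_gradient(matvec, dot, b, x0, tol=1e-8, max_iter=100):
        """CG for symmetric positive definite systems; matvec and dot hide
        the data layout, so the same code runs on serial or parallel data."""
        x = list(x0)
        r = [bi - axi for bi, axi in zip(b, matvec(x))]
        p = list(r)
        rs_old = dot(r, r)
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs_old / dot(p, Ap)
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            rs_new = dot(r, r)
            if rs_new < tol * tol:
                break
            p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
            rs_old = rs_new
        return x

    # Tiny serial instantiation: solve [[4,1],[1,3]] x = [1,2]
    A = [[4.0, 1.0], [1.0, 3.0]]
    mv = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
    dp = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    print(conjugate_gradient(mv, dp, [1.0, 2.0], [0.0, 0.0]))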


4:30-5:30 pm Plenary Workshop Session


5:30-7:00 pm Welcoming Reception




HOTEL INFORMATION
-----------------


McLean Hilton, McLean, Virginia
Cut-Off Date: JANUARY 14, 1995
Mail to: The McLean Hilton Reservations, 7920 Jones Branch Drive, McLean,
Virginia 22102-3308 USA.
Phone: 703-761-5111 (8am-5pm weekdays); Fax: 703-761-5100.


Rates: $112 single or double accommodation
___King ___Double/Double Number of people in room:________________
___Smoking room ___Non-smoking room ___Handicapped


please type or print:


Name __________________________________________________________


Sharing With __________________________________________________


Company/University __________________________________________


Address/Mailstop __________________________________________


City/State/Zip/Country _________________________________________________


Daytime Phone_____________________ Evening Phone___________________


Arrival Date______________________ Arrival Time_____________


Departure Date___________________ Departure Time___________


Method of Payment
____Personal Check ___American Express ___VISA ___MasterCard ___Diners Club


___Carte Blanche


Cardholder Name_____________________ Signature______________________


Card Number________________________________ Exp. Date_________


Amount $________ Today's Date______________


Rooms will be held until 6 p.m. unless guaranteed with an accepted credit
card. Check-in time is after 3 p.m. Check-out time is noon. Room charges
are subject to a 6.5% tax. Reservation requests can be made until January
14, 1995. After this date, reservations will be based upon room and rate
availability. Cancellations made after 6 p.m. of check-in day will be
billed one night's room and tax. Government rates are available.
Complimentary hotel van service is available by reservation within a five
mile radius of the hotel. This area includes the West Falls Church Metro
station which connects to National Airport and Amtrak (Union Station). For
information on airport limousine services or to make van reservations,
contact the McLean Hilton at 703-847-5000.




Driving Directions to the McLean Hilton


From Dulles International Airport: Dulles Access Road East to Exit 8
(Spring Hill Road). Must be in the far right toll lane. After toll, turn
right onto Spring Hill Road. Turn left at first light onto Jones Branch
Drive. Hotel is one mile on left.


From National Airport: George Washington Parkway to I-495 South to Exit
11B (Tysons Corner/Rt. 123 South exit). Turn right at first light onto
Tysons Boulevard. See directions below from Tysons Boulevard.


From McLean: Rt. 123 South to Tysons Boulevard (the first right after the
I-495 junction). Turn right at light onto Tysons Boulevard. See directions
below from Tysons Boulevard.


From Fairfax and Vienna: Rt. 123 North to Tysons Boulevard (the last left
before I-495 junction). Turn left at light onto Tysons Boulevard. See
directions below from Tysons Boulevard.


From Richmond and South: I-95 North to I-495 North (Rockville). Take I-495
North to Exit 11B (Tysons Corner/Rt. 123 South exit). Turn right at first
light onto Tysons Boulevard. See directions below from Tysons Boulevard.


From Southwestern Maryland and North, via I-270: I-495 South to Exit 11B
(Tysons Corner/Rt. 123 South). Turn right at the first light onto Tysons
Boulevard. See directions from Tysons Boulevard.


From Washington, D.C. Downtown: I-66 West to Exit 20
(I-495/Baltimore/Dulles Airport) to Exit 10A (Tysons Corner/Rt. 123
South). Continue on Rt. 123 South under I-495; turn right at the next
light after overpass onto Tysons Boulevard. See directions from Tysons
Boulevard.


From New York, Philadelphia and Baltimore, via I-95 South: I-95 South to
I-495 West, Exit 27 (Silver Spring). I-495 West will turn into I-495 South
while driving. I-495 South to Exit 11B (Tysons Corner/Rt. 123 South exit).
Turn right at first light onto Tysons Boulevard. See directions from
Tysons Boulevard.


From Tysons Boulevard: Continue on Tysons Boulevard for 1 block to light
(Galleria Drive/Westpark Drive). Turn right. Go to next light (Jones
Branch Drive), turn right. The McLean Hilton is located two blocks on
right.




REGISTRATION FORM: FRONTIERS 95 - REGISTER TODAY!
-----------------


To register, just return this form to:
Frontiers '95 Registration, IEEE Computer Society, 1730 Massachusetts
Ave., N.W., Washington, DC 20036-1992; FAX (202)728-0884. For information
call (202)371-1013 (sorry, no phone registrations).


please type or print:


Name _______________________________________________________________


Company/University __________________________________________________


Address/Mailstop ____________________________________________________


City/State/Zip/Country _______________________________________________


Daytime Phone Number_____________________ FAX Number________________


IEEE/CS Membership Number ____________________________


Do you have any special needs? _______________________________________


For mailing list purposes: this is my: ___Business ___Home Address


Do not include my mailing address on: ___Non-society mailing lists
___Meeting Attendee lists


Conference - February 7-9
Please check appropriate fee
                  Advance            Late
                  (until 1/7/95)     (after 1/7 & before 1/14)
Member            ___$250            ___$300
Nonmember         ___$315            ___$375
Student           ___$65             ___$65


Tutorials - February 6
Price is per tutorial -- please check appropriate fee
                  Advance            Late
                  (until 1/7/95)     (after 1/7 & before 1/14)
Member            ___$150            ___$180
Nonmember         ___$190            ___$225
Please check the tutorial(s) you wish to attend:
___1A: Introduction to Parallel Computing (AM)
___2A: Parallel Algorithm Design (AM)
___3A: Multigranular Computing (AM)
___1B: Compilers and Runtime Support for Distributed Memory Machines (PM)
___2B: Teraflop 101: Introduction to Massively Parallel Computation (PM)
___3B: Improving Performance with Message Driven Execution (PM)


Workshops - February 5-6
Please check the workshop(s) you wish to attend (no fee):
___A: Scalable Input/Output for High-Performance Computers (2/5)
___B: The Peta-FLOPS Frontier (2/6)
___C: Templates: Building Blocks for Portable Parallel Applications (2/6)


Payment
Conference Fee $ _______
Tutorial Fee $ _______
Total Enclosed (in U.S. dollars) $ _______


Payment must be enclosed. Please make checks payable to IEEE Computer
Society. All payments must be in U.S. dollars, drawn on U.S. banks.


Method of Payment Accepted
___Personal Check ___ Company Check
___Traveler's Check ___ VISA
___MasterCard ___ American Express ___Diners Club
___Purchase Order


Card Number ________________________________________


Expiration Date ___________


Cardholder Name __________________________________


Signature ____________________________________


Member and non-member registration fees include conference and/or
tutorial/workshop attendance and refreshments at breaks. Conference fee
includes welcoming reception, conference banquet, luncheon and one copy of
the conference proceedings. Workshop fees include lunch. Student
registration fee includes two conference social events, refreshments at
breaks, and one copy of the conference proceedings (no luncheon and no
banquet). We reserve the right to cancel a tutorial due to insufficient
participation or other unforeseeable problems.


Written requests for refunds must be received in the IEEE Computer Society
office no later than 1/7/95. Refunds are subject to a $50 processing fee.
All no-show registrations will be billed in full. Students are required to
show current picture ID cards at the time of registration. Registrations
after 1/14/95 will be accepted on-site only.


If you are unable to attend this year's conference, you can order the
proceedings by calling:


1-800-CS-BOOKS or 1-714-821-8380;
Email: cs.books@computer.org.




ORGANIZING COMMITTEE
--------------------


GENERAL CHAIR
Joel Saltz, U. of Maryland


PROGRAM CHAIR
Dennis Gannon, Indiana U.


PROGRAM VICE CHAIRS
Algorithms: Mike Heath, U. of Illinois
Applications: Ridgway Scott, U. of Houston
Architectures: Thomas Sterling, CESDIS/NASA GSFC
Systems Software: Fran Berman, UC - San Diego


DEPUTY GENERAL CHAIR
Paul Havlak, U. of Maryland


TUTORIALS CHAIR
Sanjay Ranka, Syracuse U.


WORKSHOPS CHAIR
Tarek El-Ghazawi, George Washington U.


EXHIBITS CHAIR
Jerry Sobieski, U. of Maryland


PUBLICITY CHAIR
Mary Maloney Goroff, Caltech


INDUSTRY INTERFACE CHAIR
James Fischer, NASA GSFC


REGISTRATION CHAIR
Andrea Busada, U. of Maryland


FINANCE CHAIR
Larry Davis, U. of Maryland


LOCAL ARRANGEMENTS CHAIR
Johanna Weinstein, U. of Maryland


PUBLICATIONS CHAIR
Joseph Ja'Ja', U. of Maryland


AUDIO VISUAL
Cecilia Kullman, U. of Maryland


STEERING COMMITTEE
R. Michael Hord (Chair)
Hank Dardy, NRL
Larry Davis, U. of Maryland
Judy Devaney, NIST
Jack Dongarra, U. of Tennessee Knoxville, ORNL
John Dorband, NASA GSFC
James Fischer, NASA GSFC
Paul Messina, Caltech/JPL
Merrell Patrick, NSF
David Schaefer, George Mason U.
Paul Schneck, MITRE
H. J. Siegel, Purdue U.
Francis Sullivan, SRC
Pearl Wang, George Mason U.


PROGRAM COMMITTEE
Ian Angus, Boeing
Ken Batcher, Kent State U.
Donna Bergmark, Cornell Theory Center
Fran Berman, UC San Diego
Tom Blank, Microsoft
Randy Bramley, Indiana U.
Marina Chen, Yale U.
Alok Choudhary, Syracuse U.
John Dorband, NASA GSFC
Tarek El-Ghazawi, George Washington U.
Charbel Farhat, U. of Colorado
Ian Foster, Argonne National Lab
Geoffrey Fox, Syracuse U. - NPAC
Joan Francioni, U. of Louisiana
Kyle Gallivan, U. of Illinois
Satya Gupta, Intel
John Gurd, U. of Manchester
Paul Havlak, U. of Maryland
Michael Heath, U. of Illinois
Jim Hendler, U. of Maryland
Joseph Ja'Ja', U. of Maryland
Carl Kesselman, Caltech
Robert Knighten, Intel
Chuck Koelbel, Rice U.
Monica Lam, Stanford U.
Robert Martino, NIH
Paul Messina, Caltech/JPL
Michael Norman, NCSA
Merrell Patrick, NSF
Serge Petiton, Etablissement Technique Central de l'Armement
Constantine Polychronopoulos, U. of Illinois
Bill Pugh, U. of Maryland
Sanjay Ranka, Syracuse U.
Dan Reed, U. of Illinois
David Schaefer, George Mason U.
Ridgway Scott, U. of Houston
Marc Snir, IBM
Guy Steele, TMC
Thomas Sterling, NASA GSFC
Ken Stevens, NASA Ames
Rick Stevens, Argonne National Lab
Alan Sussman, U. of Maryland
Lew Tucker, TMC
Alex Veidenbaum, U. of Illinois
Uzi Vishkin, U. of Maryland
Pearl Wang, George Mason U.
Joel Williamson, Convex
Steve Zalesak, NASA GSFC
Mary Zosel, Lawrence Livermore National Lab






--
Dr. Paul Havlak                 Dept. of Computer Science, A.V. Williams Bldg
Research Associate              U. of Maryland, College Park, MD 20742-3255
High-Performance Systems Lab    (301) 405-2697 (fax: -2744)  havlak@cs.umd.edu
--

