From: Santosh Pande <firstname.lastname@example.org>
Keywords: conference, storage, parallel
Date: Tue, 29 Aug 1995 20:41:47 GMT
Workshop : Compiler Optimizations on Distributed Memory Systems
Santosh Pande, Co-ordinator
School of Electrical Engineering and Computer Science
Division of Computer Science
Athens, Ohio 45701
E-mail : email@example.com
Telephone : (614)-593-1251
A half-day workshop on `Compiler Optimizations on Distributed Memory
Systems' is planned at the 7th IEEE Symposium on Parallel and
Distributed Processing in San Antonio, TX.
The workshop will be conducted
on October 25th (Wednesday) from 1:30 pm to 5:00 pm.
The general format of the workshop is as follows:
1. Introduction of the theme, the issues and the speakers
2. Six invited talks, 25 minutes each
3. Panel discussion of the speakers
The workshop is intended to present the inter-relationships and
issues across the compilation phases, from dependence detection
to mapping and code generation.
The following topics are of special interest in the workshop:
-- Exact dependence analysis
-- Interprocedural dataflow analysis
-- Communication/data locality optimizations
-- Optimizing loop transformations in the HPF framework
-- Code generation issues such as address and communication generation
-- Mapping task parallelism
The goal of the workshop is to present a comprehensive, integrated
picture of the area by discussing the current state of research, open
directions, and projected future work on the above aspects of
compiling for distributed memory systems. Another important objective
is to bring out the similarities and differences between compiling
for shared memory and for distributed memory systems, so as to
identify which of the techniques already developed for shared memory
apply to distributed memory.
First, there will be six invited talks by distinguished researchers
in the field on topics ranging from dependence detection to mapping
and code generation. At the end of the talks, there will be a panel
discussion on `How difficult is automatic parallelization on
distributed memory systems?' (or in other words, `Is data parallel
programming only an interim solution or a long term one?')
The program of the workshop is as follows :
Session I : 1:30 pm to 2:45 pm (Ballroom C)
(1) "Efficient and Exact Dependence Analysis Techniques in Practice"
Kleanthis Psarris, University of Texas
(2) "The Effectiveness of Interprocedural Automatic Parallelization"
Mary Hall, California Institute of Technology
(3) "Communication Analysis for Shared and Distributed Memory Machines"
Chau-Wen Tseng, University of Maryland, College Park
Break : 2:45 pm to 3:15 pm
Session II : 3:15 pm to 4:30 pm (Ballroom C)
(4) "Loop Transformations for Optimizing HPF Programs Prior to SPMD Code"
Vivek Sarkar, Application Development Technology Institute (ADTI)
IBM Software Solutions Division.
(5) "Address and Communication Generation for Cyclic(k) Distributions"
J. Ramanujam, Louisiana State University, Baton Rouge
(6) "Exploiting Loop and Task Parallelism in Scheduling Iterative Task"
Tao Yang, University of California, Santa Barbara
Panel : 4:30 pm to 5:00 pm (Ballroom C)
Panel discussion with the above speakers
Co-ordinator : Prof. Dharma P. Agrawal, North Carolina State University
Topic : `How difficult is automatic parallelization
on distributed memory systems?'
(or, `Is data parallel programming only
an interim solution or a long term one?')
Those interested in attending should contact the co-ordinator,
Santosh Pande, for registration and other information.