HiPC '95 venue change and advance program

gupta@cs.wmich.edu (Ajay)
Sun, 26 Nov 1995 22:51:26 GMT


Newsgroups: comp.compilers
From: gupta@cs.wmich.edu (Ajay)
Keywords: conference, parallel, architecture
Organization: Western Michigan University--Computer Science Department
Date: Sun, 26 Nov 1995 22:51:26 GMT

Due to unavoidable circumstances, the venue for HiPC '95 has been changed
from Habitat World to another location in New Delhi: the Hyatt Regency.

Here are details of the meeting location:

Hyatt Regency
Bhikaiji Cama Place
Ring Road, New Delhi 110066
Tel: +91 (11) 688-1234
Fax: +91 (11) 688-6833, 687-6437
Telex: 031-61512 HYT IN

The Hyatt Regency is located 16 kilometers (approximately 10 miles)
from the international airport and about 8 kilometers
from the domestic airport.


Electronic version of the Advance Program and
updated meeting information can be retrieved from the
web using URL:

WWW: http://www.usc.edu/dept/ceng/prasanna/home.html


International Travel House Explorations has been appointed as
the Official Travel Agent for HiPC '95.
They maintain a list of alternate accommodations available in Delhi
and can be reached at the following address:

Conference Management Services
International Travel House Explorations
14 A&B Community Centre
Basant Lok
Vasant Vihar
New Delhi 110 057
Attn: Mr. G.S. Sugara

TEL: 91-11-603 400
              91-11-687 5382

FAX: 91-11-687 6101
              91-11-688 7163

-------------ORIGINAL Advance Program Follows-----------------

DECEMBER 27-30, 1995
Hyatt Regency, NEW DELHI, INDIA


For easier reading, request a hard copy of this advance program from:
Regina Morton,
Department of EE-Systems, EEB 200,
3740 McClintock Avenue,
University of Southern California,
Los Angeles, CA 90089-2562

email: morton@pollux.usc.edu
Fax: (213) 740-4418

Additional information regarding HiPC '95 events may be obtained on the
Web (using URL http://www.usc.edu/dept/ceng/prasanna/home.html) or by
contacting HiPC '95 General Co-Chair V. K. Prasanna (prasanna@halcyon.usc.edu).




Wednesday, December 27:
Tutorials 1-4

Thursday, December 28:
Keynote Address
Technical Sessions I-III

Friday, December 29:
Keynote Address
Technical Sessions IV-V

Saturday, December 30:
Keynote Address
Technical Session VI
Tutorials 5-6
Local Sightseeing Tour

Conference Registration Form

Hotel Registration Form

Sightseeing Tour To Visit Taj Mahal


Viktor K. Prasanna
University of Southern California

Vijay P. Bhatkar
Centre for Development of Advanced Computing, Pune

Sartaj Sahni
University of Florida

C. V. Subramaniam
Centre for Development of Advanced Computing, New Delhi

Ramesh Rao
University of California, San Diego

Ajay Gupta
Western Michigan University

D. N. Jayasimha
Ohio State University

S. K. Nandy
Indian Institute of Science

C. P. Ravikumar
Indian Institute of Technology, Delhi

Suresh Chalasani
University of Wisconsin, Madison

A. K. P. Nambiar
Centre for Development of Advanced Computing, Bangalore


Arvind, MIT
Vijay Bhatkar, C-DAC
Ian Foster, Argonne National Labs.
Anoop Gupta, Stanford University
David Kahaner, Asian Technology Information Program, Japan
Lionel Ni, Michigan State University
S. S. Oberoi, Dept. of Electronics, Government of India
Lalit M. Patnaik, Indian Institute of Science
Viktor K. Prasanna, USC, Chair
Jose Rolim, University of Geneva
Vaidy Sunderam, Emory University
Satish Tripathi, University of Maryland
David Walker, Oak Ridge National Labs.
K. S. Yajnik, CSIR Centre for Mathematical Modeling and Computer Simulation
Albert Y. Zomaya, University of Western Australia


Gul Agha, University of Illinois, Urbana Champaign
Dharma Agrawal, North Carolina State University
Prathima Agrawal, AT&T Bell Labs.
Selim G. Akl, Queen's University, Canada
Mike Atallah, Purdue University
N. Balakrishnan, Supercomputing Education and Research Centre, India
Prith Banerjee, University of Illinois, Urbana Champaign
Suresh Chalasani, University of Wisconsin, Madison
Yookun Cho, Seoul National University, Korea
Alok Choudhary, Syracuse University
Tim Davis, University of Florida
Eliezer Dekel, IBM Israel Research Centre
P. S. Dhekne, Bhabha Atomic Research Centre, India
Jack Dongarra, University of Tennessee
Afonso Ferreira, CNRS-LIP, France
Ajay Gupta, Western Michigan University
Oscar Ibarra, University of California, Santa Barbara
Mary Jane Irwin, Pennsylvania State University
S. S. Iyengar, Louisiana State University
Joseph Ja' Ja', University of Maryland
D. N. Jayasimha, Ohio State University
Krishna Kavi, University of Texas, Arlington
J. Mohan Kumar, Curtin University of Technology, Australia
Vipin Kumar, University of Minnesota
Steve Lai, Ohio State University
S. Lakshmivarahan, University of Oklahoma
Piyush Mehrotra, ICASE
V. Lakshmi Narasimhan, University of Queensland, Australia
David Nassimi, New Jersey Institute of Technology
David Padua, University of Illinois, Urbana-Champaign
D. K. Panda, Ohio State University
C. S. Raghavendra, Washington State University
S. Rajasekaran, University of Florida
C. Pandu Rangan, Indian Institute of Technology, Madras
N. Ranganathan, University of South Florida
Sanjay Ranka, Syracuse University
Ramesh Rao, University of California, San Diego
C. P. Ravikumar, Indian Institute of Technology, Delhi
Michel Raynal, IRISA, Paris
Ahmed Sameh, University of Minnesota
R. K. Shyamasundar, TIFR, India
H. J. Siegel, Purdue University
Satish Tripathi, University of Maryland
N. Tzeng, University of South Western Louisiana
Pearl Wang, George Mason University
K. S. Yajnik, C-MMACS, India



The meeting has been moved from Goa to New Delhi. See the later part of this
text for information on Delhi.
Additional information about New Delhi and surrounding areas
can be retrieved from the Web using the URL:


The HiPC events include contributed technical papers, keynote addresses, a
panel, exhibits, vendor presentations, and tutorials. The tutorials will
be conducted on Wednesday and Saturday. Thursday through Saturday will
feature sessions for contributed papers, a panel discussion and industrial
track presentations, with a keynote address beginning the proceedings of
each day.


126 contributed technical papers will be presented in 18 technical sessions.


Thursday, December 28:
"Hot Machines"
Anant Agarwal
Massachusetts Institute of Technology

Friday, December 29:
"The Stanford DASH and FLASH Projects"
Anoop Gupta
Stanford University

Saturday, December 30:
"The Parallel Software Crisis - "The Problem" or just a Symptom?"
Uzi Vishkin
University of Maryland and Tel Aviv University


"Future Directions in High Performance Computing"
Panel Coordinator: Vipin Kumar, University of Minnesota


The following six tutorials will be held; each lasts half a day. The first
four will be held on Wednesday and the last two on Saturday afternoon.

Compiling for High Performance Architectures

Large Scale Scientific Computing

Code Optimization in Modern Compilers

Concurrent Computing with PVM and MPI

Massively Parallel Architectures: Past, Present, and Future.

Graph Partitioning for Parallel Computing


Companies and R&D laboratories are encouraged to present their exhibits at
the meeting. In addition, a full day of vendor presentations is planned. For
details, companies are encouraged to contact the Exhibits/Vendor
Presentations Chair by September 30, 1995:
C. V. Subramaniam
Centre Coordinator, C-DAC
E-13, Hauz Khas
New Delhi 110016
Vox/Fax: +91 (11) 686 4084/3428
Internet: karnik@doe.ernet.in


The proceedings will be published by Tata McGraw-Hill Publishing Company
Limited. Extra copies and last year's proceedings may be obtained by
contacting Tata McGraw-Hill by fax at +91 (11) 327 8253.


Refreshments will be served at several breaks throughout the day. In
addition, lunch will be provided on December 28, 29, and 30. On Thursday
evening a dinner event is planned. On Saturday, following lunch, there
will be a local sightseeing tour of Delhi.


Use the tear-out form in the middle of the Advance Program booklet.
An ASCII version of the registration form is included in this text.
Note that the form must be sent by November 25, 1995 and received by
November 30, 1995 for advance registration. Registration after November 30
will be accepted on-site only.


Additional information regarding HiPC'95 events may be obtained from the
Web using URL http://www.usc.edu/dept/ceng/prasanna/home.html or from
the HiPC '95 General Co-Chair V. K. Prasanna (prasanna@usc.edu).


Tutorial 1

Compiling for High Performance Architectures

J. Ramanujam, Louisiana State University

Who Should Attend: Researchers in high performance computing, graduate
students, application developers. This is not a tutorial on HPF but on
principles of compiling languages like HPF. The intended level of
presentation is 30% beginner, 50% intermediate and 20% advanced, and
familiarity with fundamental issues in parallel computing is assumed.

Course Description: The chief impediment to widespread application of
highly parallel computing is the extreme difficulty of writing application
code that is correct and efficient. This applies equally to writing new code
and to converting existing code. Memory locality, memory latency,
parallelism, and interprocessor communication interact in complex ways,
making it very difficult to achieve good performance. For high-performance
computing to be successful, significant advances in programming tools,
especially compilers and runtime systems, are critical. Several ongoing
research projects (at Rice, Stanford, Syracuse and Vienna among other
places) are aimed at a compilation system that uses data distribution
information provided by the user to derive efficient parallel code. High
Performance Fortran (HPF) is an informal industry standard (embraced by
several vendors) which supports data parallel programming (through the
FORALL construct and several new parallel intrinsic and library functions)
and good performance on Non-Uniform Memory Access (NUMA) multiprocessors
(through data mapping directives).

This tutorial is designed to cover issues that are central to compiler and
runtime optimizations of scientific codes written in languages like HPF for
distributed memory and massively parallel machines. Specifically, this
tutorial will address three main issues: (1) general principles, analysis,
and optimization for parallel machines; (2) specific issues and
optimizations related to compiling programs to run portably on many
distributed memory machines such as the Intel Paragon, IBM SP2, Cray T3D,
and networks of workstations; and (3) the runtime support and optimizations
required for such compilers.
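The data-distribution idea at the heart of such compilers can be sketched in a few lines. The following Python fragment is an illustration written for this summary, not material from the tutorial (which targets HPF and Fortran): it computes the owner and local index of a global array element under an HPF-style BLOCK distribution, the mapping a compiler consults when applying the owner-computes rule.

```python
import math

def block_size(n, p):
    """Elements per processor under an HPF-style BLOCK distribution
    of an n-element array over p processors (last block may be short)."""
    return math.ceil(n / p)

def owner(i, n, p):
    """Processor that owns global index i under the BLOCK distribution."""
    return i // block_size(n, p)

def to_local(i, n, p):
    """Map global index i to the owning processor's local index."""
    return i % block_size(n, p)

# With n = 10 elements on p = 4 processors, blocks hold ceil(10/4) = 3
# elements, so global index 7 lives on processor 2 at local offset 1.
n, p = 10, 4
print(owner(7, n, p), to_local(7, n, p))
```

The compiler uses exactly this kind of mapping to decide which processor executes each assignment and when communication must be generated for off-processor operands.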

Lecturer: J. Ramanujam received his Ph.D. degree in Computer Science from
the Ohio State University in 1990. He is currently an Associate Professor
in the Department of Electrical and Computer Engineering, Louisiana State
University, Baton Rouge. He received an NSF Young Investigator Award
in 1994. His research interests are in the areas of parallelizing
compilers, operating systems and programming environments for parallel
computing, and computer architecture.

Tutorial 2

Large Scale Scientific Computing

S. Lakshmivarahan,
University of Oklahoma

Who Should Attend: This tutorial should be of interest to scientists and
engineers in national laboratories and industry, as well as educators and
graduate students who are interested in understanding the basic principles
and challenges of the new and emerging discipline of scientific computing
arising from the interaction between algorithms, architectures, and
applications.

Course Description: Large Scale Scientific Computing lies squarely at the
intersection of several evolving and well-established disciplines: the
architecture of parallel and vector computers, parallel algorithms for
matrix problems, the technology of vectorizing compilers, and mathematical
software that exploits the algorithm-architecture combination. Our aim in
this tutorial is to survey the state of the art in several well-understood
principles and techniques in this area.

AN OVERVIEW OF PARALLEL ARCHITECTURES: shared memory vs distributed
memory machines, the SIMD vs MIMD paradigm, memory hierarchy, vector
registers, cache and main memory, and a review of commercially available
machines. PERFORMANCE MEASURES: speedup, efficiency, redundancy,
utilization factor, and the degree of parallelism; Amdahl's law and
its variations, parallel complexity classes, associative fan-in.
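As one worked example of the performance measures above, Amdahl's law can be stated in a few lines of Python (an illustration added for this text, not part of the tutorial notes):

```python
def amdahl_speedup(f, p):
    """Amdahl's law: speedup on p processors when a fraction f of the
    work is parallelizable and the remaining (1 - f) is strictly serial."""
    return 1.0 / ((1.0 - f) + f / p)

def efficiency(f, p):
    """Efficiency = speedup divided by the number of processors used."""
    return amdahl_speedup(f, p) / p

# Even with 90% of the work parallelizable, 100 processors yield less
# than a 10x speedup: the serial fraction dominates.
print(amdahl_speedup(0.9, 100))
```

The variation worth noting is the limit p -> infinity, where the speedup is bounded by 1 / (1 - f) regardless of machine size.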

PARALLEL PROGRAMMING: An overview of PVM, Fortran 90, and High Performance
Fortran. A REVIEW OF MATRIX ALGEBRA: An overview of basic matrix-vector
operations; the role of BLAS-1, 2, and 3 routines. MATRICES IN SCIENTIFIC
COMPUTATION: Discretization of partial differential equations, the notion
of stencils, the order of error in approximating space and time derivatives,
boundary conditions, and the notion of colored grids (red-black coloring).
LINEAR SYSTEMS: The method of cyclic reduction and cyclic elimination
applied to bidiagonal, tridiagonal, and block-tridiagonal systems; LU and
Cholesky decomposition; domain decomposition. APPLICATIONS: An overview of
the application of parallelism to certain meteorological problems,
especially one dealing with a data assimilation scheme using the
variational adjoint method.

Lecturer: Lakshmivarahan is presently the George Lynn Cross Research
Professor at the University of Oklahoma. His research interests include
interconnection networks, parallel algorithms and their applications. He
has offered short courses/tutorials in Canada, Germany, India, Mexico,
Taiwan and USA. He obtained his PhD from the Indian Institute of Science
in 1973 and has held postdoctoral/faculty positions at IIT Madras,
Brown University, and Yale University. He is a Fellow of IEEE and ACM. He
is the author/coauthor of three books relating to parallel computing,
learning algorithms, and their applications.

Tutorial 3

Code Optimization in Modern Compilers

Krishna V. Palem, New York University
Vivek Sarkar, IBM Santa Teresa Labs.

Who Should Attend: The tutorial is relevant to systems programmers and
analysts, scientific programmers, computer designers, technical managers,
mathematics and computer science teachers, or anyone facing the prospect
of building, modifying, maintaining, writing or teaching about compilers.
At the conclusion of the tutorial, the attendees should be knowledgeable
about several modern optimization techniques used in compilers, from the
viewpoint of the compiler user as well as of the compiler designer.

Course Description: The primary goal of this tutorial is to provide an
in-depth study of state-of-the-art code optimization techniques used in
compilers for modern processors. The performance gap between optimized and
unoptimized code continues to widen as modern processors evolve. This
evolution includes hardware features such as superscalar and pipelined
functional units for exploiting instruction-level parallelism, and
sophisticated memory hierarchies for exploiting data locality. These
hardware features are aimed at yielding high performance, and are
critically dependent on the code being optimized appropriately to take
advantage of them. Rather than burdening the programmer with this
additional complexity during code development, modern compilers offer
automatic support for code optimization.

The tutorial is self-contained and begins with a detailed view of the
design of optimizing compilers. The rationale guiding the important
decisions underlying the design will be discussed. A typical optimizing
compiler contains a sequence of restructuring and code optimization
techniques. A selected collection of state-of-the-art optimizations and
their interactions will be discussed.

The optimizations covered are most relevant for RISC processors, such as
the IBM RS/6000, PowerPC, DEC Alpha, Sun Sparc, HP PA-RISC, and MIPS, and
third-generation programming languages such as Fortran and C.

The summary of topics is as follows: Structure of optimizing compilers,
Loop transformations, Instruction scheduling for pipelined and superscalar
processors, Register allocation, Overview of optimizing compiler systems
from industry.
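The flavor of the loop transformations listed above can be shown in miniature. The following Python sketch (an illustration for this text, not material from the tutorial) performs loop-invariant code motion and strength reduction by hand and checks that the rewritten loop computes the same result; an optimizing compiler applies such rewrites automatically.

```python
def naive(xs, a, b):
    """Unoptimized loop: a * b is recomputed, and i * 4 performed as a
    multiplication, on every iteration."""
    out = []
    for i in range(len(xs)):
        out.append(xs[i] * (a * b) + i * 4)
    return out

def optimized(xs, a, b):
    """Same computation after two classic transformations:
    - loop-invariant code motion hoists a * b out of the loop;
    - strength reduction replaces i * 4 with a running addition."""
    ab = a * b          # hoisted invariant
    out = []
    step = 0            # maintains i * 4 incrementally
    for x in xs:
        out.append(x * ab + step)
        step += 4
    return out

print(naive([1, 2, 3], 5, 7), optimized([1, 2, 3], 5, 7))
```

The legality argument a compiler must make (a * b and the induction expression i * 4 do not change across iterations) is exactly the kind of analysis the tutorial's first topic covers.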

Lecturers: Krishna V. Palem has been on the faculty of the Courant Institute
of Mathematical Sciences, NYU, since September 1994. Prior to this, he was
a research staff member at the IBM T. J. Watson Research Centre from 1986,
and an advanced technology consultant on compiler optimizations at
the IBM Santa Teresa Laboratory since 1993. He is an expert in the area of
compilation and optimization for superscalar and parallel machines and has
been invited to teach short courses and lecture internationally on these
topics.
Vivek Sarkar is a Senior Technical Staff Member at the IBM Santa Teresa
Laboratory, and is also manager of the Application Development Technology
Institute (ADTI). He joined IBM in 1987, after obtaining a Ph.D. from
Stanford. His research interests are in the areas of program
optimizations, loop transformations, partitioning, scheduling,
multiprocessor parallelism, cache locality, instruction parallelism, and
register allocation. He is the author of several papers in the areas of
program optimization and compiling for parallelism, as well as the book
titled Partitioning and Scheduling Parallel Programs for Multiprocessor Execution.

Tutorial 4

Concurrent Computing with PVM and MPI

Vaidy Sunderam, Emory University
D. N. Jayasimha, Ohio State University

Who Should Attend: The tutorial is intended for scientists and engineers
(including graduate students) who wish to learn message passing
programming and PVM to parallelize their applications. The tutorial
assumes knowledge of FORTRAN or C, some programming experience, and
rudimentary knowledge of Unix.

Course Description: Concurrent computing, based on explicit parallelism and
the message passing paradigm, is emerging as the methodology of choice for
high performance computing. PVM (Parallel Virtual Machine) is a software
system for concurrent computing on multiple heterogeneous platforms,
including clusters and networks of workstations. This tutorial will
describe the PVM system, and explain aspects relating to (a) principles of
message-passing programming and network computing; (b) application
programming using the PVM API; (c) obtaining, installing and operating the
PVM software; (d) advanced topics including efficiency, profiling, and
other tools. In addition to PVM, the tutorial will also discuss the new MPI
standard, highlight similarities and differences between PVM and MPI, and
outline potential transition paths.
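The message-passing paradigm the tutorial teaches can be illustrated without the PVM or MPI libraries themselves. The sketch below (written for this text; the names and the use of Python threads and queues are my own, and are not PVM or MPI calls) mimics a parent task sending work to a worker task and blocking on the reply, with queues standing in for message channels:

```python
import threading
import queue

def worker(inbox, outbox):
    """A 'task' in the message-passing style: it shares no state with
    other tasks, receives a message, computes, and sends a reply."""
    data = inbox.get()        # blocking receive
    outbox.put(sum(data))     # send the result back to the parent

# Queues stand in for the message channels PVM/MPI would provide.
to_worker, from_worker = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(to_worker, from_worker))
t.start()
to_worker.put([1, 2, 3, 4])   # parent sends work
result = from_worker.get()    # parent blocks until the reply arrives
t.join()
print(result)                 # prints 10
```

In PVM the send/receive pair would go through pack-and-send routines to a spawned task on another host, and in MPI through typed point-to-point operations; the explicit send/blocking-receive structure, which is the point of the paradigm, is the same.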

Lecturers: Vaidy Sunderam is a faculty member in the Department of Math &
Computer Science at Emory University, Atlanta, USA. His research interests
are in parallel and distributed processing, with a focus on
high-performance concurrent computing in heterogeneous networked
environments. He is the original architect of the PVM system for network
based concurrent computing, and co-principal investigator of the Eclipse
research project, a second generation system for high performance
distributed supercomputing. His other research interests include
distributed and parallel I/O and data management, communications protocols,
parallel processing tools, and collaborative concurrent computing systems.
He is the recipient of several awards and research grants, has authored
numerous articles on parallel and distributed computing, and is a member
of ACM and IEEE.

D. N. Jayasimha obtained his Ph.D. from the Centre for Supercomputing
Research and Development, University of Illinois in 1988. He has been on
the faculty at The Ohio State University in Columbus, Ohio since then.
During 1993-94, he was a Visiting Senior Research Associate at the NASA
Lewis Research Centre, Cleveland, OH where he worked on parallelizing
applications using PVM and other message passing libraries. He has offered
tutorials on message passing and PVM at the Ohio Aerospace Institute and
NASA, and at the 1994 International Workshop on Parallel Processing. His
research interests are in the areas of communication and synchronization in
parallel computation, parallel architectures, and parallel applications.


Keynote Address

Hot Machines
Anant Agarwal
Massachusetts Institute of Technology

Session I-A
Distributed Structures/Systems
Chair: Satish Tripathi
University of Maryland

Dynamic object allocation for distributed object oriented databases
J. Lim, A. Hurson, The Pennsylvania State University, and L. Miller, Iowa
State University.

Distribution of dynamic data structures on a multi-processor
M. Dave and Y. Srikant, Indian Institute of Science.

Portable distributed priority queues with MPI
B. Mans, James Cook University of North Queensland.

Distributed dynamic data-structures for parallel adaptive mesh-refinement
M. Parashar and J. Browne, University of Texas, Austin.

Networks of pre-emptible reactive processes: An implementation
B. Rajan and R. Shyamasundar, Tata Institute of Fundamental Research.

Reusable single-assignment variables in a distributed shared memory system
M. Mandal, IBM Corp., D. Jayasimha, D. Panda, and P. Sadayappan, Ohio State
University.

Near-optimal global heterogeneous scheduling
R. Freund, Naval Research and Development Centre

Session I-B
Image Processing
Chair: N. Ranganathan
University of South Florida

Hough transform on reduced mesh of trees
S. Krishnamurthy and S. Iyengar, Louisiana State University.

Parallel implementation of the PBR algorithm for cone beam reconstruction
K. Rajan, L. Patnaik, and J. Ramakrishna, Indian Institute of Science.

Hough transform on a reconfigurable array of processors with wider bus
S. Lee, S. Horng, National Taiwan Institute of Technology, T. Kao,
Kuang Wu Institute of Technology and Commerce, and H. Tsai, National
Taiwan Institute of Technology.

A parallel wavelet image block-coding algorithm
A. Uhl, Research Institute for Software Technology.

High performance custom computing for image segmentation
N. Ratha and A. Jain, Michigan State University.

Data-mapping on high performance parallel architectures for 2-D wavelet
transforms: Summary of an on-going HPCC investigation
P. Radhakrishnan, N. Gupta, L. Kanal, and S. Raghavan, University of Maryland.

Session I-C
Memory Systems
Chair: Bhabani Sinha
Indian Statistical Institute

Improving the performance of sequential consistency in cache coherence
W. Hu, Academia Sinica.

Effectiveness of hardware-based and compiler-controlled snooping cache
protocol extensions
F. Dahlgren, J. Skeppstedt, and P. Stenstrom, Lund University.

Checkpointing and rollback recovery in SCI based shared memory systems
S. Kalaiselvi and V. Rajaraman, Indian Institute of Science.

Multi-version caches for multiscalar processors
M. Franklin, Clemson University.

Avoiding the use of buffers in skewed memory systems for vector processors
A. Corral and J. Llaberia, Universitat Politecnica de Catalunya.

Verification of directory based cache coherence protocols
K. Gopinath, Indian Institute of Science.

Investigating the use of cache as local memory
L. John, R. Reddy, V. Kammila, and P. Maurer, University of South Florida.

Session II-A
Chair: Eliezer Dekel
IBM - Israel

Design, implementation, and performance evaluation of a parallel
distributed file system
Ajay Kamalvanshi, S. K. Ghoshal, and R. C. Hansdah, Indian Institute of Science.

Communication strategies for out-of-core programs on distributed memory
R. Bordawekar and A. Choudhary, Syracuse University.

Fair disk schedulers for high performance computing systems
J. Haritsa and T. Pradhan, Indian Institute of Science.

Asynchronous processor I/O through optical communications and parallel
T. Guan and S. Barros, Queen Mary and Westfield College, University of London.

Analyzing performance using priority I/O instrumentation, measurement, and
analysis tool
S. Vander Leest and R. Iyer, University of Illinois at Urbana-Champaign.

I/O scheduling tradeoffs in a high performance media-on-demand server
D. Jadav, C. Srinilta, and A. Choudhary, Syracuse University.

AVDA: A disk array system for multimedia services
G. Iannizzotto, A. Puliafito, S. Riccobene, and L. Vita, Universita di Catania.

Session II-B
Applications I
Chair: Sitharama Iyengar
Louisiana State University

Parallelisable scheme for laser beam and soliton propagation in linear and
nonlinear media
H. Singh, D. Subbarao, and R. Uma, Indian Institute of Technology, Delhi.

Normal and high pressure simulations by ab initio molecular dynamics with
parallel processors
B. Jagadeesh, R. Rao, and B. Godwal, Bhabha Atomic Research Centre.

Parallelisation strategies for Monte Carlo methods: A computational
chemistry application
A. Agarwal, P. Mehra, and C. Chakraborty, Indian Institute of Technology, Delhi.

Finite element analysis of nuclear structures on BARC parallel machine
C. Madasamy, R. Singh, H. Kushwaha, S. Mahajan, and A. Kakodkar, Bhabha
Atomic Research Centre.

Finite element analysis of structures of composite materials on parallel
M. Shah and K. Ramesh, Centre for Development of Advanced Computing.

Scalable parallel algorithms for sparse linear systems
A. Gupta, G. Karypis, and V. Kumar, University of Minnesota.

Towards operational severe weather prediction using massively parallel
A. Sathye, G. Bassett, K. Droegemeier, and M. Xue, University of Oklahoma.

Session II-C
Special Purpose Architectures
Chair: Kanad Ghose
SUNY, Binghamton

Power-efficient parallel DSP architectures for speech coding
Z. Gu and R. Sudhakar, Florida Atlantic University.

A reconfigurable parallel architecture to accelerate scientific computation
J. Becker, R. Hartenstein, R. Kress, and H. Reinig, University of Kaiserslautern.

Concerting processors for domain specific high performance architectures
S. Nandy, S. Balakrishnan, Indian Institute of Science, and R. Narayanan,
Siemens Information Systems Ltd.

Parallel processing with ring connected tree
S. Basu, Banaras Hindu University, J. Dattagupta,
Indian Statistical Institute, and R. Dattagupta, Jadavpur University.

A high performance radar signal processor architecture based on
programming FIR filter modules
R. Kuloor, M. Rajakesari, P. Sundarmoorthy, and K. Rao, Electronics and
Radar Development Establishment.

A systolic algorithm and architecture for B-spline and Bezier curve
A. Karimpuzha and N. Ranganathan, University of South Florida.

A fast 64 x 64 multiplier without a complete matrix reduction on very
short switching buses
R. Lin, SUNY at Geneseo.

Session III-A
Chair: Sachin Maheshwari
IIT, Delhi

On the usage of simulators to detect inefficiency of parallel programs
caused by "bad" scheduling: the SIMPARC approach.
Y. Ben-Asher, University of Haifa and G. Haber, IBM Haifa.

Reducing communications by dynamic scheduling in parallel programs
A. Rawsthorne, J. Souloglou, A. Starr, and I. Watson, University of Manchester.

A new algorithm for dynamic scheduling of parallelizable tasks in real-time
multiprocessor systems
G. Manimaran, C. Siva Ram Murthy, and K. Ramamritham, Indian Institute of Technology,

On gang scheduling and demand paging
D. Marinescu and K. Wang, Purdue University.

Processor allocation on partitionable parallel architectures
R. Krishnamurti, Simon Fraser University and B. Narahari, George Washington

Scheduling and load sharing support for a network of heterogeneous
V. Saletore and J. Jacob, Oregon State University.

Instruction scheduling in the presence of structural hazards: An integer
programming approach to software pipelining
R. Govindarajan, E. Altman, and G. Gao, Memorial University of Newfoundland.

Session III-B
Applications II
Chair: Kirit Yajnik

A dynamic load balancing strategy for data parallel programs with large
data and its performance evaluation.
R. Mansharamani and A. Mukhopadhyay, Tata Research Design & Development Centre.

A commercial parallel CFD application on a shared memory multiprocessor
M. Venugopal, Silicon Graphics, R. Walters, and D. Slack, Aerosoft Inc.

Implementation of a parallel coupled climate model
M. Konchady, Goddard Space Flight Centre.

Parallel problems and experiences of Anupam machine
P. Dekhne, K. Rajesh, S. Mahajan, J. Kulkarni, and H. Kaura,
Bhabha Atomic Research Centre.

The efficiency of Gaussian elimination on multicomputers based on different
communication models
R. Reith and C. Ullrich, Basel University.

Performance benchmarking of LSDYNA3D for parallel vehicle impact
simulation on the Silicon Graphics Power Challenge
L. Miller, N. Bedewi, George Washington University, and R. Chu, Silicon Graphics.

Load-balancing in high performance GIS: A summary of results
S. Shekhar, University of Minnesota, S. Ravada, University of Minnesota, V. Kumar,
University of Minnesota, D. Chubb, and G. Turner, Army Research

Crash simulation with high performance computing
L. Kisielewicz, Nihon ESI, J. Clinckemaillie, J. Dubois, Engineering Systems
International, and G. Lonsdale, Engineering Systems International Gmb H.

Session III-C
Parallel Architectures
Chair: Prith Banerjee
University of Illinois

An improvement to dynamic stream handling in dataflow computers
V. Narasimhan and C. Arnold, The University of Queensland.

SINAN: An argument forwarding multithreaded architecture
S. Onder and R. Gupta, University of Pittsburgh.

The multidimensional directed cycles ensemble networks for a multithreaded
T. Yokota, H. Matsuoka, K. Okamoto, H. Hirono, A. Hori, and S. Sakai,
Tsukuba Research Centre.

PACE: A multiprocessor architecture dedicated to graph reduction
M. Waite, T. Reynolds, and F. Ieromnimon, University of Essex.

Fiber optic multiprocessor interconnections using wavelength division
multiplexing and fixed tuned components
K. Desai and K. Ghose, SUNY Binghamton

PARSIT: A parallel algorithm reconfiguration simulation tool
G. Racherla, S. Killian, L. Fife, M. Lehmann, University of Oklahoma, and R. Parekh,
Iowa State University.

Heterogeneous processing in combined architectures
Y. Fet, Siberian Division of Russian Academy of Sciences.

Transputer based parallel processing environment
S. Pande, A. Jethra, M. Ghodgaonkar, and K. Gopalakrishnan, Bhabha Atomic
Research Centre.


Keynote Address

The Stanford DASH and FLASH Projects
Anoop Gupta
Stanford University

Session IV-A
System Software
Chair: Gul Agha
University of Illinois

Enhanced graphical tool for visualization of process-to-processor mapping
S. Lor, P. Maheshwari, and H. Shen, Griffith University.

A buffering scheme for multimedia systems
S. Bhaskara and S. Oberoi, T. U. L. SEEPZ.

POM: A virtual parallel machine featuring observation mechanisms
F. Guidec and Y. Maheo, IRISA-PAMPA.

Pawan: A multi-server Unix system on Mach
D. Bairagi, P. Bikkannavar, M. Baruah, J. Das, B. Gopal, P. Rao, A. Singhai,
J. Sriharsha, and G. Barua, Indian Institute of Technology, Kanpur.

DlG: A graph-based construct for programming distributed systems
J. Cao, L. Fernando, University of Adelaide, and K. Zhang, Macquarie University.

A data parallel library for workstation networks
N. Pahuja and G. M. Shroff, Indian Institute of Technology, Delhi.

Towards real-time method invocations in distributed computing environment
V. Wolfe, J. Black, University of Rhode Island, B. Thuraisingham, and P.
Krupp, MITRE Corp.

Session IV-B
Algorithms I
Chair: Sajal Das
University of N. Texas

An efficient sorting algorithm on mesh-connected computers with multiple
S. Tsai, S. Horng, National Taiwan Institute of Technology, T.
Kao, Kuang Wu Institute of Technology and Commerce, S. Lee, and H. Tsai,
National Taiwan Institute of Technology.

An algorithm for parallel data compression
P. Subrahmanya and T. Berger, Cornell University.

Fast parallel algorithms for finding extremal sets
H. Shen, Griffith University.

Parallel sorting algorithms for declustered data
E. Schikuta, University of Vienna.

A parallel algorithm for integrated floorplanning and global routing
S. Gao and K. Thulasiraman, Concordia University.

A parallel approximation scheme to construct the highest degree subgraph
for dense graphs
A. Andreev, University of Moscow, A. Clementi, University
of Geneva, and J. Rolim, University of Geneva.

Parallel implementations of evolutionary strategies
G. Greenwood, S. Ahire, A. Gupta, and R. Munnangi, Western Michigan University.

Session IV-C
Performance Analysis/Modeling
Chair: R. K. Ghosh
IIT, Kanpur

Performance of optically-interconnected multi-processors
A. Ferreira and N. Qadri, Ecole Normale Superieure de Lyon.

Design and performance analysis of a multiple-threaded multiple-pipelined
Y. Li and W. Chu, The University of Aizu.

The ALPSTONE project: An overview of a performance modeling environment
Walter Kuhn and Helmar Burkhart, University of Basel

The impact of interactive load on performance of distributed parallel programs
S. Setia and L. Nicklas, George Mason University.

Analysis of a realistic bulk service system
C. Wang, K. Trivedi, Duke University, A. Rindos, S. Woolet, and L. Groner, IBM Corp.

Effectiveness of software trace recovery techniques for current parallel
T. Scheetz, T. Braun, T. Casavant, J. Gannon, and M. Andersland, University of Iowa.

Request resubmission in a buffered interconnection system
P. Dietrich and R. Rao, University of California, San Diego.

Industrial Track:
Invited Vendor Presentations
Industrial Track Chair:
C. V. Subramaniam
C-DAC, Delhi

Industrial Track Session I

Session V-A
Compiler Techniques
Chair: Arvind

GAtest: An exact dependence test
S. Singhai, R. Ghosh, and S. Kumar, Indian Institute of Technology, Kanpur.

Automatic generation of parallel programs from POOSL
S. Choi, H. Kwon, and K. Yoo, ChungNam National University.

Code generation by coagulation for high performance computers
M. Karr, Software Options, Inc.

Integration of program comprehension techniques into the Vienna FORTRAN
compilation system
B. Di Martino, University "Federico II" of Naples
(Italy), B. Chapman, University of Vienna, G. Iannello, University
"Federico II" of Naples, and H. Zima, University of Vienna.

Scheduling non-uniform parallel loops on highly parallel multicomputers
S. Orlando, Venice University and R. Perego, CNUCE-CNR.

INTREPID: An environment for debugging concurrent object-oriented programs
B. Ramkumar, University of Iowa.

Session V-B
Vision and Graphics
Chair: Lalit Patnaik
Indian Institute of Science

A texture mapped interactive volume visualization and walk-through on
shared-memory multiprocessor systems
G. Madhusudan and Edmond, Indian Institute of Science.

Parallel algorithms for symmetry detection
R. Parthiban, Indian Institute of Technology, Delhi, R. Kakarala, J. Sivaswamy,
University of Auckland, and C. Ravikumar, Indian Institute of Technology, Delhi.

A MIMD algorithm for constant curvature feature extraction using curvature
based data partitioning
S. Chaudhary and A. Roy, Indian Institute of Technology, Delhi.

Interactive volume visualization of vector data on the ONYX graphics
Aaj and Prakash, Indian Institute of Science.

Dynamic load balancing for raytraced volume rendering on distributed
memory machines
S. Goil and S. Ranka, Syracuse University.

Parallel processing for graphics rendering on distributed memory
Tong-Yee Lee and C. Raghavendra, Washington State University.

Heterogeneous parallel implementation of a neural model for pattern
P. Olmos-Gallo and M. Misra, Colorado School of Mines.

Session V-C
Interconnection Networks
Chair: R. K. Shyamsundar
Tata Inst. of Fundamental Research

Balanced spanning trees in complete and incomplete star graphs
T. Chen, National Central University, Y. Tseng, Chung-Hua Polytechnic Institute,
and J. Sheu, National Central University.

Embedding torus on the star graph
D. Saikia, R. Badrinath, and R. Sen, Indian Institute of Technology, Kharagpur.

Fractal graphs: A new class of self-similar network topologies
M. Ghosh, D. Das, B. Bhattacharya, and B. Sinha, Indian Statistical Institute.

Folded supercubes
H. Lien and S. Yuan, National Chiao Tung University.

An intelligent communication network for high performance computing
H. Vonder Muehll, B. Tiemann, P. Kohler and A. Gunzinger, ETH Zurich.

The communication system of the Proteus parallel computer
A. Sansano and A. Somani, University of Washington.

A general purpose interconnection network interface for the PCI bus
S. Hangal, L. Bhasin, A. Ranade, and P. Shekokar, Centre for Development of
Advanced Computing.

Industrial Track Session II


Future Directions in High Performance Computing

Panel Coordinator: Vipin Kumar; University of Minnesota

Panelists: Gul Agha, University of Illinois, Urbana Champaign
Arvind, Massachusetts Institute of Technology
Vijay Bhatkar; Centre for Development of Advanced Computing
Tom Casavant, University of Iowa
Jagdish Chandra, Army Research Office, USA
Pankaj Jalote, Indian Institute of Technology Kanpur
David K. Kahaner, Asian Technology Information Program Japan
Lalit Patnaik, Indian Institute of Science
David Probst, Concordia University
Kirit Yajnik, CSIR Centre for Mathematical Modeling and Computer Simulation

The past decade has seen extensive activity in the field of parallel
computing. Many national and international efforts have been funded on a
large scale with the promise that the tera-flop (and potentially peta-flop)
computing power offered by parallel computers will open new frontiers in
applications of computers, and will fundamentally change the methods for
design and manufacturing used in a variety of industries. Most notably,
the HPCC program in the USA, and related programs in Europe and many Asian
countries were launched to accelerate research in parallel computing. How
far have we come in these 10 years in delivering on the promises? Why is
parallel computing still done primarily in research labs, remaining at the
fringe of computing? What are the major hurdles to be
crossed before parallel computing will make a major impact on
the computing scene? How does the imminent availability of national and
international information superhighways change the role parallel computing
will play in the next decade?

This panel will address the extent to which HPCC has delivered on its
promises, and the advances needed to face the computing challenges of the
next decade and beyond.


Keynote Address

The Parallel Software Crisis - "the Problem" or Just a Symptom?
Uzi Vishkin
University of Maryland and Tel Aviv University

Session VI-A
Parallel Programming
Chair: Tom Casavant
University of Iowa

Towards a scalably efficient implementation of an implicitly parallel mixed
R. Jagannathan, SRI International.

PROMOTER: A high-level, object-parallel programming language
W. Giloi, M. Kessler, and A. Schramm, RWCP Massively Parallel Systems GMD Laboratory

A graphical approach to performance-oriented development of parallel
G. Ribeiro-Justo, University of Westminster.

On the design of parallel functional intermediate languages
A. Ben-Dyke and T. Axford, University of Birmingham.

A framework for specifying concurrent C++ programs
D. Roegel, CRIN-CNRS and INRIA-Lorraine.

Language based parallel program interaction: The breezy approach
D. Brown, A. Malony, and B. Mohr, University of Oregon.

A parallel programming model integrating event parallelism with task,
control, and data parallelism
A. Radiya, Wichita State University, V. Dixit-Radiya, Ohio State University,
and N. Radia, IBM Corp.

Session VI-B
Algorithms II
Chair: C. Pandu Rangan
IIT, Madras

A fast parallel algorithm for polynomial interpolation using Lagrange's
S. Gupta, D. Das, and B. Sinha, Indian Statistical Institute.

Efficient sorting on the multi-mesh topology
M. De, D. Das, M. Ghosh, and B. Sinha, Indian Statistical Institute.

Hypergraph model for mapping repeated sparse matrix vector product
computations onto multicomputers
U. Catalyurek and C. Aykanat, Bilkent University.

Mapping for parallel sparse Cholesky factorization
K. Eswar, C. Huang, and P. Sadayappan, Ohio State University.

Efficient FFT computation using star-connected processing elements
H. Kim, Kangwon National University, and J. Jang, Sogang University.

Parallel CSD-coding and its generalization
S. Das, University of North Texas and M. Pinotti, Instituto di Elaborazione della

Givens and Householder reductions for linear least squares on a cluster of
O. Egecioglu and A. Srinivasan, University of California, Santa Barbara.

Session VI-C
Chair: Alex Pothen
Old Dominion University

Routing algorithms for torus networks
J. Upadhyay, V. Varavithya, and P. Mohapatra, Iowa State University.

On a general method for performing permutation in interconnection networks
W. Lai, National Sun Yat-Sen University and G. Adams, Purdue University.

Fast routing on weak hypercubes
B. Juurlink, Leiden University, and J. Sibeyn, Max-Planck Institut.

Shortest path routing in supertoroids
F. Wu, S. Lakshmivarahan, and S. Dhall, University of Oklahoma.

Multipath routing in high-performance networks
G. Singh and K. Srinivasan, Kansas State University.

An efficient fault tolerant routing scheme for two-dimensional meshes
V. Varavithya, J. Upadhyay, and P. Mohapatra, Iowa State University.

Routing and scheduling I/O transfer on wormhole-routed mesh networks
B. Narahari, George Washington University, S. Shende, University of
Nebraska, R. Simha, College of William and Mary, and S. Subramanya,
George Washington University.

Tutorial 5

Massively Parallel Architectures: Past, Present, and Future

Thomas L. Casavant, University of Iowa

Who Should Attend: This tutorial is oriented toward academics,
scientists, engineers, and users of parallel computers and
high-performance computing systems in general. A basic understanding
of von Neumann architecture, interconnection networks, languages, and
operating systems is helpful but not absolutely necessary.

Course Description: This tutorial surveys the major contributions to
the field of parallel computer systems architecture to date. It
presents a coherent account of major trends in the integrated
development of hardware, languages, and software for parallel
computing systems. Emphasis will be placed, however, on hardware
structures for parallel system architectures.
O.S. and language issues are also addressed, but primarily in the
context of parallel systems that introduced them simultaneously with
the hardware. The tutorial is related to a book recently prepared for
the IEEE Computer Society Press entitled "Parallel Computing Systems:
Theory and Practice." The audience for this tutorial includes both
academia and industry. The tutorial is organized as follows: (1)
Context and Background. (2) Broad Survey of Parallel Systems. Brief
summaries of the most important 20 machines over the last 3 decades
from the Illiac through the Tera system are given. Machines covered
include Illiac IV, MPP, HEP, PASM, TRAC, NYU Ultra, Cosmic Cube, CM-2,
Intel iPSC/1, nCUBE-1, BBN Butterfly, Sequent, Jack Dennis' Dataflow
work, Monsoon, Multiflow, Alliant, Cedar, nCUBE-2, J-Machine, Alewife,
DASH, Tera, DDM, MasPar, Meiko CS-2, Convex SPP, Cray T3D. (3)
Extended Case Studies. In-depth looks at a smaller set of machines
including: iPSC/2 and iPSC/860, Thinking Machines CM-5, KSR1 and KSR2,
Intel Paragon, Seamless. (4) Trends in Software Environments. (5)
General Trends, Directions, and Conclusions.

Lecturer: Thomas Lee Casavant received the B.S. degree in Computer
Science with high distinction in 1982 from the University of Iowa,
Iowa City, Iowa. He received the M.S. degree in Electrical and
Computer Engineering in 1983, and the Ph.D. degree in Electrical and
Computer Engineering from the University of Iowa in 1986. In 1986, Dr.
Casavant joined the School of Electrical Engineering at Purdue
University, West Lafayette, Indiana as an Assistant Professor
specializing in the design and analysis of parallel/ distributed
computing systems, environments, algorithms, and programs. From June
1987 to June 1988, he was Director of the PASM Parallel Processing
Project. From July 1988 until July 1989 Dr. Casavant was Director of
the Purdue EE School's Parallel Processing Laboratory. In August
1989, he joined the faculty of the Department of Electrical and
Computer Engineering at the University of Iowa as an Assistant
Professor and was promoted to Associate Professor in 1992. There, he
is director of the Parallel Processing Laboratory. Dr. Casavant has
published over 50 technical papers on parallel and distributed
computing and has presented his work at tutorials, invited lectures,
and conferences in the United States, Asia, Russia, and Europe. He
has been invited to lecture on topics spanning the fields of computer
systems and parallel processing in general. Dr. Casavant is a Senior
Member of the Institute of Electrical and Electronics Engineers (IEEE)
professional society and a member of the Association for Computing
Machinery (ACM) professional society. He is serving on the editorial
board of IEEE Transactions on Parallel and Distributed Systems (TPDS)
and is an associate editor for the Journal of Parallel and Distributed
Computing (JPDC).

Tutorial 6

Graph Partitioning for Parallel Computing

Alex Pothen, Old Dominion University

Who Should Attend: This tutorial will be most useful for computer
scientists, computational scientists and engineers, and professional
applications programmers who need to solve computational problems governed
by sparse, irregular graphs on parallel computers. In addition, it will
also be interesting to those who design algorithms and develop software for
task or data partitioning in parallel computing.

Course Description: Context: Computational problems governed by graphs and
finite element meshes. The load-balancing problem, identifying parallelism,
task and data partitioning, mapping tasks/data to processors; Criteria for good
partitions: load balance, various measures of communication costs, aspect
ratios of subgraphs, front-widths; Partitioning methods: geometric
algorithms (inertial methods, provably good algorithms for finite element
problems), local-search methods (Kernighan-Lin, simulated annealing,
genetic algorithms), spectral algorithms (Laplacian matrix, the min-cut
problem, quadratic assignment formulation, lower bounds, theoretical
justification), multilevel algorithms; Software tools for partitioning;
Applications: the load-balancing problem, run-time environments and
identification of parallelism, computational fluid dynamics (Euler
equations), nested dissection orderings, domain decomposition.
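As a rough illustration of the spectral approach covered above (a minimal
sketch, not taken from the tutorial materials; the function name and the
example graph are invented for this sketch), one can bisect a graph by
forming the Laplacian L = D - A, taking the eigenvector of the
second-smallest eigenvalue (the Fiedler vector), and splitting the vertices
at its median entry:

```python
# Minimal spectral bisection sketch (illustrative only).
import numpy as np

def spectral_bisect(adj):
    """Split an undirected graph into two halves using the Fiedler
    vector of the combinatorial Laplacian L = D - A."""
    A = np.asarray(adj, dtype=float)
    D = np.diag(A.sum(axis=1))      # degree matrix
    L = D - A                       # graph Laplacian
    # eigh returns eigenvalues in ascending order for symmetric input.
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]            # vector for second-smallest eigenvalue
    # Splitting at the median balances the two parts.
    return fiedler <= np.median(fiedler)

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3;
# a good bisection cuts only that edge.
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
part = spectral_bisect(adj)
print(part)
```

On this example the sign pattern of the Fiedler vector separates the two
triangles, which is the min-cut intuition behind the spectral methods
listed in the course description.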

Lecturer: Alex Pothen is an associate professor of Computer Science at Old
Dominion University, and at the Institute for Computer Applications in
Science and Engineering (ICASE) at NASA Langley Research Center. His
research interests are in parallel computing, combinatorial algorithms,
and computational linear algebra. He has previously held appointments at
the Computer Science departments at the University of Waterloo and at the
Pennsylvania State University. His work on spectral algorithms for
partitioning graphs and for various vertex ordering problems on graphs has
led to their widespread use in many application areas and to the discovery
of new algorithms. His research is funded by the National Science
Foundation, the U.S. Department of Energy, NASA, and IBM.

Local Sightseeing Tour



The meeting will be held at the Habitat World at the India Habitat
Centre (Tel: +91-11-469-1920). It is located on Lodhi Road at Max
Muller Marg and is adjacent to Bal Bharati Air Force School.

About Delhi:
New Delhi, the venue of HiPC '95, is India's capital. It is in
north-central India on the banks of the Yamuna river, the chief
tributary of the Ganges. Delhi, comprising New Delhi and Old
Delhi, is a bustling metropolis of nine million people. Delhi has
been the capital to a succession of mighty empires, including the
Moghuls. It is home to a number of historical monuments such as Qutab
Minar, Red Fort, and Jantar-Mantar. Other places of interest in Delhi
are the Parliament Building, and shopping centres such as Connaught
Place and Chandni Chowk. Agra, site of the Taj Mahal, and the
historical ruins of Fatehpur Sikri are close and can be visited on a
day long conducted tour from Delhi.

Further information on Delhi can be found on the Web. There is also a
newsgroup, soc.culture.indian.delhi, which can be a source of further
information.

Visa and Passport:
All participants who are not citizens of India must obtain a valid
visa from an Indian Consulate or High Commission. The procedure may
take some time; consult your travel agent in advance.

Currency:
The currency is the Indian Rupee. The conversion rate at the time of
this publication is 1 US $ to Rs. 31.40. Credit cards are accepted in
most luxury hotels but not in most commercial establishments. The
Reserve Bank of India may have certain restrictions on converting
Rupees to other currencies. For details, check with an Indian Consulate
or your travel consultant.

Time and Weather:
Indian Standard Time (IST) is 5 1/2 hours ahead of Greenwich Mean
Time (GMT) and 13 1/2 hours ahead of U.S. Pacific Standard Time (PST).
In late December, daytime temperatures are comfortable (70 degrees F /
21 degrees C) while nighttime temperatures are chilly (45 degrees F /
7 degrees C). Jackets/sweaters are recommended.

Almost all major international carriers fly to New Delhi. It is
advisable to make travel plans (including airline and hotel
reservations) as early as possible, since travel to and from India is
very heavy during December and January. The meeting does not endorse
any travel agency; however, to assist international travelers with
late airline reservations, a block of seats has been reserved. You may
contact Globalink Travels in the Los Angeles area at +1 818-972-9525
for details.

The Le Meridien New Delhi is offering a special rate of US $152 (per
night, single) and US $159 (per night, double) for meeting
participants. The rate includes all taxes. The hotel is less than 20
minutes by taxi from the Habitat World and has a 24-hour business
center with a comprehensive range of facilities. The airport is
approximately 45 minutes by taxi from the hotel. The cost of the ride
to or from the airport varies; a one-way taxi ride costs around US $10
including tip.

There are a number of alternate hotels near the meeting venue.
However, the quality of service can vary dramatically. Check with
your travel consultant in advance to ensure a comfortable stay.

                                                        December 27-30, 1995
                                              Habitat World, New Delhi, India

                                                            Advance Registration Form

        Name: ____________________________________________________________________
                          Last/Family First M.I. Name on Badge




        Phone (day time):__________________________Fax:______________________

        IEEE Membership Number:__________________E-Mail:__________________________

        Dietary needs: ________ Vegetarian ________ Spicy

        . . . Conference Registration Fees:
                                  IEEE-Member   Non-Member   Student
                                  US$/Rs.       US$/Rs.      US$/Rs.
Advance Registration              175/5250      200/6000     175/5250
(until November 25, 1995)
On-site Registration              200/6000      225/6750     200/6000

The registration fee includes a copy of the hardcover proceedings,
lunches, and refreshments on December 28, 29, and 30, Conference
Banquet and a local sightseeing tour on December 30. Conference
registration fee does not include participation in the tutorials.
Tutorials are open to conference registrants only.

        . . . Tutorial Registration Fees (per tutorial):
                                  IEEE-Member   Non-Member   Student
                                  US$/Rs.       US$/Rs.      US$/Rs.
Advance Registration              25/750        30/1000      25/750
(until November 25, 1995)
On-site Registration              30/1000       40/1250      30/1000

The tutorial registration fee includes participation in the
tutorial, a copy of the tutorial notes and refreshments.

Tutorial 1 ___ Tutorial 2 ___ Tutorial 3 ___
Tutorial 4 ___ Tutorial 5 ___ Tutorial 6 ___

          Conference Registration fee ____________________
          Tutorial Registration fee ____________________
          Total Amount Enclosed:___________________________

Payment must be enclosed. Please make cheques payable to
International Conference on High Performance Computing. All cheques
MUST be either in U.S. Dollars drawn on a U.S. Bank or in Indian Rs.
drawn on an Indian bank. Participants currently residing in India may
pay in Indian Rs., all others (including NRIs) must pay in US Dollars.
Written requests for refunds must be received (by the appropriate
Finance Co-Chair) no later than Nov. 25, 1995. Refunds are subject
to a US $50 (Rs. 1500) processing fee. All no-show registrants will
be billed in full. Registration after November 26, 1995 will be
accepted on-site only. Please do not send this registration form to
the General Co-Chairs or to the Program Chair.

Please mail to:

HiPC '95, c/o Suresh Chalasani          HiPC '95, c/o A. K. P. Nambiar
ECE Dept, 1415 Johnson Dr.      or to:  C-DAC
University of Wisconsin                 2/1, Brunton Road
Madison, WI 53706-1691, USA             Bangalore 560025, India
Email: suresh@ece.wisc.edu              Email: nambiar@cdacb.ernet.in

Participants currently residing in India are requested to send their
completed registration form to Mr. Nambiar, all others are requested
to send it to Professor Chalasani. Scholarships to a) full time
students and b) faculty at academic institutions in India and to
researchers at Indian government establishments in India may be
available from agencies within India. For details contact Mr. A. K. P.
Nambiar, C-DAC, Bangalore (email: nambiar@cdacb.ernet.in). These
scholarships are not available to participants from non-Indian
institutions.


Please reserve accommodations for:

Name: ___________________________________________________________________

Mailing Address: ________________________________________________________


Home/business phone: ____________________________________________________

Arrival Date: ________________ Single _____________ Double ______________

Arrival Time: ________________ Departure Time: __________________________

The HiPC group rates at the Le Meridien New Delhi are US $152 (per night,
single) and US $159 (per night, double), inclusive of taxes.

Rooms at group rates are available for December 25 - 31, 1995.
Reservation must be made by November 15, 1995. No reservations will be
made at these rates for these dates after November 15. A one night
non-refundable room deposit is required to hold a room and must be
paid with a major credit card. Cancellations made less than 72 hours
prior to arrival will be charged for the full amount of the stay.
Check-in/check-out time is 12:00 PM. Rooms will be held until 6:00 PM on
the day of arrival unless otherwise specified by the client.

Reservations should be FAXED to the Forte Hotels sales office in New
York, attention: Annette Miller, at +1-212-697-1445. Phone toll-free
in the US: +1-800-521-1733, or direct: +1-212-805-5000 ext. 235.

Name of credit card (VISA, MC, AMEX): _____________________________

Card Number: ________________________ Exp. Date: __________________

Signature of Cardholder: __________________________________________

* As a reminder, the one night room deposit is NON-REFUNDABLE.


On December 31, a day trip is being planned to visit Agra, site of the
Taj Mahal. This trip is not part of the HiPC meeting and participation
is optional. The travel arrangements will be made by Travel House, New
Delhi. The cost of the trip is US $ 50 (Rs. 1500) per person. The fee
includes transportation (by train) from Delhi to Agra and back, local
transportation in Agra (by tour bus), admission fees at the monuments,
and lunch. The fee does not include local transportation in Delhi to
and from the train station.

This announcement is being made to facilitate advance planning. To
allow adequate time to make reservations, register before November 1,
1995. For additional details, contact Professor Ajay Gupta at
gupta@cs.wmich.edu or at (616) 387-5653.

Department of Computer Science          E-mail: gupta@cs.wmich.edu
Western Michigan University             Voice:  (616) 387-5653
Kalamazoo, MI 49008, USA                Fax:    (616) 387-3999

