Stanford Compiler Optimization Seminar


From: barnhill@Hudson.Stanford.EDU (Joleen Barnhill)
Newsgroups: comp.compilers
Date: 26 Jul 1996 23:25:47 -0400
Organization: Stanford University
Keywords: courses

The Western Institute of Computer Science announces a week-long course on:

Compiler Optimizations: A Quantitative Approach
  August 12-16, 1996
  Krishna V. Palem and Vivek Sarkar

The goal of this course is to use a quantitative framework as a basis for an
in-depth study of state-of-the-art code optimization techniques used in
compilers for modern processors. The performance gap between optimized and
unoptimized code continues to widen as modern processors evolve. This
evolution includes hardware features such as superscalar and pipelined
functional units, improved lookahead across branches, one or two levels of
cache, and larger numbers of registers. These hardware features are aimed at
yielding high performance, and are critically dependent on the code being
optimized appropriately to take advantage of them. Rather than burdening the
programmer with this additional complexity during code development, modern
compilers offer automatic support for code optimization.

The course is self-contained and provides a detailed view of the design of
optimizing compilers, including discussion of the rationale behind important
design decisions. A typical optimizing compiler contains a sequence of
restructuring and code optimization techniques, which are covered in detail;
issues related to interactions among individual optimizations are also
discussed. Optimizations and their accompanying performance improvements are
highlighted through key examples and case studies.

The course also includes a discussion of the impact of the source languages and
the target architectures on the effectiveness of optimizations. The
optimizations covered are most relevant for RISC processors, such as the DEC
Alpha, HP PA-RISC, IBM RS/6000, PowerPC, SGI-MIPS, and SUN Sparc, and
third-generation programming languages such as C and Fortran. Some recent
optimizations for object-oriented languages such as C++ are also included.

The course is relevant to systems programmers and analysts, scientific
programmers, computer designers, technical managers, mathematics and computer
science teachers, or anyone facing the prospect of building, modifying,
maintaining, writing or teaching about compilers. At the conclusion of the
course, the students should be knowledgeable about optimization techniques
used in modern compilers, from the viewpoint of the compiler user as well as
of the compiler designer.

Text: Compilers: Principles, Techniques, and Tools, Aho et al., and lecture
notes.

Tuition: $1575; early-bird discount for registration 14 days in advance: $1450.

Course Outline

1. Structure of optimizing compilers: front-end, intermediate languages,
optimization phases, code generation, linker, runtime libraries.

2. Quantitative principles of code optimization: execution frequencies and
profiling, completion time of schedules, initiation interval of pipelined
loops, spill costs and register pressure, amortized memory access costs,
run-time measures.

3. Intermediate language design: dictionary, quadruples, high-level and
low-level intermediate languages, industry examples.

4. Control flow graphs: structured vs. unstructured, acyclic vs. cyclic,
reducible vs. irreducible, dominators, postdominators.
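As a flavor of this unit, dominators can be computed with the classic
iterative data-flow formulation. The tiny CFG below is made up for
illustration (a Python sketch, not course material):

```python
# Hypothetical CFG: node -> list of successors; entry node is "A".
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
nodes = list(succ)
pred = {n: [m for m in nodes if n in succ[m]] for n in nodes}

# dom(entry) = {entry}; dom(n) = {n} union the intersection of dom(p)
# over all predecessors p; iterate to a fixed point.
dom = {n: set(nodes) for n in nodes}
dom["A"] = {"A"}
changed = True
while changed:
    changed = False
    for n in nodes:
        if n == "A":
            continue
        new = {n} | set.intersection(*(dom[p] for p in pred[n]))
        if new != dom[n]:
            dom[n], changed = new, True

print(sorted(dom["D"]))  # D is dominated only by the entry A and itself
```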

5. Data and control dependence: program dependence graphs,
def-use/use-def/def-def chains, static single assignment (SSA) form.

6. Instruction scheduling for pipelined and superscalar processors: basic
blocks and list scheduling, priority and rank functions, global scheduling,
speculative scheduling, software pipelining.
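List scheduling, the workhorse for basic blocks, can be sketched in a few
lines; the instruction names, latencies, and 2-way issue width below are
assumptions for illustration only:

```python
# A minimal list-scheduling sketch for one basic block.
latency = {"load1": 2, "load2": 2, "add": 1, "store": 1}
deps = {"load1": [], "load2": [], "add": ["load1", "load2"], "store": ["add"]}

issue_width = 2      # pretend target: a 2-way superscalar machine
ready_at = {}        # instruction -> cycle its result becomes available
schedule = []        # (issue cycle, instruction)
pending = list(latency)
cycle = 0
while pending:
    # Instructions whose operands are all available at this cycle.
    ready = [i for i in pending
             if all(ready_at.get(d, float("inf")) <= cycle for d in deps[i])]
    for i in ready[:issue_width]:
        schedule.append((cycle, i))
        ready_at[i] = cycle + latency[i]
        pending.remove(i)
    cycle += 1

print(schedule)   # completion time of the schedule = max(ready_at.values())
```

The completion time printed here is exactly the "completion time of
schedules" cost measure from the quantitative framework in unit 2.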

7. Register allocation: live ranges, interference graphs, graph coloring,
local and global allocation, register spills.
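Chaitin-style coloring can be sketched as simplify-then-select; the live
ranges and the assumption of K=3 registers below are made up, and spill
handling is omitted:

```python
# interference[x] = live ranges simultaneously live with x (hypothetical).
interference = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
K = 3  # number of available registers (an assumption for this sketch)

# Simplify: repeatedly remove a node of degree < K and push it on a stack.
graph = {n: set(adj) for n, adj in interference.items()}
stack = []
while graph:
    n = next(x for x in graph if len(graph[x]) < K)  # spill case omitted
    stack.append(n)
    for m in graph[n]:
        graph[m].discard(n)
    del graph[n]

# Select: pop nodes, giving each the lowest color unused by its neighbors.
color = {}
while stack:
    n = stack.pop()
    taken = {color[m] for m in interference[n] if m in color}
    color[n] = min(c for c in range(K) if c not in taken)

print(color)   # adjacent live ranges always get distinct registers
```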

8. Global data flow analysis: problem formulation, data flow equations,
forward vs. backward data flow analysis problems, constant propagation, value
numbering.
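Constant propagation over straight-line code can be sketched as a single
forward pass; the toy quadruple IR below (op, dst, src1, src2) is an
assumption made up for this illustration:

```python
code = [
    ("li",  "x", 4,   None),   # x = 4
    ("li",  "y", 6,   None),   # y = 6
    ("add", "z", "x", "y"),    # z = x + y
    ("add", "w", "z", "p"),    # w = z + p  (p unknown at compile time)
]

consts = {}   # variable -> known constant value
folded = []
for op, dst, s1, s2 in code:
    # Replace operands whose values are already known constants.
    v1 = consts.get(s1, s1)
    v2 = consts.get(s2, s2)
    if op == "li":
        consts[dst] = v1
        folded.append((op, dst, v1, None))
    elif op == "add" and isinstance(v1, int) and isinstance(v2, int):
        consts[dst] = v1 + v2          # fold the addition at compile time
        folded.append(("li", dst, v1 + v2, None))
    else:
        consts.pop(dst, None)          # dst no longer a known constant
        folded.append((op, dst, v1, v2))

print(folded)  # z becomes the constant 10; w stays a runtime add
```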

9. Array dependence analysis: data dependence tests, direction vectors,
distance vectors.
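The simplest of the data dependence tests, the GCD test, fits in one
function; the subscript coefficients below are illustrative:

```python
from math import gcd

# GCD test (a sketch): accesses A[a*i + b] and A[c*i + d] can touch the
# same element only if gcd(a, c) divides (d - b).
def gcd_test_may_depend(a, b, c, d):
    """May A[a*i+b] and A[c*i+d] refer to the same element for integer i?"""
    return (d - b) % gcd(a, c) == 0

# A[2*i] vs A[2*i + 1]: even vs. odd subscripts can never collide.
print(gcd_test_may_depend(2, 0, 2, 1))   # False: provably independent
# A[2*i] vs A[2*i + 4]: gcd 2 divides 4, so a dependence may exist.
print(gcd_test_may_depend(2, 0, 2, 4))   # True: dependence possible
```

Like all dependence tests, this is conservative: "True" means only that a
dependence cannot be ruled out.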

10. Loop transformations: loop distribution, fusion, interchange, unrolling.
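Loop interchange, for instance, changes a nest's memory traversal order
without changing its result; the nest below is an illustrative Python
sketch of the legality check and the locality payoff:

```python
# A small row-major 2-D array (made up for this example).
N = 4
A = [[i * N + j for j in range(N)] for i in range(N)]

# Column-major traversal: poor locality for a row-major layout.
s1 = 0
for j in range(N):
    for i in range(N):
        s1 += A[i][j]

# After loop interchange: row-major traversal, which caches prefer.
s2 = 0
for i in range(N):
    for j in range(N):
        s2 += A[i][j]

assert s1 == s2  # interchange is legal here: no loop-carried dependences
print(s1)
```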

11. Optimization for memory hierarchies: overview of memory hierarchies,
cost functions for estimating data cache/TLB utilization, transformations for
improved data locality, effect of set conflicts (collision misses), code
restructuring for improved instruction locality.

12. Interprocedural analysis and optimization: interprocedural data flow
analysis, constant propagation, inlining, cloning.

13. Symbolic debugging of optimized code: optimization levels vs. debug
levels, breakpoints and safepoints, debug tables, industry example.

14. C++ performance characteristics and optimizations: dynamic performance
characteristics of C++ programs, examples of C++ optimizations.


DR. KRISHNA V. PALEM has been an Associate Professor of Computer Science in
the Courant Institute of Mathematical Sciences, NYU, since September 1994.
Prior to this, he was a research staff member at the IBM T. J. Watson
Research Center, and an advanced technology consultant on compiler
optimizations at the IBM Santa Teresa Laboratory working on parallel and
optimizing compiler technologies. He is an expert in the area of compilation
and optimization for superscalar and parallel machines. He has been invited
to teach short courses and lecture internationally on these subjects, and
has edited and contributed to journals and books in these areas. His current
research interests revolve around using profiling information to guide global
optimizations in general, with emphasis on scheduling. At NYU, he also leads
the CoRReT project, aimed at developing programming tools and compiler
optimizations for rapid prototyping of real-time programs. At IBM, he worked
on developing a quantitative framework for characterizing optimizations in
product-quality compilers for superscalar RISC machines.

DR. VIVEK SARKAR is a Senior Technical Staff Member in the IBM Software
Solutions Division, Manager of the Application Development Technology
Institute (ADTI), and a member of the IBM Academy of Technology. He is the
author of several papers in the areas of program optimization and compiling
for parallelism, as well as the book titled Partitioning and Scheduling
Parallel Programs for Multiprocessor Execution. At IBM, he worked on the
PTRAN research project from 1987 to 1990, on new technologies for automatic
parallelization and program optimization. From 1991 to 1993, he led a
development project to build a new compiler product component with a
high-level program transformation system for optimizing locality and
parallelism in uniprocessor and multiprocessor systems. Since becoming
Manager of ADTI in 1994, he has supervised and worked on various AD
technology projects at IBM, including automatic shared-memory
parallelization, retargetable optimizing back-ends, and C++ optimizations.

Compiler Optimizations: A Quantitative Approach
August 12-16, 1996

Registration on or before July 29
[ ] Compiler Optimizations: A Quantitative Approach $1,450
Registration after July 29
[ ] Compiler Optimizations: A Quantitative Approach $1,575









Work Phone (________)___________________

Home Phone (________)___________________

Electronic Mail address __________________________

  on network _____________________

Total amount enclosed: $___________

Method of payment
[ ] Check enclosed (payable to WICS)

[ ] Visa/Mastercard #________________________________
card exp. date__________

cardholder signature___________________________________________________

[ ] Bill my company. Purchase Order #__________________________
                Write billing address below.

Return registration form with payment to:
Western Institute of Computer Science
P.O. Box 1238
Magalia, CA 95954-1238
