Re: Interprocedural optimization and code reuse

Newsgroups: comp.compilers
From: pardo@smelt.cs.washington.edu (David Keppel)
Keywords: optimize, design
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 91-07-007
Date: Tue, 2 Jul 91 18:10:58 GMT

ssr@stokes.princeton.edu (Steve S. Roy) writes:
>[Good optimization vs. portability: code reusability and machine
> dependence. As long as people worry about details of each machine,
> they will produce nonportable source code.]


Agreed, it is a problem. I don't have any solutions, but I would like
to outline the problem space a little:


There are a variety of `easy' cases:


* Machine-independent optimizations.
* Machine-dependent optimizations (e.g., choosing dimensions for
    blocking) where the machine-dependent parameters can be supplied
    automatically by the compiler.
* Code that is optimized by careful hand-coding, but where the machine
    characteristics can be supplied by a machine definition file. An
    example is a series of three 1..N loops replaced by three
    1..cachesize loops nested inside an outer loop, so that all three
    loops run over one cache-sized block of data before moving on to
    the next block (a sketch follows this list). The cache size varies
    from machine to machine but can be supplied easily at, e.g.,
    compile time.
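
For concreteness, here is what that last case looks like in C. The
CACHE_ELEMS constant stands in for the one value a machine definition
file would supply at compile time; the constant's name and the
surrounding functions are invented for this sketch, not taken from
any particular system:

    #include <stddef.h>

    /* Stand-in for a per-machine parameter supplied at compile time,
       e.g. by a machine definition file: how many doubles fit
       comfortably in the data cache.  (The name is invented.) */
    #define CACHE_ELEMS 4096

    /* Three separate 1..N passes: each pass streams the whole array
       through the cache, evicting the data before the next pass. */
    void three_passes(double *x, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++) x[i] = 2.0 * x[i];
        for (i = 0; i < n; i++) x[i] = x[i] + 1.0;
        for (i = 0; i < n; i++) x[i] = x[i] * x[i];
    }

    /* Blocked version: the outer loop walks cache-sized blocks, and
       all three inner loops run over one block while it is resident
       before moving on to the next block. */
    void three_passes_blocked(double *x, size_t n)
    {
        size_t base, i, lim;
        for (base = 0; base < n; base += CACHE_ELEMS) {
            lim = (n - base < CACHE_ELEMS) ? n : base + CACHE_ELEMS;
            for (i = base; i < lim; i++) x[i] = 2.0 * x[i];
            for (i = base; i < lim; i++) x[i] = x[i] + 1.0;
            for (i = base; i < lim; i++) x[i] = x[i] * x[i];
        }
    }

The transformation itself is portable; only the one number is
machine-dependent, which is what makes this an `easy' case.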


As for the other 90% of the code...


* Currently, it is a problem simply to model architectures well enough
    to know what information a generic machine definition should supply
    to its users (the programs being compiled); a sketch of such a
    definition follows this list.
* It is a second problem to determine, e.g., what kinds of data access
    patterns a given code fragment will make.
* It is a third problem to resolve the second against the first: to
    decide how a fragment's access patterns interact with a particular
    machine's parameters.
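
To make the first of these concrete: a generic machine definition
might boil down to a record like the one below. Every field here is
invented for illustration; the open question is precisely which
numbers belong in such a record, and whether a flat list of numbers
can model a real memory hierarchy at all:

    /* A hypothetical machine definition record.  The fields and
       their names are assumptions made for illustration, not taken
       from any real compiler. */
    struct machine_def {
        long cache_bytes;    /* total data-cache size */
        int  line_bytes;     /* cache line size */
        int  associativity;  /* 1 = direct-mapped */
        int  page_bytes;     /* VM page size, relevant to TLB effects */
        int  fp_latency;     /* FP pipeline depth, relevant to unrolling */
    };

Deciding which of these fields a compiler can actually exploit for a
given code fragment is where the second and third problems come in.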


All three of these problems can be considered open.


(IMHO; opinion alert:)


Most machine-dependent optimizations that look at large code fragments
optimize data accesses. Relatively few of the `large' optimizations
look at, e.g., data values or control flow.


;-D on ( Thinking aloud in silence ) Pardo
--

