From: email@example.com (David Keppel)
Organization: Computer Science & Engineering, U. of Washington, Seattle
Date: Tue, 2 Jul 91 18:10:58 GMT
firstname.lastname@example.org (Steve S. Roy) writes:
>[Good optimization vs. portability: code reusability and machine
> dependence. As long as people worry about details of each machine,
> they will produce nonportable source code.]
Agreed, it is a problem. I don't have any solutions, but I would like
to outline the problem space a little:
There are a variety of `easy' cases:
* Machine-independent optimizations.
* Machine-dependent optimizations (e.g., choosing the dimensions for
blocking) where the machine-dependent parameters can be supplied
automatically by the compiler.
* Code that is optimized by careful hand-coding, but where the machine
characteristics can be provided by a machine definition file. An
example is a series of three 1..N loops replaced by three 1..cachesize
loops inside an outer loop, so that all three loops run over one
cached set of data, then the next. The cache size varies from machine
to machine but can be supplied easily at, e.g., compile time (see the
sketch after this list).
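A minimal sketch of that blocking transformation in C. The BLOCK
constant and the f1/f2/f3 passes are stand-ins for illustration, not
from the original example; in practice BLOCK would come from the
machine definition file:

    /* BLOCK is a hypothetical parameter from the machine definition
     * file, chosen so a BLOCK-element chunk of a[] fits in the data
     * cache. */
    #define BLOCK 1024

    static double f1(double x) { return x + 1.0; }  /* stand-in passes */
    static double f2(double x) { return x * 2.0; }
    static double f3(double x) { return x - 3.0; }

    /* Unblocked: three full sweeps over a[]; for large n each sweep
     * evicts the previous sweep's data from the cache. */
    void sweep(double a[], int n)
    {
        int i;
        for (i = 0; i < n; i++) a[i] = f1(a[i]);
        for (i = 0; i < n; i++) a[i] = f2(a[i]);
        for (i = 0; i < n; i++) a[i] = f3(a[i]);
    }

    /* Blocked: the same three loops run over one cache-sized chunk
     * at a time, so each chunk is fetched from memory only once. */
    void sweep_blocked(double a[], int n)
    {
        int b, i, hi;
        for (b = 0; b < n; b += BLOCK) {
            hi = (b + BLOCK < n) ? (b + BLOCK) : n;
            for (i = b; i < hi; i++) a[i] = f1(a[i]);
            for (i = b; i < hi; i++) a[i] = f2(a[i]);
            for (i = b; i < hi; i++) a[i] = f3(a[i]);
        }
    }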
As for the other 90% of the code...
* Currently, it is a problem simply to model architectures well enough
to know what information a generic machine definition should supply to
its users (the programs being compiled).
* It is a second problem to determine, e.g., what kinds of data access
patterns a given code fragment will make.
* It is a third problem to resolve the second against the first.
All three of these problems can be considered open.
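To make the first problem concrete: a generic machine definition
amounts to committing to some fixed set of exported parameters,
roughly like the sketch below. Every field here is an assumption for
illustration; whether any such fixed list can adequately describe a
real memory system is exactly the open question.

    /* Hypothetical machine description record.  Which parameters
     * belong here -- and whether any fixed list suffices -- is the
     * open modeling problem described above. */
    struct machine_desc {
        long cache_bytes;    /* total data cache size */
        int  line_bytes;     /* cache line size */
        int  assoc;          /* cache associativity */
        int  page_bytes;     /* virtual memory page size */
        int  vector_len;     /* hardware vector length; 1 if scalar */
    };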
(IMHO; opinion alert:)
Most machine-dependent optimizations that look at large code fragments
optimize data accesses. Relatively few of the `large' optimizations
look at, e.g., data values and control flow.
;-D on ( Thinking aloud in silence ) Pardo