global optimizing compilers vs. unix/make/rcs development methodology firstname.lastname@example.org (1993-08-01)
Keywords: tools, question, optimize
Organization: University of Southern California, Los Angeles, CA
Date: Sun, 1 Aug 1993 07:49:41 GMT
The current unix/make/rcs software development methodology generally
encourages a style of coding in which huge computer programs and systems
are broken up into a large number of very small source modules, each of
which can be incrementally recompiled after its source code is updated.
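A minimal sketch of the kind of make setup I mean (the file and program names here are hypothetical): each rule recompiles exactly one module, so a change to one file triggers one `cc -c` run over that file alone.

```
prog: main.o util.o
	cc -O2 -o prog main.o util.o

main.o: main.c
	cc -O2 -c main.c

util.o: util.c
	cc -O2 -c util.c
```

Note that every compile step hands the compiler a single translation unit; the pieces only meet at link time, after optimization is finished.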
Unfortunately, this procedure means that a globally optimizing compiler
can see only a small part of the program it is supposed to be globally
optimizing at any one time. So we lose the potential for inlining
(cloning) procedures/functions that are not contained in the source file
currently being compiled.
Is anyone working on updating make, or on developing compilers that are
able to do global optimization of procedures spanning huge source code
trees? How is such optimization accomplished by existing and/or planned
compilers? In particular, is anyone working on updating gcc to globally
optimize source code spread across a huge source tree?
Neuroscience Image Analysis Network
HEDCO Neuroscience Building, Fifth Floor
University of Southern California
Los Angeles, CA 90089-2520