Re: optimizing

anton@mips.complang.tuwien.ac.at (Anton Ertl)
Sat, 13 Aug 2011 12:38:43 GMT

          From comp.compilers

Related articles
optimizing gah@ugcs.caltech.edu (glen herrmannsfeldt) (2011-08-12)
Re: optimizing anton@mips.complang.tuwien.ac.at (2011-08-13)
Re: optimizing haberg-news@telia.com (Hans Aberg) (2011-08-13)
Re: optimizing walter@bytecraft.com (Walter Banks) (2011-08-14)
Re: optimizing torbenm@diku.dk (2011-08-15)
Re: optimizing bumens@dingens.org (Volker Birk) (2011-08-15)
Re: optimizing haberg-news@telia.com (Hans Aberg) (2011-08-15)
Re: optimizing gah@ugcs.caltech.edu (glen herrmannsfeldt) (2011-08-15)

From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.compilers
Date: Sat, 13 Aug 2011 12:38:43 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 11-08-015
Keywords: optimize
Posted-Date: 14 Aug 2011 20:28:07 EDT



glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
>A recent post to comp.lang.fortran on optimization reminded me of
>something I thought about some time ago. Someone was wondering if any
>optimization was done at link time. In the case of Fortran, the
>answer is usually no.
>
>It seems to me, though, that in the case of RISC, and even more in the
>case of VLIW processors like Itanium, delaying the final optimization
>and code generation pass would be useful.


I would expect that most Fortran compilers can do this, not because
it's useful in practice, but because they want to shine at SPEC CPU
benchmarketing (which contains Fortran programs).


As for the usefulness in practice: Link-time optimization increases
link time and therefore slows down development.  Many projects use
dynamically linked libraries even for modules specific to the project;
one of the reasons is to avoid the slowness of static linking, and
making that even slower is out of the question (my impression is that
these are usually C++ projects, though).
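

To make concrete what the extra link-time work looks like, here is a
minimal sketch of an LTO build in the GCC style (the -flto flag is
GCC's; other compilers spell it differently):

    /* a.c */
    int inc(int x) { return x + 1; }

    /* main.c */
    int inc(int x);
    int main(void) { return inc(41); }

    /*
     * Build commands (GCC-style; the flag name is an assumption for
     * other compilers):
     *
     *   cc -O2 -flto -c a.c       object file carries IR, not final code
     *   cc -O2 -flto -c main.c
     *   cc -O2 -flto a.o main.o -o prog
     *
     * The cross-module optimization and the code generation (e.g.,
     * inlining inc() into main()) are deferred to the last command,
     * which is exactly why the link step gets slower.
     */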


A frequently encountered meme in discussions is to use fast
compilation options during development and slower compilation that
produces better code for production.  However, many compilers these
days take the liberty of producing code that behaves differently when
the optimization options change (or, as once slipped from a compiler
maintainer, of "miscompiling" programs at higher optimization levels;
normally the compiler maintainers use some euphemism instead), so most
developers soon give up on that plan.  The other reason is that the
intended production version turns out to need another change and thus
becomes another development version; or, conversely, the program is
worked on until the deadline arrives, and there was never any time to
compile slowly and test the result (and if there was, the test would
fail because of the miscompilation issue mentioned above, so the
developer would revert to the development version).
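

A minimal C sketch of "behaves differently" (my illustration, not
from the original discussion): signed integer overflow is undefined
behavior, and a compiler that exploits this at higher optimization
levels may fold the test below to true, while an unoptimized build
typically wraps around and takes the other branch.

    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        (void)argv;
        int x = INT_MAX - (argc - 1);  /* INT_MAX when run without
                                          arguments; computed at run
                                          time so plain constant
                                          propagation cannot decide
                                          the test below */
        /* Signed overflow is undefined behavior in C, so an optimizer
           is allowed to treat x + 1 > x as always true; an unoptimized
           build usually wraps to INT_MIN and takes the else branch. */
        if (x + 1 > x)
            printf("x + 1 > x held (common with -O2)\n");
        else
            printf("x + 1 > x failed (common with -O0)\n");
        return 0;
    }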


For supercomputing applications, I can imagine that the disadvantages
of link-time optimization weigh less heavily, but I imagine that the
advantages are smaller as well: such applications tend to spend a lot
of time in inner loops, so the benefits of optimizing away
function-call overhead are relatively small.
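

A rough illustration (again my sketch): in a kernel like the
following, essentially all time is spent in the loop body, so even if
link-time optimization inlines the function into callers in other
translation units, it saves one call per invocation, not per
iteration.

    /* A typical numerical kernel.  The hot loop contains no calls,
       so cross-module inlining of saxpy() itself removes only one
       call/return per invocation, which is negligible next to the
       n iterations of real work. */
    void saxpy(long n, float a, const float *x, float *y)
    {
        for (long i = 0; i < n; i++)
            y[i] += a * x[i];
    }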


>It could be done at run time, or, more likely, at program install
>time.


Both approaches have been tried: the "run-time" approach in JIT
compilers and dynamic binary translators, and the "program install
time" approach in ANDF.  JIT compilers are quite successful; ANDF
never took off.


- anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/
[ANDF had the usual UNCOL problems. See subsequent messages for more
on successful whole program optimizers. -John]


