From: "Norman Black" <stonybrk@ix.netcom.com>
Newsgroups: comp.compilers
Date: 10 Mar 2001 15:53:19 -0500
Organization: Stony Brook Software
References: 01-03-012 01-03-022 01-03-034 01-03-050
Keywords: code, optimize
Posted-Date: 10 Mar 2001 15:53:19 EST
Compilation times are not really increased, at least in our
compilers; not by any amount you can measure by hand with a
stopwatch. Yes, the files are bigger, but so what? The increase is
not very significant.
Link times do increase, however. A fast, efficient linker easily
offsets this, though few of those exist. Some of the old DOS linkers
such as Optlink and Blinker were insanely fast; years ago we
benchmarked ourselves against them, but those were the good old
16-bit days. Our linker is highly tuned, since you link every time
you do a test build, whereas you usually compile only one or two
files per make.
For library code that is statically linked, compiling to archive
files is useful. For library code that ultimately goes into a
DLL/shared object, it is not. For applications it is not really
useful either, since why would application-specific modules have
unused code? Judging from users' comments, though, many do.
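To make the technique concrete, here is a minimal sketch of the
general idea (generic Unix cc/ar/nm, not our tools specifically, and
the file names are made up): each library function is compiled into
its own object file, the objects go into an archive, and the linker
extracts only the archive members the application actually
references, so unused functions never reach the executable.

    /* used.c -- the one library function main() calls */
    int used(void) { return 42; }

    /* unused.c -- never referenced, so never linked in */
    int unused(void) { return 7; }

    /* main.c */
    int used(void);
    int main(void) { return used(); }

    cc -c used.c unused.c          # one function per object file
    ar rcs libfuncs.a used.o unused.o
    cc main.c libfuncs.a -o app    # linker pulls only used.o from the archive
    nm app | grep unused           # no output: unused() was dropped

Static linkers have always worked at archive-member granularity;
compiling one function per object simply makes the function the unit
of dead code.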
These days I am not sure how useful compiling to archive files is,
but our development system dates back to 1987, the good old 640K DOS
days, when every byte of memory was precious. The feature adds almost
nothing to the compiler, a few hundred lines. I should note that our
compilers are fully self-contained: source goes in, object comes
out. Nothing "intermediate" is generated (meaning written to disk).
--
Norman Black
Stony Brook Software
"Fergus Henderson" <fjh@cs.mu.OZ.AU> wrote in message
> I think the main reason is that typical object file formats and
> library formats have a lot of overhead per object file. Compiling
> each function to its own object file results in much larger object and
> library files, and as a result can also increase compilation and link
> times. I implemented the same kind of thing in the Mercury compiler
> in 1996, but it remains an option, not the default, because of the
> large increase in the intermediate file sizes and in compilation /
> link time. It's useful if you're cross-compiling to some small-memory
> system, or if for some other reason you're very concerned about
> reducing code size, but otherwise it's not worth the hassle.
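(As an aside, and not something either of our systems does: with GCC
and GNU ld you can get roughly the same per-function granularity
without one object file per function, by placing each function in its
own section and letting the linker discard unreferenced sections. A
rough sketch, assuming a reasonably recent GCC and GNU binutils, with
made-up file names:

    cc -ffunction-sections -fdata-sections -c main.c lib1.c lib2.c
    cc -Wl,--gc-sections main.o lib1.o lib2.o -o app

This trades the per-object-file overhead Fergus describes for a
larger single object with many small sections.)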