From: mikey@sparcbert.mlee.ontek.com (Mike Lee)
Newsgroups: comp.compilers
Date: 20 Apr 1999 02:29:13 -0400
Organization: Ontek Corporation -- Laguna Hills, California
References: 99-04-052
Keywords: linker, performance
In comp.compilers, "Sanjay Jha" <sanjay_jha@email.msn.com> writes:
| I have a 100MB static library that I need to link to. Though my exe
| uses only a few of the classes from the library, the sheer size of the
| library makes the link time as high as 30 minutes on my NT desktop. Is
| there any way I can reduce this link time? I do have the source code
| for the library and can actually build it myself. I could build only
| the classes that I need as a separate library, but that won't work for
| everybody. There are many other developers on the project who have the
| same problem.
Why is the library 100MB? Even by bloatware standards, that's
pretty big. Some thoughts spring to mind:
* Divide and conquer. Even if the final applications all
require the entire library, splitting the library up into
sensible divisions can make the following items more
manageable. Since you've already mentioned classes,
there should be some obvious ways to rearrange the code
into multiple, smaller libraries.
* Don't link statically. Everything that's debugged and working
can go into dynamic libraries (see the export-macro sketch
after this list). What I sometimes do is have a debuggable
static library and a nondebug dynamic library so that
individual test projects can link statically to the debug
versions of only the libraries they need to debug or step into.
* Redundant debug information: I don't know anything about
your linker/compiler/IDE, but it's not unheard of for a
linker to fail to coalesce definitions of types, structs,
and enums that appear in different .c files but all
originally came from the same .h file. It's also possible
that debug info for all visible types (etc.) is being
carried through the link even for functions which do not
have access to those types. Your compiler vendor may be
able to clarify how much debug info is making it through
each stage.
* Large initialized data segments. Move out the big chunks
and put them into a resource manager or just read them
from a file, as appropriate; see the loader sketch after
this list. (This alone probably won't change very much
though; the linker basically just copies the bytes for
initialized data.)
* For C or C++, identify all the "used only in one place" functions
and make them static or private, as in the internal-linkage
sketch below. The more you reduce the scope, the more the
compiler can do stuff ahead of time so the linker won't have to.
* C++ templates and inlining can bring a linker to its knees.
Inlining can be turned off pretty easily, but overuse of
templates is basically a design fault (in my humble opinion).
Explicit instantiation can rein them in; see the last
sketch below.
* If the library is a 3rd party item, maybe check out some
of their competition for more compact library design.
* Finally, evaluate your own applications for bloat: don't
embed a browser and a word processor and a drawing program
and an email client and a web server into your application--use
IPC to communicate between smaller, functionally separate
applications.
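
A couple of sketches to make the above concrete. For the dynamic
library route, the usual MSVC pattern is an export macro so one
header serves both the DLL build and its clients (WIDGET_API and
BUILDING_WIDGET_DLL are names I made up for illustration):

    // widget.h -- shared between the DLL build and client code.
    // The DLL's own build defines BUILDING_WIDGET_DLL, so the class
    // is exported; client builds get the dllimport declaration.
    #ifdef BUILDING_WIDGET_DLL
    #define WIDGET_API __declspec(dllexport)
    #else
    #define WIDGET_API __declspec(dllimport)
    #endif

    class WIDGET_API Widget {
    public:
        Widget();
        int Checksum() const;
    };

Clients then link against the small import library rather than
dragging the full 100MB of code through every link.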
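
For the initialized-data item, here's a loader sketch (the file
name and function are invented) that reads a big table from a data
file at startup instead of compiling it into the library:

    #include <cstdio>
    #include <vector>

    // Before: a multi-megabyte table sat in the object file and the
    // linker copied its bytes on every build:
    //     static const unsigned char kBigTable[...] = { ... };

    // After: the same bytes live in a data file next to the exe and
    // are read once at startup; the linker never sees them.
    static std::vector<unsigned char> LoadTable(const char *path)
    {
        std::vector<unsigned char> table;
        std::FILE *f = std::fopen(path, "rb");
        if (!f)
            return table;             // caller checks for empty()
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        if (size > 0) {
            table.resize(size);
            std::fread(&table[0], 1, table.size(), f);
        }
        std::fclose(f);
        return table;
    }

    int main()
    {
        std::vector<unsigned char> table = LoadTable("bigtable.dat");
        std::printf("loaded %lu bytes\n", (unsigned long)table.size());
        return 0;
    }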
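
For scope reduction, anything called from only one .cpp file can be
given internal linkage, so its name never reaches the linker at all
(names invented):

    #include <cstdio>

    // Internal linkage: "static" keeps SumRange private to this
    // translation unit.  The compiler resolves calls to it right
    // here, and the linker has one less external symbol to match
    // against every other object file.
    static int SumRange(const int *data, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; ++i)
            sum += data[i];
        return sum;
    }

    // External linkage: the one symbol this file exports.
    int WidgetChecksum(const int *data, int n)
    {
        return SumRange(data, n) & 0xffff;
    }

    int main()
    {
        const int data[] = { 1, 2, 3 };
        std::printf("checksum: %d\n", WidgetChecksum(data, 3));
        return 0;
    }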
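
And for templates, explicit instantiation compiles each needed
specialization exactly once inside the library, instead of in every
client object file that the linker then has to deduplicate. A
sketch (Buffer is invented; with g++ the companion flag is
-fno-implicit-templates):

    // buffer.h -- a library template.
    template <class T>
    class Buffer {
    public:
        explicit Buffer(int n) : size_(n) {}
        int size() const { return size_; }
    private:
        int size_;
    };

    // buffer_instances.cpp -- compiled once, inside the library.
    // Explicit instantiation: the code for these specializations is
    // emitted here and nowhere else, so client objects stay small.
    template class Buffer<int>;
    template class Buffer<double>;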
| Actually, if I can find out what it is that takes more time, disk I/O
| or CPU, I can try to do something about it.
I don't suppose there is some equivalent to the unix time(1) command
available? Anyway, I'm not sure this line of investigation leads
anywhere but to purchasing faster hardware. The next release
of the library will probably be sized on the assumption of
these upgrades and you'll be back where you started.
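
That said, if you do want the number, a crude stand-in for time(1)
is only a few lines--run the link through system() and report
wall-clock seconds, which is plenty of resolution for a 30-minute
link. Watching NT's Performance Monitor during the run will tell
you whether the disk or the CPU was the busy one.

    // ltime.cpp -- crude time(1) substitute, e.g.:
    //     ltime link /OUT:app.exe main.obj big.lib
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>
    #include <string>

    int main(int argc, char **argv)
    {
        // Reassemble the command line for system().
        std::string cmd;
        for (int i = 1; i < argc; ++i) {
            if (i > 1) cmd += ' ';
            cmd += argv[i];
        }
        std::time_t start = std::time(0);
        int rc = std::system(cmd.c_str());
        std::time_t stop = std::time(0);
        std::fprintf(stderr, "elapsed: %ld seconds\n",
                     (long)(stop - start));
        return rc;
    }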
mikey