Dead code elimination                email@example.com (Steve Boswell) (1991-10-26)
Re: Dead code elimination           firstname.lastname@example.org (1991-10-29)
Re: Dead code elimination           email@example.com (1991-11-01)
Re: Dead code elimination           firstname.lastname@example.org (1991-11-05)
Re: Dead code elimination           email@example.com (1991-11-05)
Re: Dead code elimination           firstname.lastname@example.org (1991-11-05)
Re: Dead code elimination           email@example.com (1991-11-06)
Re: Dead Code Elimination           firstname.lastname@example.org (1991-11-07)
dead code elimination               email@example.com (1991-11-26)
dead code elimination               firstname.lastname@example.org (1995-03-23)
From: email@example.com (Paul Eggert)
Organization: Twin Sun, Inc
Date: Wed, 6 Nov 1991 04:40:34 GMT
firstname.lastname@example.org (Clyde Smith-Stubbs) writes:
>I'd still like to know why a simple printf("Hello world\n"); when compiled
>with cc -n on my Sun produces a 94K program!
But it's only 86K stripped (:-). Aside from alignment and header overhead,
the easiest way to answer your question is to type `nm -s a.out'. You'll
get some interesting results, e.g. 37K devoted to powers of two and ten.
Most of the overhead is floating point conversion routines, which typically
can be accurate, fast, or small, or any two out of the three, but not all three
simultaneously. If you look carefully, you'll find other minor
inefficiencies, but the main culprit is floating point.
To put some perspective on this, I'd guess 86K of main memory now costs less
than what 86 bytes cost back when Unix was designed (in constant dollars).