50 times longer to compile than copy

smnsn@my-deja.com
4 Nov 2000 01:45:58 -0500

          From comp.compilers

Related articles
50 times longer to compile than copy smnsn@my-deja.com (2000-11-04)
Re: 50 times longer to compile than copy ian@jawssystems.com (2000-11-05)
Re: 50 times longer to compile than copy vii@penguinpowered.com (John Fremlin) (2000-11-05)
Re: 50 times longer to compile than copy s337240@student.uq.edu.au (Trent Waddington) (2000-11-05)
Re: 50 times longer to compile than copy chase@naturalbridge.com (David Chase) (2000-11-07)
Re: 50 times longer to compile than copy Sid-Ahmed-Ali.TOUATI@inria.fr (Sid Ahmed Ali TOUATI) (2000-11-07)
Re: 50 times longer to compile than copy ONeillCJ@logica.com (Conor O'Neill) (2000-11-09)
[4 later articles]

From: smnsn@my-deja.com
Newsgroups: comp.compilers
Date: 4 Nov 2000 01:45:58 -0500
Organization: Deja.com - Before you buy.
Keywords: performance
Posted-Date: 04 Nov 2000 01:45:58 EST

          I've built a very large C program with 2500 variables that does
nothing more than exclusive ors and assignments, which should be
fairly easy to compile, assuming the machine language has an
exclusive-or instruction. Anyhow, I've noted that when I compile this
file the executable is roughly the same size as the source.
Specifically, the source is 729008 bytes and the executable compiled
from it is 897079 bytes.
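
          For concreteness, a made-up sketch of what such a generated file
might look like (the variable names here are invented; the real file
declares 2500 variables and runs to far more lines):

/* Invented sketch of the generated file, not the real thing. */
int main(void)
{
    unsigned long v0000 = 1, v0001 = 2, v0002 = 3;  /* ... up to v2499 */

    v0000 = v0001 ^ v0002;   /* exclusive or */
    v0001 = v0000;           /* plain assignment */
    v0002 ^= v0001;
    /* ... many thousands more lines of the same ... */

    return (int)v0002;       /* keep the values live */
}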


          I was thinking that there's a lower limit on the amount of time
it would take to compile my source into this executable: an optimally
fast compiler would still have to read in the source code and write
out the executable, so it can't get any faster than the time that
takes. That made me think that the time it takes to copy this file
would be an interesting baseline to compare against the time it takes
to compile it.
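
          To make that lower bound concrete, here is a minimal sketch of a
program that just reads the source and writes the same number of bytes
back out, and times it (the file names are placeholders); an optimally
fast compiler would still have to do at least this much I/O:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    FILE *in = fopen("big.c", "rb");      /* placeholder source name */
    FILE *out = fopen("big.copy", "wb");  /* stand-in for the executable */
    char buf[8192];
    size_t n;
    struct timeval t0, t1;

    if (in == NULL || out == NULL) {
        perror("fopen");
        return 1;
    }

    gettimeofday(&t0, NULL);
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);           /* write out as much as we read */
    gettimeofday(&t1, NULL);

    printf("read+write: %.3f seconds elapsed\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);

    fclose(in);
    fclose(out);
    return 0;
}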


          So what I did was make fifty successive copies (Unix "cp"s) of
this source file under fifty different destination file names, then
compile the file once (using a Unix "cc"), and compare how long each
took. When nobody else was doing anything on the computers I was
using for this measurement, the compile took anywhere from fifty to
ninety times as long as a single copy.
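
          In case it helps anyone reproduce this, the measurement amounts
to something like the following small driver, which shells out to "cp"
and "cc" and times each step (the file names and the compiler command
line here are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Wall-clock time in seconds. */
static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    char cmd[64];
    int i;
    double t, copies, compile;

    t = now();
    for (i = 0; i < 50; i++) {
        sprintf(cmd, "cp big.c copy%02d.c", i);  /* placeholder names */
        system(cmd);
    }
    copies = now() - t;

    t = now();
    system("cc -o big big.c");                   /* placeholder command */
    compile = now() - t;

    printf("average copy %.3f s, compile %.3f s, ratio %.1f:1\n",
           copies / 50, compile, compile / (copies / 50));
    return 0;
}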


          Does anybody have any idea why my local compiler would be taking
this much time? What might the compiler be doing with all those extra
nanoseconds? I'm not sure I can expect my local compiler to be
optimized for speed. Actually that's an interesting question in
itself. Do compilers exist that improve on this 50:1 to 90:1 ratio?


          In particular, I'm curious how large a role the symbol table
plays in the time it takes to compile. If we took the amount of time
spent accessing the symbol table (where the input is the ASCII
representation of a symbol and the output is the compilation
information about that symbol) and compared that to the total time
taken to compile, would the ratio be high, like 1:2 or 2:3, or
something like that?


          I'd also be interested in extending this question to compilers in
general, not just C compilers, which is the reason I'm posting it to
"comp.compilers". Any responses to this would be appreciated. You can
post to this newsgroup or mail me directly at "simonson@nortel.com".


                                                                ---Kevin Simonson


[If your program is one huge routine, I'm not surprised that it takes
a long time to compile. Compilers almost always have some
worse-than-linear-time parts that make very large routines very slow.
It's not likely to be the symbol table, though. If you hash your
symbol table, which everyone does, access time is O(1). -John]
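
          As an illustration of the hashed symbol table the moderator
mentions, here is a minimal sketch (the field names are invented, and
a real compiler records much more per symbol); a lookup hashes the
identifier's text and walks one short chain, so the expected cost per
access is O(1):

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 4096

struct symbol {
    char *name;            /* the identifier's text */
    int   info;            /* stand-in for whatever the compiler records */
    struct symbol *next;   /* chain of identifiers that hash alike */
};

static struct symbol *buckets[NBUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* O(1) expected: one hash plus a short chain walk. */
struct symbol *lookup(const char *name)
{
    struct symbol *sym;
    for (sym = buckets[hash(name)]; sym != NULL; sym = sym->next)
        if (strcmp(sym->name, name) == 0)
            return sym;
    return NULL;           /* not seen before */
}

struct symbol *insert(const char *name)
{
    unsigned h = hash(name);
    struct symbol *sym = malloc(sizeof *sym);
    sym->name = malloc(strlen(name) + 1);
    strcpy(sym->name, name);
    sym->info = 0;
    sym->next = buckets[h];
    buckets[h] = sym;
    return sym;
}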


