Re: deadcode optimization



From: marcov@toad.stack.nl (Marco van de Voort)
Newsgroups: comp.compilers
Date: 12 Apr 2001 02:41:30 -0400
Organization: Eindhoven University of Technology, The Netherlands
References: 01-03-012 01-03-022 01-03-034 01-03-060 01-03-075 01-03-091 01-03-099 01-04-020 01-04-048
Keywords: linker, optimize
Posted-Date: 12 Apr 2001 02:41:30 EDT

Norman Black wrote:
>> For deadcode removal, the linker is indeed the problem. One can solve this
>> by creating an object file per symbol. This is horribly slow.
>
> Maybe for some linkers. Certainly not our linker, or some of the old
> superfast DOS linkers like optlink and blink.


I mainly meant GNU LD. I know there are faster ones, but I don't know
whether that is generic speed or better support for (and optimization of)
deadcode removal (like you say below: archive type, etc.).


> Turbo Pascal was extremely fast, but they used a proprietary format (TPU)
> which may, or may not, have helped in this regard.


I think it helped, and it also helps to conserve memory.


> Linking archives is slower than straight objects. For "Unix" systems
> the Unix archive format certainly does not help as it is something of
> a joke with regards to the archive symbol table.


I think this slows things down and costs memory, but it isn't the main
factor; the difference is too big for that. I simply think that LD was only
designed to link in large chunks.


> In the Win32 world Microsoft solved this with an extended archive symbol
> table format, which we use. Our compiler and librarian program (like ar)
> can output these extended archives on COFF and ELF archives. Our linker
> can still deal with the absence of the extended archive symbol table, but
> that adds an amount of compute time to sort the standard Unix archive
> symbol table for fast searches. Just why it was not mandated that the
> archive symbol table be sorted by symbol name is beyond me. Yes, the
> linker can do this, but the archive is created once, and a linker uses the
> archive multiple times. Doing something once is always better.


How compatible are those extensions? (I ask because we are looking into
writing our own linker too.)


OTOH, we could store that table in the .ppu and keep the .a's compatible.
If we require two files per (non-null) unit anyway, we might as well make
use of it :-)
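To make the sorted-table point above concrete: once the archive symbol
table is sorted by name, the linker can locate the defining member by
binary search instead of a linear scan, and the sort is paid for once at
archive-creation time rather than on every link. A minimal C sketch,
assuming the table has already been parsed into memory (the struct layout
is illustrative, not the on-disk format):

#include <stdio.h>
#include <string.h>

/* One entry of an in-memory archive symbol table: a symbol name plus the
 * offset of the archive member that defines it. (Illustrative only.) */
struct symtab_entry {
    const char *name;
    long        member_offset;
};

/* Linear scan: what an unsorted Unix archive symbol table forces on the
 * linker -- O(n) per lookup. */
long lookup_linear(const struct symtab_entry *tab, int n, const char *sym)
{
    for (int i = 0; i < n; i++)
        if (strcmp(tab[i].name, sym) == 0)
            return tab[i].member_offset;
    return -1;
}

/* Binary search: possible once the table is sorted by name -- O(log n). */
long lookup_sorted(const struct symtab_entry *tab, int n, const char *sym)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(sym, tab[mid].name);
        if (cmp == 0)
            return tab[mid].member_offset;
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1;
}

int main(void)
{
    /* A table already sorted by name, as an extended symtab would be. */
    static const struct symtab_entry tab[] = {
        { "bar", 1024 }, { "foo", 2048 }, { "qux", 4096 },
    };
    printf("%ld\n", lookup_sorted(tab, 3, "foo"));  /* prints 2048 */
    return 0;
}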


>> The slowness can be reduced by a factor of two by eliminating the backend
>> assembler and directly emitting .a's with all objects
>>
>> If you want an example for this, check the Free Pascal compiler
>> (www.freepascal.org)
>
> Our compiler(s) go from source file input to object file output with
> nothing in between but data in memory.


Ours too, but only on Go32v2, Windows and the Linux/FreeBSD platforms.
(So not on OS/2, Solaris/i386 and the m68k platforms.) And you can still use
AS if desired, or output MASM or TASM source.


The binary writer solved most of the pain of the deadcode optimization
(generating tens of thousands of assembler files and then assembling and
archiving them was too much), but the entire smartlinking system is still
relatively slow.


(compared to my reference point, which is TopSpeed, because I used their M2
compiler before FPC)
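For reference, the archive format such a binary writer has to produce is
simple enough that skipping AS and AR is mostly bookkeeping: an "!<arch>"
magic string followed by 60-byte ASCII member headers. A hedged C sketch of
writing one member into a classic Unix ar archive (the member name and the
object bytes are made up; a real writer would also emit the symbol-table
member discussed above):

#include <stdio.h>
#include <string.h>

/* Write one member into an ar(1)-style archive: a 60-byte ASCII header
 * followed by the data, padded to an even length. */
static void write_member(FILE *f, const char *name,
                         const unsigned char *data, long size)
{
    char hdr[61];
    /* name(16) mtime(12) uid(6) gid(6) mode(8) size(10) magic(2) */
    snprintf(hdr, sizeof hdr, "%-16.16s%-12d%-6d%-6d%-8o%-10ld`\n",
             name, 0, 0, 0, 0644, size);
    fwrite(hdr, 1, 60, f);
    fwrite(data, 1, size, f);
    if (size & 1)               /* members are 2-byte aligned */
        fputc('\n', f);
}

int main(void)
{
    /* Hypothetical: the "object" bytes would come from the code generator. */
    const unsigned char obj[] = { 0x7f, 'E', 'L', 'F' };

    FILE *f = fopen("demo.a", "wb");
    if (!f)
        return 1;
    fwrite("!<arch>\n", 1, 8, f);                 /* global archive magic */
    write_member(f, "unit1.o", obj, sizeof obj);  /* one member per symbol */
    fclose(f);
    return 0;
}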

