|optimizing firstname.lastname@example.org (glen herrmannsfeldt) (2011-08-12)|
|Re: optimizing email@example.com (2011-08-13)|
|Re: optimizing firstname.lastname@example.org (Hans Aberg) (2011-08-13)|
|Re: optimizing email@example.com (Walter Banks) (2011-08-14)|
|Re: optimizing firstname.lastname@example.org (2011-08-15)|
|Re: optimizing email@example.com (Volker Birk) (2011-08-15)|
|Re: optimizing firstname.lastname@example.org (Hans Aberg) (2011-08-15)|
|Re: optimizing email@example.com (glen herrmannsfeldt) (2011-08-15)|
|From:||Walter Banks <firstname.lastname@example.org>|
|Date:||Sun, 14 Aug 2011 06:29:32 -0400|
|Organization:||Aioe.org NNTP Server|
|Posted-Date:||14 Aug 2011 20:31:00 EDT|
glen herrmannsfeldt wrote:
> It seems to me, though, that in the case of RISC, and even more in the
> case of VLIW processors like Itanium, delaying the final optimization
> and code generation pass would be useful. ...
> [This is pretty standard in the toolchains for embedded processors. I
> gather that the ARM compilers generate intermediate code, and all the
> optimization and code generation happens in the linker. -John]
To add to John's comment: we have been writing and selling compilers
for embedded systems for a long time, and since the early 90s we have
been doing our code generation at link time.
Embedded systems have unique characteristics that make this
attractive: the application code is almost never self-hosted, small
fast code is highly desired, and they are compile-once, run-often
systems.
Link-time code generation offers many optimization possibilities. The
biggest change is the shift in mindset to whole-application
optimization.
There are some downsides. Reused code and libraries can behave
differently from one application to another, because each link may
generate a different code sequence for the same source module.
Byte Craft Limited