Re: Justifying Optimization (John Dallman)
21 Jan 2003 00:15:15 -0500

          From comp.compilers

Related articles
Justifying Optimization (MICHAEL DILLON) (2003-01-17)
Re: Justifying Optimization (Joachim Durchholz) (2003-01-20)
Re: Justifying Optimization (srikanth) (2003-01-21)
Re: Justifying Optimization (Christian Bau) (2003-01-21)
Re: Justifying Optimization (2003-01-21)
Re: Justifying Optimization (2003-01-21)
Re: Justifying Optimization (Sid TOUATI) (2003-01-25)
Re: Justifying Optimization (Conor O'Neill) (2003-01-25)
Re: Justifying Optimization (Jan C. Vorbrüggen) (2003-01-25)
Re: Justifying Optimization (Jan C. Vorbrüggen) (2003-01-25)
Re: Justifying Optimization (Jan C. Vorbrüggen) (2003-01-25)
Re: Justifying Optimization (Joachim Durchholz) (2003-01-26)
[14 later articles]

From: (John Dallman)
Newsgroups: comp.compilers
Date: 21 Jan 2003 00:15:15 -0500
Organization: Nextra UK
References: 03-01-088
Keywords: optimize
Posted-Date: 21 Jan 2003 00:15:15 EST

(MICHAEL DILLON) wrote:

> I've been developing for twenty years, and ever since I've been
> allowed to have an opinion I've insisted that code ready for final
> testing and deployment be optimized. I'm currently responsible for my
> program's development strategy, and was recently blindsided by
> resistance to this approach. Developers are stating that optimized
> code produces errors and makes debugging more difficult.

Optimisers sometimes introduce errors, via bugs in them, but they also
show up errors that haven't previously been noticed. If the final product
is to be optimised, then it's absolutely necessary to do the serious
testing on an optimised build. Trusting that an optimiser will never
expose any correctness issues is just plain foolish.

Initial development, and the debugging that comes with it, can be done on
less- or non-optimised builds, but once you've integrated and are testing,
you have to use something identical to the build that will be shipped.

Overall, I suspect your programmers are being lazy (in the non-virtuous
sense).

> While true that debugging is made more complicated when optimization
> is used, I'm not considering that enough justification to avoid
> optimization.

Dead right.

> The other argument, that optimization produces errors, is the one that
> is new to me. While I've not had any personal indication of this, I
> don't have any hard facts.

They're right, occasionally: optimisers do have bugs. But showing up the
developers' own errors is at least as common, and much more humiliating
for them.

> Is there any substantiated data that says optimized code is more prone
> to errors? Is there a generally accepted guideline in the community
> that says when you should/should-not optimize?

See above for both of those.

> Is there a general level of compiler technology so that I can say that
> I'd gain ~x% by optimizing?

It depends, a lot, on the architecture you're targeting, the style of the
software being written, and so on. That said, on any modern general-
purpose architecture I'd expect to double throughput with a decent
optimiser, and often do better. On one occasion, with Forte 6.2 on 32-bit
Solaris 7, I picked up 30% more throughput just by allowing Forte to use
UltraSPARC instructions.
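As a rough modern analogue (my assumption, not from the post), the GCC/Clang equivalent of letting Forte use UltraSPARC instructions is a pair of target-specific codegen flags, sketched here as a build-config fragment:

```shell
# Hypothetical build fragment: -O2 enables general optimisation;
# -march=native lets the compiler emit instructions specific to the
# build machine's CPU, analogous to enabling UltraSPARC instructions
# under Forte. Only safe when the binary runs on matching hardware.
CFLAGS="-O2 -march=native"
cc $CFLAGS -o app app.c
```

The caveat in the comment is the same trade-off as in the Forte case: the gain comes from committing to a particular processor family.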

Your address prompts an extra thought. If Lockheed Martin programmers cut
their teeth on embedded military systems, they may have experienced some
less-than-brilliant optimisers, possibly on architectures where processor
speeds were so low that an optimiser couldn't save much time by, say,
reducing the number of memory accesses, because memory was as fast as the
processor. This definitely isn't the case on the systems you mention.

John Dallman
