Re: Justifying Optimization


From: cgweav@aol.com (Clayton Weaver)
Newsgroups: comp.compilers
Date: 11 Feb 2003 01:59:41 -0500
Organization: AOL http://www.aol.com
References: 03-02-023
Keywords: practice
Posted-Date: 11 Feb 2003 01:59:40 EST

My point about the slowness of a software upgrade being negative
marketing was not merely that "slow code is bad marketing".


If you have two programs that do the same job and are both stable,
then the faster one is a better tool. If one is more stable than the
other, it doesn't matter which one is faster; the less stable one is
worth nothing.


But if you don't know whether the slower one is stable or not, having
it be slower is not reassuring. (If they couldn't optimize it, why
not? Either their code is weak or their compiler is weak. Or their
algorithms are just slower.)


"New features" just don't enter into the decision unless they come as
an add-on to more stable code with less bugs. The software already had
the minimum set of required features or it would not have lasted the
first week, and fixing any annoying bugs in an otherwise stable
version is more interesting to the user than whether a new version can
do new things.


The compiler should be able to optimize to bit-identical results
(modulo the fp fuzz intrinsic to the hardware). If the code is
correct, a correct compiler should have no problem optimizing it to do
exactly what the unoptimized code does, only faster. The only time you
*do not* optimize (imho) is when the code clearly has simple
algorithmic errors (bugs in the most prosaic sense) and you want to
step through it in debug mode to see where they are, without the
optimizer blurring the picture of what the code is doing.
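
To put the fp fuzz caveat in concrete terms, here is a minimal C
sketch (the file name and flags are mine, purely for illustration):
integer arithmetic comes out bit-identical however the compiler
reorders it, while a floating-point sum can drift in its last bits if
the optimizer is allowed to reassociate the additions (e.g. gcc's
-ffast-math; a strictly conforming -O2 build is not supposed to do
that).

/* fpfuzz.c -- hypothetical illustration of "fp fuzz".
 * Build it twice, e.g.:
 *   cc -O0 -o fpfuzz_plain fpfuzz.c
 *   cc -O2 -ffast-math -o fpfuzz_fast fpfuzz.c
 * The integer sum must match bit for bit across builds; the float
 * sum may not, because its value depends on evaluation order.
 */
#include <stdio.h>

int main(void)
{
    double vals[] = { 1e16, 1.0, -1e16, 1.0 };
    double fsum = 0.0;
    long long isum = 0;
    int i;

    for (i = 0; i < 4; i++) {
        fsum += vals[i];              /* order-sensitive: left to right
                                         gives 1.0, one reassociated
                                         order gives 2.0 */
        isum += (long long)vals[i];   /* exact in any order */
    }

    /* %a prints the exact bit pattern, so any drift is visible */
    printf("float sum = %a\n", fsum);
    printf("int   sum = %lld\n", isum);
    return 0;
}

The -O0 -g build is the one to step through when hunting the prosaic
bugs mentioned above, but it should still compute the same answers as
the shipping -O2 build.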


Regression testing optimized code is a step up the ladder from this:
it tests a particular compiler on the code that you actually intend to
ship, and consequently offers a higher average level of reliability.
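
As a minimal sketch of what such a regression test can look like (the
routine, data, and golden value below are made up for illustration),
the same self-checking source gets built once without optimization and
once with the flags you actually intend to ship, and both binaries
have to report zero failures:

/* regress.c -- hypothetical self-checking regression test.
 * Build and run with both the debug and the shipping flags, e.g.:
 *   cc -O0 -g -o regress_O0 regress.c && ./regress_O0
 *   cc -O2    -o regress_O2 regress.c && ./regress_O2
 * A divergence between the two runs, or from the expected value,
 * points at either a code bug or a compiler bug.
 */
#include <stdio.h>

/* Stand-in for a routine from the code you intend to ship. */
static long checksum(const int *data, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum = sum * 31 + data[i];
    return sum;
}

int main(void)
{
    int data[] = { 3, 1, 4, 1 };
    long expected = 90459;     /* golden value from a trusted run */
    long got = checksum(data, 4);
    int failures = 0;

    if (got != expected) {
        printf("FAIL: checksum() = %ld, expected %ld\n", got, expected);
        failures++;
    }

    printf("%d failure(s)\n", failures);
    return failures ? 1 : 0;
}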


If there aren't any industrial-strength, mature compilers for a
particular target platform, then you don't really have one of the
prerequisites for deciding this question in the average case.


(Just because I dislike upgrades that get slower with every version
doesn't mean that I oppose Lisp, Eiffel, Ada, etc., where any apparent
slowness compared to C or Fortran is the result of the compiled code
doing more work at runtime, work that precludes certain types of
errors that are the bane of less semantically strict language
definitions. I just don't see any good reason why C and C++ stuff on a
PIII with acres of RAM should be as slow as Common Lisp running on a
486 with 16 MB of RAM and a couple hundred MB of swap.)
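
For a rough idea of what that extra runtime work looks like, here is a
C sketch (the helper is hypothetical): the bounds check has to be
written by hand in C, whereas an Ada or Common Lisp compiler, at
default safety settings, emits an equivalent test on the array
accesses themselves, which is where some of that honest slowdown comes
from.

/* bounds.c -- hypothetical sketch of the extra work a semantically
 * stricter language performs at runtime.  In C the check must be
 * written (and remembered) by hand; elsewhere the compiled code does
 * the equivalent automatically, trading some speed for ruling out a
 * whole class of out-of-bounds errors.
 */
#include <stdio.h>
#include <stdlib.h>

static int checked_get(const int *a, size_t len, size_t i)
{
    if (i >= len) {                      /* the "more work at runtime" */
        fprintf(stderr, "index %zu out of range for length %zu\n",
                i, len);
        exit(EXIT_FAILURE);
    }
    return a[i];
}

int main(void)
{
    int a[4] = { 10, 20, 30, 40 };

    printf("%d\n", checked_get(a, 4, 2));   /* fine: prints 30 */
    printf("%d\n", checked_get(a, 4, 7));   /* caught here; a bare a[7]
                                               would silently read past
                                               the end of the array */
    return 0;
}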


Regards,


Clayton Weaver
<mailto: cgweav@aol.com>

