Newsgroups: comp.compilers
From: chase@centerline.com (David Chase)
Keywords: C++, parallel, optimize
Organization: CenterLine Software
References: 95-07-068 95-07-069
Date: Wed, 12 Jul 1995 18:29:07 GMT
Having worked with C++ since 1990, and having been surprised multiple
times by that twisted language, I'd cheerfully entertain a "results
as if evaluated in this well-defined order" rule, along with a
well-defined set of rules for temporary creation (or not).
Why?
1. "as if" means that the compiler gets to reorder evalation where
it doesn't affect the I/O behavior of the program, so we keep
the safe optimizations.
2. in the compilers that I've worked on, heard about, read about,
or talked about, the front-end picked an evaluation order, and
the optimizer followed rule 1 above. In particular, any of the
side effects being discussed were (at the optimizer's level)
blatantly obvious, and complete cause for a hands-off approach
to the evaluation order supplied by the front-end.
3. I don't think the flexibility provided by the C/C++ "undefined
order of evaluation" rules amounts to any speed improvement anyone
cares about, and (see above) nobody takes advantage of it anyway.
4. Programs would then be more portable. They'd be more reliable,
too. In the past, I've been nervous about switching compilers
late in product development because of possible *bugs* in the
compiler, but experiments have shown that once the new compiler
has passed severe testing, the risk is pretty low. With C++,
one must also beware of differences in behavior due to
standards-conforming (hence, unflagged by any test suite)
changes in temporary optimization (a sketch follows this list).
5. I have actually encountered a single exception to #2: there is
one compiler which I have used that (as near as I can tell)
did exploit this freedom. Said compiler was a buggy piece of junk
that generated substantially worse code than the others I had
access to, and took longer to do it. This compiler will remain
nameless, both to protect the guilty and to protect me from being
sued for libel.
(I originally used a more colorful word than "junk", but I
wouldn't want our moderator to have to worry about community
standards.)
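To illustrate the temporary-optimization point in #4, here is a
minimal sketch (my own example, not drawn from any particular
compiler) in which two standards-conforming compilers may legitimately
print different output, because the language leaves it up to the
compiler whether the copy of the return value is elided:

    #include <iostream>

    struct Noisy {
        Noisy()             { std::cout << "construct\n"; }
        Noisy(const Noisy&) { std::cout << "copy\n"; }  // observable side effect
        ~Noisy()            { std::cout << "destroy\n"; }
    };

    Noisy make() {
        Noisy n;
        return n;  // copying the named local may be elided (NRVO), or not
    }

    int main() {
        // "copy" may print zero, one, or two times, depending on
        // which copies the compiler elides (modern C++ revisions
        // mandate some, but not all, of these elisions).
        Noisy x = make();
        return 0;
    }

Every one of those behaviors passes every conformance test, so only a
change of compiler reveals the difference.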
Perhaps there are compilers that do exploit the freedom to reorder
evaluation of side-effect-containing expressions (in addition to the
"pure" expressions), and that do generate good code, and that aren't
buggy pieces of junk. I've never heard of one, nor have I ever heard of
any study showing any speed improvement obtained by such reordering (in
practice, Sethi-Ullman numbering would let you get by with slightly
fewer registers when evaluating side-effecting expressions, but in the
context I've described, I don't think the gain would be noticeable).
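For reference, here is a minimal sketch of Sethi-Ullman numbering (the
names and the bare binary-tree representation are my own
simplifications):

    #include <algorithm>
    #include <cstddef>

    // A node in a binary expression tree: leaves are operands,
    // and interior nodes (operators) have both children present.
    struct Expr {
        const Expr* left = nullptr;
        const Expr* right = nullptr;
    };

    // Minimum number of registers needed to evaluate the tree without
    // spilling, assuming we are free to evaluate either child first.
    std::size_t sethiUllman(const Expr* e) {
        if (e->left == nullptr && e->right == nullptr)
            return 1;  // a leaf occupies one register
        std::size_t l = sethiUllman(e->left);
        std::size_t r = sethiUllman(e->right);
        // Evaluate the hungrier child first; if both need the same
        // number, one extra register holds the first result while
        // the second child is computed.
        return (l == r) ? l + 1 : std::max(l, r);
    }

Fixing the evaluation order forfeits the choice of which child to
evaluate first, occasionally costing one register; that is the entire
gain at stake.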
So, has anyone EVER done any measurements to justify these
weakly-defined language semantics? What's the typical improvement in
speed? Again, I'm only talking about cases where the expressions have
potential side-effects, as in "e() + f() + g() + h()", and not "w + x +
y + z". Reordering in the second case should still be legal.
Now, in Fortran, relaxing the order of evaluation of floating point
expressions enables substantial speedups through loop-reorganizing
optimizations. There, the bang is substantial (integer-factor
speedups) and the change to program semantics (reassociation of
floating-point arithmetic) is relatively well understood. The
situation in C++ is exactly the opposite: the changes in behavior are
NOT well understood (they are essentially user-configurable, depending
on what constructors and destructors do), the speedup is negligible,
and the rearrangement being discussed is at the expression (and
parameter-list) level, not at the loop level.
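The floating-point half of that claim is easy to demonstrate; a
minimal sketch (my own example) of reassociation changing a result:

    #include <cstdio>

    int main() {
        double a = 1.0e16, b = -1.0e16, c = 1.0;
        // Floating-point addition is not associative: the 1.0
        // survives one grouping and is absorbed by rounding in
        // the other.
        std::printf("%g\n", (a + b) + c);  // prints 1
        std::printf("%g\n", a + (b + c));  // prints 0
        return 0;
    }

The Fortran programmer accepts this well-characterized perturbation in
exchange for integer-factor speedups; the C++ reordering buys nothing
comparable.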
You would think, given recent moves to introduce somewhat more order
and process into software production (e.g., ISO 9000), that people
would care more about the specification and quality of the tools they
use, but perhaps people haven't made that connection yet.
speaking for myself,
David Chase