From: email@example.com (Craig Burley)
Keywords: C, Fortran, optimize, design
Organization: Free Software Foundation, 545 Tech Square, Cambridge, MA 02139
Date: Wed, 23 Aug 1995 11:52:35 GMT
firstname.lastname@example.org (David Chase) writes:

> [...] of people producing applications for which the most important
> things are reliability and time-to-market -- they ship debuggable
> binaries, giving up a factor of 2 in performance (at least). You'd
> think that this sort of thing was REALLY IMPORTANT, much more than
> some apparently insignificant, sometimes-used, never-measured option
> to [...]
Having watched this discussion with great interest, and finding
it not easy to agree wholeheartedly with one view or another for
any real length of time, I think the above pretty much convinces me.
Let's face it, C is not for generating high-performance numerical
code from algorithms expressed with any reasonable degree of generalized
abstraction. It's for generating moderately optimized code, typically
at low levels, from low-level implementations of algorithms.
Most applications written in C are probably more like the examples David
Chase suggests above -- shrink-wrapped apps. They're likely to run
some relatively slow UI, do some moderate number-crunching, and mostly
talk to the operating system.
The people who develop high-performance code in C have a general
understanding of what their job is. It goes beyond just picking the
fastest algorithm -- it requires not just tweaking, but sometimes
massively massaging the code they originally chose to express it,
so as to reorder operations, repartition procedures, and so on
to convince the compiler they're using to generate the fastest code.
It probably requires using assembler.
This is usually acceptable because people who write serious apps in
C tend to have a pretty decent understanding of how compilers,
especially C compilers, work internally.
For the audience that has an ongoing, significant need to generate
high-performance numerical apps that might well run on only one
(very expensive) machine a handful of times, Fortran is the classic
solution. With Fortran you give up the easily expressed low-level
source control of scheduling (though you have that expression
available using "clunkier" constructs, in most cases), so you
can typically express your well-chosen algorithm directly in the
Fortran code and reasonably expect high-performance compilers to
generate high-performance executable(s) on the appropriate machine(s).
This is appropriate for Fortran, because that audience, while incredibly
skilled at picking algorithms and quite skilled at expressing them
directly in Fortran, doesn't seem to have the same level of general
understanding of compiler internals -- nor should they, since, after
all, compilers are supposed to relieve the user of having to understand
them (when the language design cooperates with this goal, anyway) so
the user can spend time and energy understanding their problem domain.
For example, C has wonderfully expressive operators like &&, &, ||, and |.
Fortran doesn't have them, but can express each of them using somewhat
clunkier syntax. On the other hand, C does not have Fortran's .AND. and
.OR., which is all Fortran offers in terms of "convenient" in-line
operators of that ilk. You simply cannot express these operators in C.
Having raised this issue before, when some C programmers "invaded"
comp.lang.fortran and claimed Fortran would be a better language
if it just redefined .AND. to be C's &&, .OR. to be C's ||, and so on,
I made quite a discovery in the process of watching the responses.
This discovery: just as Fortran programmers tend to not be compiler
experts, C programmers tend to have little expertise in issues such
as language design and expressiveness, or even the goals of
computing in general.
In this discussion on c.l.f, it was amazing how difficult it was for
some people (the C crowd) to understand that there was such a thing as
a more general, and _thereby_ more useful, definition of "and" than
the C && and &. I had to post human examples, and even then some
people seemed to have trouble with it.
To a typical C programmer, apparently, if you say "that's true if
there's a body buried in that grave and it's Tuesday", you are either
saying "to determine if that's true, _first_ determine if there's
a body buried in that grave; _then_ if that's false, the result is
false, else determine if it's Tuesday, and use the result of that
determination", or saying "determine if there's a body buried in
that grave, determine if it's Tuesday, then AND the results together".
(These are the && and & operators, respectively.)
To a Fortran programmer, you aren't saying much more than what most
non-C-minded human beings would hear in the statement. Most of
them would realize it was probably most efficient to first test
the right-hand operand ("if it's Tuesday"), because if that
condition is false, they've just saved themselves the trouble of
digging up the grave. And in fact most of them would realize they
could send two people to do the job, if both were going to take
a while, wait for at least one to come back, and act accordingly.
To a C programmer, the bug is in the person who made the original
statement. They should have ordered it properly, by reversing
the order of the operands. Further, they're unconvinced that there
are any "real" examples of the person making the expression _not_
having a good idea of which operand takes more resources to
evaluate and thus not being able to order them properly. To
express parallelism, the person who made the statement should
have started with a "fork", etc. etc.
I've perhaps been overstating the case above, but not overstating
the clear level of confusion (or C-narrow-mindedness) on the part of
some of the people participating in the debate. Some of them
could not conceive that Fortran's .AND.
was _superior_ to C's by its very virtue of being more general
(less restrictive in terms of the implementation it mandated).
This raises an important point of this post: while the overall
goals of computing include achieving more efficient use of
resources for appropriate tasks, and that includes allowing
humans (who represent significant resources) to spend less
time focusing on implementation and thus more time on design,
languages (especially older ones like C and Fortran) represent
"plateaus" where various groups of us (with quite a bit of
overlap) can stand, or even barely hang from via our fingertips,
and remain there for some time, getting real work done even
though some aspects of the industry have moved on. These
plateaus represent mind-sets, years and years of experience,
expectations (including diminished ones ;-), and so on.
In particular, C programmers have generally always expected to control
things like order of execution in various places, and have a real
challenge understanding things like potential parallelism,
ESPECIALLY in low-level constructs such as the function call with
its arguments.
Given this, despite my general belief that mandating order of
evaluation is WRONG, I think it would be wisest for C to adopt
a strict left-to-right ordering in a future standard (using
the "as if" rule, of course, to allow really clever compilers
to do whatever they want when equivalent), with this ordering
involving sequence points, etc.
After all, with C you can assume the function call will actually
be made, and in that order, and that therefore the arguments will
be evaluated, so it stands to reason they'll be evaluated in
some defined order.
If you want undefined order evaluation, you probably should consider
Fortran or some other more-general language, where things like
X = FUNC(Y()) + FUNC(Z())
can be implemented as
X = -2.
if the compiler can discover that, in that context, FUNC() will always
return -1., _irrespective_ of _any_ side effects that FUNC(), Y(),
and Z() might perform. (In other words, don't depend on any side
effects when invoking _functions_ in Fortran. That's what subroutines
are for. After all, to a mathematician, what is a function evaluation
other than a way to easily express the calculation of some _result_,
with the mechanics of the actual calculation completely unimportant
to the equation expressing the function evaluation itself?)
Language "paradigms", or in this case "paradigmlets" I suppose, offer
us the ability to "think" in a general way when it comes to
spec'ing, designing, and implementing code. A language does its job
well when, during its evolution, it requires its audience to relearn
the fewest aspects of its general way of thinking.
Although C may have "officially" always left argument evaluation
undefined in various ways, practically speaking few C programmers
seem to understand that. And as more C compilers might try to take
advantage of it, more existing code is likely to start breaking in
ways it didn't before (despite being buggy all along). Fortran
programmers often find such buggy code (e.g. code that does
"FOO = RANDOM(SEED)" where RANDOM() is some user function that
does the obvious thing, and where a compiler might eliminate the
necessary side-effects), but they're more used to handling that
form of bugginess than C programmers. On the other hand, Fortran 90
adds things that I think the C and Pascal communities are much better
prepared to understand than the Fortran audience, and which I suspect
could have been better engineered for the Fortran audience, such
as pointers and (less suspect IMO) recursion (and worst of all,
though it affects _compiler_ speed mostly, the mandate that programmers
cannot specify nested procedure implementations before the code
in their outer procedures, which counters all Fortran/C experience
in favor of the Modula/Pascal crowd).
Might as well bite the bullet and make C the best language for doing
C programming that the universe will ever C. As has been said
many times before, "If you want Fortran, you know where to find it".
James Craig Burley, Software Craftsperson email@example.com