Re: Order of argument evaluation in C++, etc.



Newsgroups: comp.compilers
From: chase@centerline.com (David Chase)
Keywords: design, optimize, parallel
Organization: CenterLine Software
References: 95-07-068 95-08-071
Date: Fri, 11 Aug 1995 19:03:23 GMT

graham.matthews@pell.anu.edu.au writes:


> Time to add a note of "futureness" to the debate. All this
> argument about order of evaluation, its disadvantages,
> advantages, etc. is very uniprocessor-based. As soon as we start
> writing large-scale software for parallel machines, all these
> little C quirks will very quickly become impediments both to
> writing robust code and to optimisation. On a parallel processor,
> side effects in expressions are optimisation impediments, since
> they force an order other than the efficient dataflow order.
> Likewise, side effects in expressions can create
> non-deterministic (and therefore non-debuggable) code if you
> don't specify a precise order of evaluation (and hence again
> forgo the optimal dataflow order).
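For concreteness, the kind of expression at issue looks something
like the following sketch (my example, not Graham's). The two calls
may be evaluated in either order, so different compilers can
legitimately print different results:

    #include <stdio.h>

    static int next(int *p) { return (*p)++; }  /* side effect via a call */

    int main(void)
    {
        int i = 0;
        /* Unspecified which call runs first: prints "0 1" or "1 0". */
        printf("%d %d\n", next(&i), next(&i));
        return 0;
    }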


There are a few problems with your reasoning, as I see it.


First of all, I think it will be very hard for C++ programmers
to move to parallel execution. If nothing else, the "C++ gurus"
are apparently recommending things like user-written memory
allocators, and reference counting(*). As far as I can tell, none
of the popular libraries are being written with an eye towards
thread-safety, so it's likely that people will be forced to
build their code from scratch. With or without a specified
order of evaluation, code naively ported from uniprocessors will
be buggy from the start, and a nightmare to debug.


Furthermore, it isn't clear to me that the standard actually
allows true parallel execution. Consider structure assignment:
is it legal for a program to observe a structure in a
half-assigned state if the programmer didn't explicitly request
parallel execution? There's a big difference between an undefined
order of evaluation and totally undefined results.
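To make the structure-assignment hazard concrete, here is a small
sketch of mine (it assumes, as is typical, that the copy compiles
to several separate stores, and it uses POSIX threads purely for
illustration). A writer repeatedly assigns a two-field struct whose
fields always agree; an unsynchronized reader can still catch the
halves disagreeing mid-assignment:

    #include <pthread.h>
    #include <stdio.h>

    struct pair { int x, y; };

    static struct pair shared = { 0, 0 };

    static void *writer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 10000000; i++) {
            struct pair p = { i, i };
            shared = p;              /* typically two separate stores */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        long torn = 0;
        pthread_create(&t, NULL, writer, NULL);
        for (int i = 0; i < 10000000; i++) {
            struct pair p = shared;  /* unsynchronized read */
            if (p.x != p.y)          /* halves disagree: half-assigned */
                torn++;
        }
        pthread_join(t, NULL);
        printf("half-assigned observations: %ld\n", torn);
        return 0;
    }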


Or, in short, I just don't expect to see fine-grained parallelism
used in very many C++ programs, so I don't think we lose any
future performance by inhibiting it.


(*) Just in case it isn't obvious, the problem with reference
counting on a multiprocessor is that the updates to the reference
counts are not atomic, at least if they are compiled from C. "rc++"
translates to "t = rc; t = t+1; rc = t;" and on a multiprocessor two
simultaneous updates to the same reference count may be interleaved
in a variety of unfortunate ways. You can add locking, of course,
but this only has the effect of making the slowest-known garbage
detection technique even slower.
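Here is a sketch of that race (mine, again using POSIX threads for
illustration): two threads each increment the count a million
times, but because each increment is a separate load/add/store,
updates are lost and the final value usually comes up short:

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static long rc = 0;              /* shared reference count, no lock */

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++)
            rc++;                    /* t = rc; t = t + 1; rc = t; */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected %d, got %ld\n", 2 * N, rc);  /* usually < 2*N */
        return 0;
    }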


Second, (once again) where are your measurements? How much potential
parallelism are we giving up here, relative to what we've got or
will get? How much current unreliability should we trade off
against some mythical future parallel performance?


Third -- anyone who really cares about performance (now or in the
future) uses Fortran :-).


Why am I being such a weenie for measurements? Consider that I've
been on the other end of conversations that go like this:


    "We'd be more productive if we used a garbage collector."
    "How much more productive?"
    "I'm not sure. Rovner said 40%, but that's just an estimate."


Productivity is difficult to measure, but performance is not, and
if people insist on cutting corners "for performance", I feel
justified in demanding that they demonstrate an actual performance
increase.


speaking for myself,


David Chase
--

