Newsgroups: comp.compilers
From: graham.matthews@pell.anu.edu.au
Keywords: design, optimize, parallel
Organization: Australian National University
References: 95-07-068 95-08-093
Date: Wed, 16 Aug 1995 00:18:26 GMT
graham.matthews@pell.anu.edu.au writes:
>> Time to add a note of "futureness" to the debate. All this argument
>> about order of evaluation, the disadvantages, advantage etc is all
>> very uniprocessor based.
chase@centerline.com (David Chase) writes:
>There's a few problems with your reasoning, as I see it.
>First of all, I think it will be very hard for C++ programmers
>to move to parallel execution.
Apologies if I missed the boat, but I was not commenting on C++
directly, rather on the idea of side-effecting expressions and the
like. My comments do apply to C++ though (see below).
>Second, (once again) where's your measurements? How much potential
>parallelism are we giving up here, relative to what we've got or
>will get? How much current unreliability should we trade off
>against some mythical future parallel performance?
You are potentially giving up a huge amount of parallelism. Take the
call,
x = f(g(...), h(...))
Since functions are expressions and can have side effects, g() and h()
cannot be executed in parallel (unless you can statically determine
which source code g and h refer to and analyse it). If you want to
make life harder, do something similar to the above with virtual
methods. A small sketch below illustrates the problem.
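
To make the point concrete, here is a minimal C++ sketch. The names
f, g, h and the shared counter are made up for illustration; the point
is only that two argument expressions which both touch shared state
cannot be farmed out to separate processors without changing the
program's meaning.

    #include <iostream>

    int counter = 0;   // shared state touched by both callees

    int g(int x) { return counter += x; }   // side effect on counter
    int h(int x) { return counter *= x; }   // side effect on counter

    int f(int a, int b) { return a + b; }

    int main() {
        // Evaluating g(2) then h(3) gives x = 2 + 6 = 8; evaluating
        // h(3) then g(2) gives x = 0 + 2 = 2.  A compiler that cannot
        // prove g and h are side-effect free must pick one serial
        // order and cannot run the two calls in parallel.
        int x = f(g(2), h(3));
        std::cout << x << '\n';
        return 0;
    }

The same reasoning applies when g and h are virtual methods, except
that then the compiler cannot even see which bodies it would have to
analyse without whole-program information.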
graham
--