sincos CSE (was: quot/mod CSE...) email@example.com (1996-02-09)
Re: sincos CSE (was: quot/mod CSE...) firstname.lastname@example.org (1996-02-09)
Re: sincos CSE (was: quot/mod CSE...) email@example.com (1996-02-13)
Re: sincos CSE (was: quot/mod CSE...) firstname.lastname@example.org (1996-02-14)
Re: sincos CSE (was: quot/mod CSE...) email@example.com (1996-02-16)
From: firstname.lastname@example.org (Henry Baker)
Date: 14 Feb 1996 21:37:13 -0500
email@example.com (Henry Baker) writes:
> I can understand how the above optimizations work, but I now don't
> understand how a sin(x) or cos(x) _by itself_ gets optimized. Perhaps
> the tree gets decorated with 'reference count' information saying how
> many nodes depend upon the sincos(x) node, so that if this refcount =
> 1, then the expression 'sinpart(sincos(x))' => 'sin(x)', as before ??
> But the optimizer is going to somehow mark expressions that it makes
> into CSEs, so that the code generator knows not to recompute it,
> right? So the code generator can conclude that unmarked expressions
> are not CSEs, so it doesn't need to compute both parts and can
> transform 'sinpart(sincos(x))' into just 'sin(x)'.
I guess I wasn't explicit enough about the rules. I was assuming that
1. The compiler does _not_ have access to the actual source code for
sincos(), sin(), cos().
2. The compiler knows that 'sinpart(sincos(x))' and 'sin(x)' are
'equivalent', in that they produce the same numerical answer.
3. The compiler has a crude performance model, in which it knows that
C(sin)+C(cos) > C(sincos) > C(sin), where C() means 'complexity of'.
4. We'll assume that 'sinpart()' is a trivial field extraction from a
record, or something else relatively simple, so that the compiler knows
very well how to handle it.
Clearly, if the compiler can actually 'see' the code for sincos and sin,
then it can estimate performance and/or do CSE on both expressions and
compute its own performance complexity model.
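Under rules 1-4 the decision can be made purely from the tree and the cost
model. Here is a minimal sketch of that decision, not any real compiler's
pass: the call-list representation and the COST numbers are invented for
illustration (only the ordering C(sin)+C(cos) > C(sincos) > C(sin) matters).

```python
from collections import Counter

# Hypothetical relative costs; only their ordering matters.
COST = {"sin": 10, "cos": 10, "sincos": 14}

def rewrite(calls):
    """calls: list of (fn, arg) pairs, e.g. [("sin", "x"), ("cos", "x")].
    Returns the call list after the sincos CSE decision."""
    uses = Counter(calls)
    out, covered = [], set()
    for fn, arg in calls:
        if (fn, arg) in covered:
            continue  # already folded into an earlier sincos
        other = "cos" if fn == "sin" else "sin"
        if fn in ("sin", "cos") and uses[(other, arg)] > 0 \
           and COST["sincos"] < COST["sin"] + COST["cos"]:
            # Both parts of sincos(arg) are needed: one combined call
            # is cheaper than separate sin and cos calls.
            out.append(("sincos", arg))
            covered.add((other, arg))
        else:
            # Only one part is needed: since C(sincos) > C(sin), the
            # expression sinpart(sincos(x)) degrades to plain sin(x).
            out.append((fn, arg))
    return out

print(rewrite([("sin", "x"), ("cos", "x")]))  # [('sincos', 'x')]
print(rewrite([("sin", "x")]))                # [('sin', 'x')]
print(rewrite([("sin", "x"), ("cos", "y")]))  # [('sin', 'x'), ('cos', 'y')]
```

Note that nothing here inspects the bodies of sin, cos, or sincos; the
rewrite relies only on the declared equivalence and the cost ordering,
exactly as rules 1-3 require.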