sincos CSE (was: quot/mod CSE...) (Henry Baker)
9 Feb 1996 12:07:57 -0500

          From comp.compilers

Related articles
sincos CSE (was: quot/mod CSE...) (1996-02-09)
Re: sincos CSE (was: quot/mod CSE...) (1996-02-09)
Re: sincos CSE (was: quot/mod CSE...) (1996-02-13)
Re: sincos CSE (was: quot/mod CSE...) (1996-02-14)
Re: sincos CSE (was: quot/mod CSE...) (1996-02-16)

From: (Henry Baker)
Newsgroups: comp.compilers
Date: 9 Feb 1996 12:07:57 -0500
Organization: nil organization
Keywords: optimize, arithmetic

Ok. I understand that both quot/mod and sin/cos can be done by some form
of common-subexpression analysis on expressions after the input expressions
have been rewritten with rules of the following sort:

sin(x) => sinpart(sincos(x))

cos(x) => cospart(sincos(x))

mod(m,n) => modpart(quotmod(m,n))

quot(m,n) => quotpart(quotmod(m,n))

(These optimizations are supposedly helpful in certain standardized
benchmarks -- at least the sincos one is supposedly useful in some of
them.)
(BTW, if a compiler is capable of _generally_ performing optimizations
of this sort, then it should _say so_! Not only does this make good
copy for the marketing literature, it also keeps the users from
wasting their time making 'optimizations' that the compiler is fully
capable of doing on its own.)

I can understand how the above optimizations work, but I now don't
understand how a sin(x) or cos(x) _by itself_ gets optimized. Perhaps
the tree gets decorated with 'reference count' information saying how
many nodes depend upon the sincos(x) node, so that if this refcount =
1, then the expression 'sinpart(sincos(x))' => 'sin(x)', as before ??

The problem is that -- unlike the quotmod case, where presumably there
is no substantial additional cost in computing the mod when you only
require the quotient (or vice versa) -- in the case of sincos(x), the
amount of additional work to do cos(x) when you only require sin(x)
may be substantial. Or equivalently, if 'quotmod' is a routine that
operates on arbitrary precision integers, the cost of computing _both_
the explicit quotient and the explicit remainder may be substantially
more than one or the other by itself.

Since the sincos(x) trick is apparently used in Fortran compilers, and
Fortran compilers that I have looked at usually call only sin(x) when
sin(x) is called, rather than sincos(x), can I assume that these
compilers do make the further optimization to get rid of the sincos
when it is not actually needed ??

I can also understand why C systems would be loath to perform this
optimization, since the calling of sincos(x) instead of sin(x) might
come as a shock to someone trying to debug a C program.


Next question. Suppose now that a compiler is capable of doing the
right thing for both cases -- i.e., when sin(x) is used by itself, or
when sin(x) and cos(x) are used together. What happens if the user
tries to 'optimize' the case sincos(x) himself? Has he actually made
things worse for himself by introducing a new temporary variable and
additional constraints, or is he no worse off than if he had simply
used sin(x) and cos(x) and let the compiler do the work ??

(I can conceive of the programmer actually making things worse for
himself if by adding additional variables and sequence points he
inhibits other optimizations.)

The issue I'm trying to get at is whether there exist 'pessimizing'
user-level optimizations, where a programmer can outsmart himself and
make things much worse. If there are, then it becomes even more
important to advertise specifically what kinds of things the compiler
already optimizes, so that the programmer is not tempted to second
guess the compiler.

Yes, we can find all these things out with exhaustive tests of code
generator behavior, but we then have to perform these tests all over
again when the next release of the compiler comes out. This code
generator testing is a _very_ tedious business.
