Modulo optimizations — firstname.lastname@example.org (Graham Marshall) (1999-10-04)
Re: Modulo optimizations — email@example.com (Joe Keane) (1999-10-11)
Re: Modulo optimizations — firstname.lastname@example.org (1999-10-13)
Re: Modulo optimizations — email@example.com (David Chase) (1999-10-13)
Re: Modulo optimizations — firstname.lastname@example.org (George Russell) (1999-10-14)
Re: Modulo optimizations — email@example.com (Robert Harley) (1999-10-14)
Re: Modulo optimizations — Wilco.Dijkstra@arm.com (Wilco Dijkstra) (1999-10-19)
[2 later articles]
From: "Graham Marshall" <firstname.lastname@example.org>
Date: 4 Oct 1999 12:10:06 -0400
I'd like to have some input on what a compiler optimizer is likely to do in
the following scenario.
I have a computation in my code which does a modulo divide:
x = y % z;
If I understand correctly, when z is a power of 2 the optimizer can
emit faster code (using shifts, masks, etc.) to take advantage of
this. However, what happens if the value of z is not known at
compile time, but only at run time? Will the optimizer:
a) only emit the general modulo code, since the value of z is not
known at compile time, or
b) emit code for both the general case and the faster case, evaluate z
at run time, and select the appropriate routine to execute?
My gut feeling (as a naive user) is that a modern compiler would do
b), and that this may be one of the reasons why faster executables are
often larger when full optimization is switched on.
I'd be grateful for any input on this - thanks in advance,
[These days arithmetic is fast, conditional jumps are slow, so it's hard
to imagine that anything other than a single divide instruction would be
faster for unknown z. Executables get big because of loop unrolling and
in-line expansion of routines. -John]