From: Terje.Mathisen@hda.hydro.com (Terje Mathisen)
Organization: Hydro Data, Norsk Hydro (Norway)
Date: Sun, 2 Apr 1995 09:47:13 GMT
email@example.com (Mr Tomas Hulek) writes:
[Can one optimize floating division by powers of two into something like a
>I would imagine that division by 2.0 could be done very efficiently, just like
>division by 10 in our decimal system. But is it really so?
The operative word here is _could_: it could be done efficiently, but in
reality these kinds of divisions occur so rarely that it would cost more
to detect them (in hardware) than the average speedup would be worth.
A classical example from the Intel architecture is the FSCALE operation,
which is documented as "a fast way to multiply or divide by a power of two",
implemented as a direct manipulation of the exponent.
The problem is that although Intel now has decent FMUL speed (1 to 3/4
clocks on a Pentium), FSCALE is still microcoded and takes 3 to 10 times
longer!
I'd bet this is the way most (all?) architectures are moving, so you would
be better off trying to stay with a simple multiplication as much as
possible.
-Terje Mathisen (include std disclaimer) <Terje.Mathisen@hda.hydro.com>