
Related articles:
  Frequency of integer division in source code? Peter-Lawrence.Montgomery@cwi.nl (1994-02-07)
  Re: Frequency of integer division in source code? hbaker@netcom.com (1994-02-08)
  Re: Frequency of integer division in source code? meissner@osf.org (1994-02-09)
  Re: Frequency of integer division in source code? tfj@apusapus.demon.co.uk (1994-02-10)
  Re: Frequency of integer division in source code? glew@ichips.intel.com (1994-02-11)
  Re: Frequency of integer division in source code? bazyar@netcom.com (1994-02-10)
  Re: Frequency of integer division in source code? cliffc@dawn.cs.rice.edu (1994-02-14)
  Re: Frequency of integer division in source code? chase@Think.COM (1994-02-15)
  [1 later article]

Newsgroups: comp.compilers
From: hbaker@netcom.com (Henry G. Baker)
Keywords: arithmetic
Organization: Compilers Central
References: 94-02-040
Date: Tue, 8 Feb 1994 05:45:43 GMT

> Torbjorn Granlund and I will present
> ``Division by Invariant Integers using Multiplication'' at PLDI '94.
> We give rules on how to compile expressions such as x/1994,
> where x may be unsigned or signed and (in the signed case)
> rounding may be towards zero or towards -infinity.

There's also a good CACM paper from sometime in the 1970s on this subject, and there are hundreds of IEEE papers about hardware that replaces division with multiplication.
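
To make the quoted technique concrete, here is a sketch of the kind of sequence such rules can produce for the unsigned case. The constant is one I worked out by hand for this note, not something taken from their paper: for 32-bit x and d = 1994 we can take m = floor(2^43/d) + 1 = 4411280353; m needs 33 bits, so the extra bit gets folded back in with an add and two shifts.

    #include <assert.h>
    #include <stdio.h>
    #include <stdint.h>

    /* x/1994 for any 32-bit unsigned x, with no divide instruction.
       m = floor(2^43/1994) + 1 = 4411280353 = 2^32 + 116313057; since
       m does not fit in 32 bits, fold the extra bit back in with an
       add and two shifts. */
    static uint32_t div1994(uint32_t x)
    {
        uint32_t t = (uint32_t)(((uint64_t)x * 116313057u) >> 32); /* hi(x*(m - 2^32)) */
        return (t + ((x - t) >> 1)) >> 10;   /* = floor(x*m / 2^43) = x/1994 */
    }

    int main(void)
    {
        uint32_t x;
        for (x = 0; x < 4000000u; x++)       /* spot-check the small values */
            assert(div1994(x) == x / 1994u);
        assert(div1994(4294967295u) == 4294967295u / 1994u);
        printf("ok\n");
        return 0;
    }

The same multiply-high-plus-fixup pattern works for any invariant divisor once m and the shift count are chosen.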

You might check out my paper "Computing A*B (mod N) Efficiently in ANSI C", ACM Sigplan Notices 27,1 (Jan. 1992), 95-98. By replacing division with multiplication, I was able to compute (A*B)%N faster on the 80860 than its C compiler could compute A%N.
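
Here is a minimal sketch of the flavor of that trick (not the code from the paper; I'm using a 64-bit integer type for brevity and assuming A, B < 2^26 so that A*B is exact both as an integer and as a double):

    #include <stdio.h>
    #include <stdint.h>

    /* (a*b) % n with no divide in the loop: the quotient is estimated
       with a precomputed 1.0/n and then corrected by at most one step
       in each direction.  Assumes a, b < 2^26 so a*b is exact both as
       an int64_t and as a double. */
    static int64_t mulmod(int64_t a, int64_t b, int64_t n, double inv_n)
    {
        int64_t q = (int64_t)((double)a * (double)b * inv_n); /* ~ floor(a*b/n) */
        int64_t r = a * b - q * n;          /* off by at most n either way */
        if (r < 0)  r += n;                 /* estimate was one too high */
        if (r >= n) r -= n;                 /* estimate was one too low */
        return r;
    }

    int main(void)
    {
        int64_t n = 1000003;
        double inv_n = 1.0 / (double)n;     /* one real divide, reused forever */
        printf("%lld\n", (long long)mulmod(123456, 654321, n, inv_n));
        return 0;
    }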

Also, I remember a good paper by someone at HP re the early Precision(?) architecture, which didn't (originally?) have hardware multiply or divide.

> One reviewer asks ``How common is division''?
> Does anyone have recent statistics (static or dynamic)
> about the frequency of integer division (quotient and/or remainder)?
> How often is the divisor a constant? A power of 2?

Someone from IBM once told me that the 360 model 30 spent a substantial fraction of all of its cycles in the division microcode. Apparently, someone assumed that an infinitesimal dynamic frequency meant an infinitesimal fraction of computing cycles. They forgot to multiply the probability by the cost (2 MILLIseconds) to get the expected cost: if only one instruction in a thousand is a divide, but a divide costs as much as a thousand ordinary instructions, then divides eat roughly half the cycles. BTW, I believe that most of these cycles were spent in converting binary to decimal.

(This machine was microprogrammed using punch cards! The cards had special little silvered areas which became capacitors when the card was placed into the microcode "ROM". Punching the card removed the capacitor! See the 1960s CACM article on the Euler programming language, which was microprogrammed on this machine.)

I would guess that modern division is utilized primarily for base conversion. Some symbolic algebra systems (Macsyma, Reduce, Maple, Mathematica, ...) do a lot of computing in rings and fields of integers modulo (usually) primes. In those programs, the number of divisions can be quite large. Modern encryption algorithms--e.g., RSA--utilize modulo computations extensively. I haven't looked at the code, but division may well be a bottleneck in the speed of encryption/decryption for these codes. If you make division too fast, the NSA will call your compiler a "weapon of war" and slap an export license on it.

If you do a lot of multiple-precision arithmetic, you may be better off doing lots of mod p calculations (with no overflow) instead, and then putting the result together using the Chinese Remainder Theorem. (If you didn't understand the previous sentence, you should ask the institution that granted you your CS degree to refund your money.)
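
A two-modulus sketch of that residue idea, with toy primes of my own choosing and a textbook extended-Euclid inverse (a real multiple-precision package would use many word-sized primes the same way):

    #include <stdio.h>
    #include <stdint.h>

    /* Work mod two small primes independently, then recombine with the
       Chinese Remainder Theorem.  The primes are illustrative values I
       picked; each residue computation stays comfortably in one word. */
    static int64_t inv_mod(int64_t a, int64_t m)   /* a^-1 mod m, gcd(a,m)=1 */
    {
        int64_t t = 0, newt = 1, r = m, newr = a % m;
        while (newr != 0) {
            int64_t q = r / newr, tmp;
            tmp = t - q * newt; t = newt; newt = tmp;
            tmp = r - q * newr; r = newr; newr = tmp;
        }
        return t < 0 ? t + m : t;
    }

    int main(void)
    {
        int64_t p = 10007, q = 10009;
        int64_t x = 12345678;            /* stand-in for a big computed result */
        int64_t xp = x % p, xq = x % q;  /* all the real work happens here */

        /* Garner-style recombination: the unique value < p*q with
           residues (xp, xq) is xp + p * ((xq - xp) * p^-1 mod q). */
        int64_t t = (((xq - xp) % q + q) * inv_mod(p % q, q)) % q;
        printf("%lld\n", (long long)(xp + p * t));   /* prints 12345678 */
        return 0;
    }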

Many base conversion algorithms are written for an arbitrary base, but would be much better off if the code were specialized for base ten. By the way, if you haven't seen Henry Massalin's great superoptimizer paper (ASPLOS '87) on computing the smallest assembly-language sequence for a particular function, you should. His program found the shortest (and probably the fastest) scheme for converting decimal-to-binary on the 68000. Hint: it only works on big-endian machines.
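
For instance, a base-ten-only converter can use the standard 32-bit magic number for division by ten, (x * 0xCCCCCCCD) >> 35, and recover each digit with a multiply and a subtract. A sketch of mine (not Massalin's output):

    #include <stdio.h>
    #include <stdint.h>

    /* Unsigned 32-bit binary-to-decimal with no divide instruction:
       (x * 0xCCCCCCCD) >> 35 equals floor(x/10) for all 32-bit x, and
       each digit falls out as x - 10*q. */
    static char *utoa10(uint32_t x, char buf[11])
    {
        char *p = buf + 10;
        *p = '\0';
        do {
            uint32_t q = (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
            *--p = (char)('0' + (x - q * 10));  /* remainder without a % */
            x = q;
        } while (x != 0);
        return p;
    }

    int main(void)
    {
        char buf[11];
        printf("%s\n", utoa10(4294967295u, buf));   /* prints 4294967295 */
        return 0;
    }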

--
Henry Baker
