From: max@gac.edu (Max Hailperin)
Newsgroups: comp.compilers,comp.dsp
Date: 6 Mar 1996 21:08:14 -0500
Organization: Gustavus Adolphus College, St. Peter, MN
References: 96-03-006 96-03-034
Keywords: DSP, optimize
torbenm@diku.dk (Torben Ægidius Mogensen) writes:

> ...
> Part of the problem comes from using C/C++. ... C compilers
> typically use a small number of different sizes of integers, e.g.,
> 8, 16, and 32 bits. An assembly programmer might see that only 24
> bits are needed, which allows him (on an 8-bit machine) to operate
> on these numbers more efficiently than in C. This problem is
> further aggravated by the fact that the sizes of short, int, and
> long depend on the compiler/processor. I much prefer the Pascal
> idea of explicitly specifying minimum and maximum values....
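To make the quoted point concrete, here is a minimal C sketch (the type
name and three-byte layout are my own illustration, not from either
post) of what the assembly programmer does by hand on an 8-bit machine,
where C itself offers no 24-bit type:

    #include <stdio.h>

    /* Illustrative only: a 24-bit counter kept in exactly three
       bytes, little-endian, the way an 8-bit CPU would hold it.
       In C one is normally forced up to a 32-bit type instead. */
    typedef struct { unsigned char byte[3]; } uint24;

    /* Increment, propagating the carry one byte at a time, just as
       the 8-bit hardware does. */
    static void uint24_inc(uint24 *x) {
        if (++x->byte[0] == 0)        /* carry out of the low byte?    */
            if (++x->byte[1] == 0)    /* carry out of the middle byte? */
                ++x->byte[2];         /* wraps silently at 2^24        */
    }

    int main(void) {
        uint24 n = {{0xff, 0xff, 0x00}};              /* 0x00ffff */
        uint24_inc(&n);
        printf("%02x%02x%02x\n",
               n.byte[2], n.byte[1], n.byte[0]);      /* 010000   */
        return 0;
    }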
An even bigger problem, in my experience, is that C/C++ are built on
the strange notion that when you operate on two n-bit numbers, you get
an n-bit result, even when multiplying. This doesn't make a whole lot
of mathematical sense, and moreover the processor architects have in
my experience always gotten it right -- they have instructions for
multiplying two 16-bit numbers and getting a 32-bit product, or two
32-bit numbers and getting a 64-bit product, or whatever. The C
compiler "hides" these from you, which can cause a major slowdown.
multiple-precision multiplication, which is generally implemented
roughly as the normal grammar-school multiplication algorithm but with
each "digit" being n bits wide (i.e., radix 2^n), you typically wind
up having to use half as large a value of n, hence twice as many
"digits" in each of the multiplier and multiplicand, hence a four-fold
slowdown (2 squared, since the number of digit-by-digit products grows
as the square of the number of digits).
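For concreteness, here is a sketch of the grammar-school scheme with
16-bit "digits" (radix 2^16), as one is forced to write it when the
largest product the language will hand back is 32 bits; the function
name and calling convention are my own, not from the post:

    #include <stddef.h>
    #include <stdint.h>

    /* Schoolbook multiply: out (na+nb digits, little-endian) = a * b.
       Each 16x16 digit product, plus the running digit and carry,
       still fits in 32 bits.  With the hardware's 32x32->64 multiply
       exposed, 32-bit digits would halve na and nb, quartering the
       number of inner-loop steps -- the four-fold slowdown above. */
    void mp_mul(const uint16_t *a, size_t na,
                const uint16_t *b, size_t nb,
                uint16_t *out) {
        size_t i, j;
        for (i = 0; i < na + nb; i++) out[i] = 0;
        for (i = 0; i < na; i++) {
            uint32_t carry = 0;
            for (j = 0; j < nb; j++) {
                uint32_t t = (uint32_t)a[i] * b[j] + out[i + j] + carry;
                out[i + j] = (uint16_t)t;  /* low 16 bits: new digit  */
                carry = t >> 16;           /* high 16 bits: the carry */
            }
            out[i + nb] = (uint16_t)carry; /* top digit of this row   */
        }
    }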
-Max Hailperin
Assistant Professor of Computer Science
Gustavus Adolphus College
800 W. College Ave.
St. Peter, MN 56082
USA
http://www.gac.edu/~max