From: firstname.lastname@example.org (Jeff Enderwick)
Organization: Motorola DSP Compilers
Date: Tue, 22 Dec 1992 16:53:43 GMT
> [dismal report on the state of DSP compilers.]
> - Do you agree on the above ? (the quality of DSP compilers)
Well, sort of. I would say that generalizing to a "performance hit factor"
should really be avoided. Sometimes compilers match hand code, but not
often. Sometimes they're worse than 5x. The trouble with DSP compilers
stems from language, architecture, and the nature of how DSPs are used.
DSPs are, in almost every case, extremely cost sensitive. The huge majority
of DSPs sold use a fractional data type, because the dynamic range added
by floating point doesn't nearly offset the cost of storing exponents in
memory! With compiled code being slower than hand code, the cost of the
end product goes way up. Let's say, for example, that I use a compiler when
designing a cellular phone. If the performance is 1.22x hand code, then I
would need to go from a 27MHz part to a 33MHz part, significantly raising
the cost of every phone that I produce. I guess my point is that 99.9% of
DSP applications are *so* cost sensitive that *no* performance hit is
acceptable.
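The fractional data type is one place the cost shows up directly. Here is a
rough sketch in C of a Q15 fractional multiply -- the operation a fractional
DSP does in a single cycle, but which plain C must emulate with a widening
multiply, a shift, and a saturation check. (The saturation rule shown is the
common convention, not any particular chip's exact semantics.)

```c
#include <stdint.h>

/* Sketch: Q15 fractional multiply emulated in plain C. A fractional DSP
 * does this in one cycle; in C it takes a 32-bit product, a shift, and
 * a check for the single overflow case (-1.0 * -1.0). */
static int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;  /* Q15 * Q15 -> Q30 product */
    p >>= 15;                             /* renormalize to Q15 */
    if (p > 32767)                        /* only -32768 * -32768 overflows */
        p = 32767;                        /* saturate to 0.99997 */
    return (int16_t)p;
}
```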
Why is there a performance hit, when "everyone knows" that compiled code is
faster than hand code? Well, two reasons: popular programming languages
such as C don't always map well onto DSP architectures, and DSP
architectures aren't typically amenable to compiler optimization. DSP
architectures often have features whose use isn't generally expressible in
languages such as C. Their use requires insight into the algorithm being
performed. Language extensions are possible, but so far there hasn't been
any consensus, and not all DSPs have the same bag of tricks. Remember:
if you aren't 100% as fast as hand code the compiler will not be used for
the time-critical sections.
DSP architects tend to use whatever tricks they can to get maximum
performance over a relatively small set of DSP algorithms, general
performance be damned. Instruction sets tend to be highly non-orthogonal.
Unless you've done CPU microcode or programmed a DSP, you probably have
not seen anything like these chips. For example, register selection and
scheduling can interact in such a way that whether two operations can be
scheduled to fire at the same time depends upon the register selections
made for the two operations. I'm not talking about false
data-dependencies, but rather that
fmpy d5,d6,d2 fsub.x d7,d1
is allowed, but
fmpy d3,d6,d2 fsub.x d7,d1
isn't because (d3,d6) isn't among the list of allowed *operand pairs* for
a parallel multiply and subtract. This is an example from our high-end,
"compiler friendly" DSP. It only gets tougher when you move down the line.
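A code generator has to model constraints like this explicitly before it can
pack two operations into one instruction word. A minimal sketch of such a
legality check in C -- the pair table below is invented for illustration and
is *not* the real chip's table, but it reproduces the two cases above:

```c
#include <stdbool.h>

/* Sketch (hypothetical pair table): before scheduling a multiply and a
 * subtract to fire in the same cycle, the code generator asks whether
 * the multiply's source registers form an allowed operand pair. Here
 * we pretend only d4..d7 may feed a parallel multiply. */
static bool legal_pair(int mul_src1, int mul_src2)
{
    return mul_src1 >= 4 && mul_src1 <= 7 &&
           mul_src2 >= 4 && mul_src2 <= 7;
}
```

With this table, (d5,d6) packs and (d3,d6) doesn't, matching the two
instruction fragments above -- and note how the scheduler's freedom now
depends on choices the register allocator already made.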
So why bother with compilers at all? Cost and time to market. Some DSP
systems are composed of a "normal" microprocessor and a DSP. If you make
it easy to program the DSP in a high level language, then you make it
easier to do without the microprocessor. Note that the hard real-time part
of the application will, in general, still be hand coded. Compilers help
significantly on the time to market front. The better the compiler, the
greater the portion of the application that can be coded in C. It is
faster to code in a high level language than in assembly.
> - non-procedural languages as e.g. Silage are better ?
I'm unfamiliar with Silage. Where can I find info?
> - no or not enough use of the low-overhead-loop facility of a
> processor as the "DO" for Motorola and the "RPT" for Texas
> Instruments processors ?
Do loops are actually pretty easy. We'll even compute the loop count at
run time and use a DO if it's feasible (of course we must also check the
computed value, since a zero operand makes the DO loop 4 billion times!).
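The check amounts to a guard the compiler wraps around the hardware loop.
A sketch of the emitted shape, written as C (the hardware DO is shown as a
comment; the function name and body are invented for illustration):

```c
/* Sketch: a run-time loop count must be guarded before it reaches a
 * hardware DO loop, because the loop-count register treats 0 as 2^32
 * iterations. The compiler emits the equivalent of the if() below. */
void scale(float *x, unsigned n, float k)
{
    if (n != 0) {              /* guard: skip the DO entirely when n == 0 */
        /* DO n, _end  -- zero-overhead hardware loop would go here */
        for (unsigned i = 0; i < n; i++)
            x[i] *= k;
    }
}
```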
> - no use of special addressing ? (e.g. in circular buffers)
You really need to extend the language if you're going to get at the meat
of the problem. See above.
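To see why, here is a circular (delay-line) buffer in plain C. The explicit
wrap below costs an operation on every access; a DSP's modulo address
registers do it for free, but nothing in standard C lets the programmer say
"this buffer is circular," so the compiler can't use them. (Buffer size and
names are invented for illustration.)

```c
/* Sketch: a circular delay line in plain C. The index wrap is explicit
 * and paid on every push; DSP modulo addressing hides it entirely, but
 * C has no way to express that intent without an extension. */
#define N 8                        /* power of two, so the wrap is a mask */
static float delay[N];
static unsigned head;

void push(float x)
{
    delay[head] = x;
    head = (head + 1) & (N - 1);   /* the wrap modulo hardware would hide */
}
```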
>I also know of three code generation approaches that would generate more
>optimal code : - The GABRIEL (PTOLEMY) system of prof. Lee at Berkeley
I believe that these systems are basically library goop-together schemes
rather than traditional compilers. From what I understand (and I'm *no*
expert) hand written code fragments are glued together based upon a
graphical representation of DSP operators and data paths. The problem is
that you can't program generally. Maybe tightly coupling a C compiler with
one of these systems is really the way to go. I think that when these
people speak of "global scheduling" they mean scheduling very tiny jobs
over multiple DSPs, rather than moving machine instructions from basic
block to basic block.