Re: Expensive bounds checking (was: Unsafe Optimizations) (John D McCalpin)
Fri, 15 Jun 90 17:20:44 GMT

          From comp.compilers


Newsgroups: comp.compilers
From: (John D McCalpin)
References: <> <>
Date: Fri, 15 Jun 90 17:20:44 GMT
Organization: College of Marine Studies, Univ. of Delaware
Keywords: superscalar, bounds checking, code, debug
Summary: we need smarter compilers!

In article <> (Jan Prins) writes:
>James Larus writes in <>:
>> [discussion of Gupta's Sigplan90 paper] ... After all optimizations were
>>applied, programs with bounds checking [still] ran 0-46% slower than
>>programs without bounds checking (down from 78-325% slower). That's a
>>pretty large performance degradation ...

But it is clearly a large improvement over the original case. There
are many cases in which I would be willing to suffer a 50% increase in
compute time, but relatively few in which I would be willing to suffer
a 300% increase.

>Upcoming superscalar architectures offer an opportunity to perform bounds
>checking concurrently with other operations. Propagating these checks out
>of the critical path of computation (e.g. with techniques such as those
>suggested by Gupta) may yield additional safety with little degradation.

This approach does immediately spring to mind, but it risks wasting
resources that could do useful work. On the other hand, for my
FP-intensive codes there really is not much for the integer unit to do
while the FP unit is crunching away, so it may be no problem. The IBM
RS/6000 cpu seems like a good place to study this.

In any event, I feel that the compile-time options have not yet been fully
explored. On most supercomputers, for example, array bounds checking causes
a tremendous performance degradation, even when the array references are
linear in memory.

In this case, it is pretty easy to see that only a few checks are
needed (typically 2 -- one at each end of the vector), and if PARAMETERS
control the loops, then these checks can be done at compile-time!

>It would make a nice change if the next generation of whizbang machines were
>not only faster, but "safer" too!

Of course the language of choice in my field (FORTRAN) allows the user
to stomp all over memory, and a ridiculously large fraction of the large
numerical codes in the world play very loose with array bounds anyway.
Not *my* codes, of course.... :-)
John D. McCalpin
Assistant Professor
College of Marine Studies, U. Del.
