Re: Expensive bounds checking (was: Unsafe Optimizations)

Newsgroups: comp.compilers
From: mccalpin@vax1.udel.edu (John D Mccalpin)
References: <1990Jun13.143951.2129@esegue.segue.boston.ma.us> <14697@thorin.cs.unc.edu>
Date: Fri, 15 Jun 90 17:20:44 GMT
Organization: College of Marine Studies, Univ. of Delaware
Keywords: superscalar, bounds checking, code, debug
Summary: we need smarter compilers!

In article <14697@thorin.cs.unc.edu> prins@prins.cs.unc.edu (Jan Prins) writes:
>James Larus writes in <1990Jun14.152939.2578@esegue.segue.boston.ma.us>:
>
>> [discussion of Gupta's Sigplan90 paper] ... After all optimizations were
>>applied, programs with bounds checking [still] ran 0-46% slower than
>>programs without bounds checking (down from 78-325% slower). That's a
>>pretty large performance degradation ...


But it is clearly a large improvement over the original case. There
are many cases in which I would be willing to suffer a 50% increase in
compute time, but relatively few in which I would be willing to suffer
a 300% increase.


>Upcoming superscalar architectures offer an opportunity to perform bounds
>checking concurrently with other operations. Propagating these checks out
>of the critical path of computation (e.g. with techniques such as those
>suggested by Gupta) may yield additional safety with little degradation.


This approach does immediately spring to mind, but it risks wasting
execution resources that could be doing useful work. On the other hand,
for my FP-intensive codes, there really is not much for the integer unit
to do while the FP unit is crunching away, so it may be no problem. The
IBM RS/6000 CPU seems like a good place to study this.
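
To make the overlap concrete: in a loop like the one below, the bounds
test needs only integer compares and a branch (the fixed-point unit),
while the accumulation needs only the floating-point unit, so a machine
with separate units could in principle issue both in the same cycle.
This is just a toy sketch -- the program name and sizes are made up:

      PROGRAM OVRLAP
C     Toy example: a per-iteration bounds check alongside FP work.
C     The IF uses only the fixed-point unit; the multiply-add uses
C     only the floating-point unit.
      INTEGER N, I
      PARAMETER (N = 1000)
      REAL A(N), B(N), S
      DO 10 I = 1, N
         A(I) = 1.0
         B(I) = 2.0
   10 CONTINUE
      S = 0.0
      DO 20 I = 1, N
C        integer-unit work that can overlap with the FP pipeline
         IF (I .LT. 1 .OR. I .GT. N) STOP 'subscript out of range'
         S = S + A(I) * B(I)
   20 CONTINUE
      PRINT *, 'dot product = ', S
      END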


In any event, I feel that the compile-time options have not yet been fully
explored. On most supercomputers, for example, array bounds checking causes
a tremendous performance degradation, even when the array references are
linear in memory.
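
For reference, here is roughly what the checked version of a simple
stride-1 loop turns into under the naive approach: two integer compares
and a conditional branch wrapped around every single element reference,
which is exactly what kills vectorization. (A hand-written sketch; the
names and sizes are made up.)

      PROGRAM NAIVE
C     Sketch of what naive bounds checking effectively inserts:
C     a per-element test, even though the subscript just marches
C     linearly through memory.
      INTEGER NMAX, I
      PARAMETER (NMAX = 1000)
      REAL A(NMAX)
      DO 10 I = 1, NMAX
         A(I) = 1.0
   10 CONTINUE
      DO 20 I = 1, NMAX
C        the check inserted before every reference to A(I)
         IF (I .LT. 1 .OR. I .GT. NMAX) STOP 'subscript out of range'
         A(I) = 2.0 * A(I)
   20 CONTINUE
      END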


In this case, it is pretty easy to see that only a few checks are
needed (typically two -- one at each end of the vector), and if
PARAMETER constants control the loop bounds, then these checks can be
done at compile time!
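
Concretely: for a stride-1 loop from ILO to IHI, testing the two
endpoints once before entering the loop is equivalent to testing every
iteration. And if ILO and IHI are PARAMETER constants, the hoisted test
becomes a constant expression that the compiler can evaluate itself,
with no run-time code at all. A hand-written sketch of the hoisted form
(names made up):

      PROGRAM HOIST
C     The two per-vector checks, hoisted out of the loop.  If ILO
C     and IHI were PARAMETER constants rather than variables, the
C     IF below would be a compile-time constant expression and no
C     run-time check would be needed at all.
      INTEGER NMAX, ILO, IHI, I
      PARAMETER (NMAX = 1000)
      REAL A(NMAX)
      ILO = 1
      IHI = NMAX
C     one check at each end of the vector, done once
      IF (ILO .LT. 1 .OR. IHI .GT. NMAX) STOP 'subscript out of range'
      DO 10 I = ILO, IHI
         A(I) = 0.0
   10 CONTINUE
      END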


>It would make a nice change if the next generation of whizbang machines were
>not only faster, but "safer" too!


Of course the language of choice in my field (FORTRAN) allows the user
to stomp all over memory, and a ridiculously large fraction of the large
numerical codes in the world play very loose with array bounds anyway.
Not *my* codes, of course.... :-)
--
John D. McCalpin mccalpin@vax1.udel.edu
Assistant Professor mccalpin@delocn.udel.edu
College of Marine Studies, U. Del. mccalpin@scri1.scri.fsu.edu

