Re: Bounds checking

From: Max Hailperin <max@Neon.Stanford.EDU>
Date: Fri, 15 Jun 90 23:28:44 GMT
Organization: Computer Science Department, Stanford University
In article <1990Jun15.email@example.com> firstname.lastname@example.org
(Peter Klausler) writes:
>> Upcoming superscalar architectures offer an opportunity to perform bounds
>> checking concurrently with other operations.
>Yes. They also offer an opportunity to perform useful work concurrently with
>other operations. Which would a real customer, charged by the cycle, prefer?
>> It would make a nice change if the next generation of whizbang machines were
>> not only faster, but "safer" too!
>I agree; but bounds checking is computation, and is not going to come for free.
Perhaps this is too obvious a point to bother making, but compilers
can't always find "useful" work (accepting the definition of checking
as useless) to overlap. Even the best code schedulers, even with lots
of registers, etc., can't always fill every slot by a long shot. Last
I heard, the results from IBM's experience with their XL compilers
(which are pretty state of the art) and the RS/6000 architecture were
that the floating point unit almost always lagged the integer unit (on
scientific codes). Since bounds checking would be on the integer
side, this implies it could be squeezed in *without* displacing
"useful" work.

Moreover, it shouldn't be that tough to implement a
"check where free" mode, intermediate between checking on and checking
off, which caused the checking code to be emitted only where the
scheduler can fit it in without displacing "useful" work, and omitted
elsewhere. Then it would indeed be fair to say that the new
generation of technology had brought increased safety along with
higher speed. Of course, it would still pay to optimize as much of
the checking away as possible, to improve the chances that what
remains would fit the gaps in the schedule.

In addition to questions
about how one would actually implement such a scheme in a compiler,
it is interesting to ponder what the functionality should be. In particular,
it seems that sometimes you might be able to detect the bounds error for
free only if you were willing to do so post-facto. Is that still worth
doing? (Probably.)

Also, someone will have to think hard about when
and whether "free" operations are really free given memory-system costs.
Since the designers of super-scalar machines are hoping that the compiler
will be able to fill all the slots with real work, they do try to provide
sufficient memory bandwidth for that, but I don't know for sure that
they always succeed 100% including things like cache misses and page faults.
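The "check where free" mode could be sketched roughly as follows. This is a toy model, not any real compiler's internals; the slot representation and the name `place_check` are invented for illustration:

```c
/* Toy model of "check where free": each cycle has one integer-unit
 * slot; a bounds check may occupy a slot only if the scheduled code
 * left it empty.  Purely illustrative. */
enum op { FREE = 0, INT_WORK, CHECK };

/* Try to place a check in the first free integer slot at or after
 * cycle `earliest`.  Returns the cycle used, or -1 if every slot
 * already holds real work -- in which case the check is simply not
 * emitted, so no "useful" work is displaced. */
int place_check(enum op int_slot[], int ncycles, int earliest)
{
    for (int c = earliest; c < ncycles; c++) {
        if (int_slot[c] == FREE) {
            int_slot[c] = CHECK;
            return c;
        }
    }
    return -1;
}
```

The interesting policy question is what `earliest` should be: insisting the check land before the access gives full safety, while letting it land later gives the post-facto detection pondered above.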
To add emphasis to Klausler's phrase, users are charged by the *cycle*,
not the active functional-unit cycle, and that means that it may be
possible to sneak some, perhaps most, bounds checking code in for free.
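Concretely, the checks in question are pure integer work. In a hypothetical loop like the one below (not from the original posts), each bounds test is a compare and branch that could issue on the integer side while the floating-point multiply-adds keep the FP unit busy, so on a machine where the FP unit lags, the checks could ride along in otherwise idle cycles:

```c
#include <stdio.h>
#include <stdlib.h>

/* Dot product with explicit bounds checks.  The checks (integer
 * compares and branches) are the sort of work that could be
 * scheduled into integer-unit slots left empty by FP-bound code. */
double checked_dot(const double *a, int len_a,
                   const double *b, int len_b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (i >= len_a || i >= len_b) {   /* integer-side bounds check */
            fprintf(stderr, "bounds error at index %d\n", i);
            abort();
        }
        s += a[i] * b[i];                 /* FP work dominates the loop */
    }
    return s;
}
```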