Re: Interprocedural Pointer Tracking
From: Bob Wilson <bwilson@shasta.Stanford.EDU>
Date: Thu, 30 Nov 1995 18:28:26 GMT
email@example.com (Robert Metzger) wrote:
> I agree about numerical programs that use blocks of contiguous, homogeneous
> storage (simulating arrays with pointers).
> I am not convinced about recursive data structures. There are many
> interesting papers, but I don't know of an implementation that can handle
> large programs (say GCC) efficiently.
We're getting closer to handling big programs, but GCC is beyond the
state of the art. Actually, the style of the code is more of an issue
than the program size. GCC is extremely hard to analyze because it
uses its own heap memory manager, it makes assumptions about the
layout of the stack to implement its own alloca(), it uses many
recursive data structures and procedures, etc. Even if someone
does succeed in analyzing GCC, the results will not be very accurate
because of these features.
You also have to make a distinction between different ways of analyzing
recursive data structures. Most of the interesting papers that you
mention discuss ways to identify particular kinds of data structures,
like lists and trees. That is really a separate (and much harder)
problem than finding the potential targets of the pointers.
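The distinction can be illustrated with a small example of my own (not taken from any of the papers). For the loop below, a points-to analysis only concludes that p->next may point to any node from the single malloc site; a shape analysis aims for the much stronger property that the result is an acyclic list.

```c
/* Illustrative list construction.  One allocation site means all
   nodes are indistinguishable to a points-to analysis; a shape
   analysis instead tracks that 'head' always remains an acyclic
   singly-linked list. */
#include <stdlib.h>

struct node {
    int val;
    struct node *next;
};

static struct node *build(int n)
{
    struct node *head = NULL;
    for (int i = 0; i < n; i++) {
        struct node *p = malloc(sizeof *p);  /* the one alloc site */
        p->val = i;
        p->next = head;   /* links only to older nodes: no cycle */
        head = p;
    }
    return head;
}
```

Knowing "p->next points to a node from this site" is enough for many dependence questions; knowing "the structure is a list, not a DAG or cyclic graph" is what the shape-analysis papers are after, and it is much harder to establish.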
> Your predictive powers are less able than your compiler skills. :-)
> Commercial compilers HAVE HAD interprocedural pointer analysis since
> the production release of the Convex Application Compiler back in May 1991.
> You can find a discussion of it in:
> Pointer Target Tracking: An Empirical Study
> J. Loeliger, R. Metzger, M. Seligman and S. Stroud
> Proceedings of Supercomputing '91, IEEE
> pages 14-23
Oops, sorry. I was thinking of the more recent analyses that handle
a broader class of programs. You make a good point, though. If your
goal is to analyze Fortran-style C programs, it might make more sense
to use an approach like yours, which gives good results without
all the extra complexity.
> As for speed, by the time I had done the third(!) implementation, I
> had gotten the cost down to a few minutes for a 30,000 line numerical
> C application. That was on a Convex C-220 in 1991, which is at least
> an order of magnitude slower for this type of code than the best
> workstation you can buy today.
I think there is something about IPA that requires multiple
reimplementations.... :-( I've lost count of how many times I've
had to rip out major sections of my analysis and rewrite them.