|INTERMEDIATE CODE DESIGN email@example.com (1995-06-24)|
|Intermediate code design firstname.lastname@example.org (David Stone 5887) (1998-01-05)|
|Re: Intermediate code design email@example.com (1998-01-06)|
|Re: Intermediate code design firstname.lastname@example.org (David Chase) (1998-01-06)|
|Re: Intermediate code design email@example.com (Charles Fiterman) (1998-01-08)|
|Re: Intermediate code design firstname.lastname@example.org (Christian von Roques) (1998-01-08)|
|From:||Christian von Roques <email@example.com>|
|Date:||8 Jan 1998 00:54:24 -0500|
David Stone 5887 <firstname.lastname@example.org> writes:
> (1) Updating operations.
Having special constructs for updating operations might be practical
for simple non-optimizing compilers, but as soon as you start doing
data-flow analyses you'll regain that information anyway and any
additional construct for updating operations will just be a waste of
effort. It is a good design goal for an intermediate representation
to specify as little as necessary and no more.
> (2) Function calls, especially pure function calls
> [interprocedural dataflow analysis through lisp-like argument lists]
> Here all the above [cons] operations can be treated like simple
> arithmetic operations for the purpose of dataflow analysis.
> However, I have not read of this approach anywhere: is there some
> hidden snag, or is there a better approach?
The style of argument lists is an implementation issue. It isn't
fundamentally harder or easier to do data-flow analysis on lists than
on n-ary operations. Several intermediate representations I know of
use the approach of lisp-like argument lists [MOBIL of Mocka, CCMIR
of the Compare project, ...]; they do not gain much, but lose easy
random access through this restriction to fixed-arity operations.
Personally I prefer n-ary operations, as they are easy and fast to
implement and fit the n-ary operations they model. Unless there is a
good reason for a change of model, one shouldn't change it.
> Any pointers to articles or books would be very welcome.
You should take a look at the following two articles [1,2]. The
intermediate representations they describe reduce the amount of
implicit (over-)specification, mostly order and memory constraints.
This makes correct manipulations much easier, as it is not necessary
to check whether an implicitly specified constraint really is
necessary or can be ignored.
If you're planning to design an intermediate representation for an
optimizing compiler, you definitely should read about, and probably
use, the ideas behind SSA and the explicit modeling of fine-grained
storage dependencies [6,7]. For an easy introduction see [5], which
nicely handles the case of reducible CFGs.
For the intermediate representation and optimizer of Fiasco (Fiasco Is
A Sather COmpiler), we settled for a design similar to Click's. It
was easy to implement inlining, global code motion, and value
numbering. The compiler usually gets *faster* when these are enabled,
because the time spent on optimization is more than made up for by the
time the backend saves by generating less code.
The compiler is written in ANSI-C, used for research, teaching
compiler construction, and more. It is freely available via anonymous
ftp from i44ftp.info.uni-karlsruhe.de/pub/sather/ . You probably just
want to take a look at the ir/ subdirectory of the sources: those
fewer than 5800 lines of code contain the complete IR handling,
including the inliner, the optimizer, and an interpreter for a trivial
stack-based language used to generate IR for the `builtin' primitive
operations, so that they can be implemented in the standard library
and need not be hardcoded in the compiler itself.
Hope this helps
[1] `A Simple Graph-Based Intermediate Representation'
    Cliff Click and Michael Paleczny
    SIGPLAN '95 Workshop on Intermediate Representations, vol.30 #3 pp.35--49
[2] `Value Dependence Graphs: Representation Without Taxation'
    Daniel Weise, Roger F. Crew, Michael Ernst and Bjarne Steensgaard
    21st Annual Symposium on Principles of Programming Languages
[3] `Global Code Motion, Global Value Numbering'
    Cliff Click
[4] `The Program Dependence Web: A Representation Supporting Control-,
    Data- and Demand-Driven Interpretation of Imperative Languages'
    R. A. Ballance, A. B. Maccabe and K. J. Ottenstein
    SIGPLAN '90 Conference on Programming Language Design and Implementation
[5] `Optimizing Compilers for Structured Programming Languages'
    PhD thesis of Marc Michael Brandis, ETH Zuerich, 1995
[6] `Automatic Construction of Sparse Dataflow Evaluation Graphs'
    J.-D. Choi, R. Cytron and J. Ferrante
    19th Annual Symposium on Principles of Programming Languages, Jan. 1991
[7] `Efficiently Computing Static Single Assignment Form and the
    Control Dependence Graph'
    Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman and
    F. Kenneth Zadeck
    ACM Transactions on Programming Languages and Systems, 1991 vol.13 #4 pp.451--490