Re: Crenshaw's Tutorial

Jack Crenshaw <jcrens@earthlink.net>
5 Feb 2000 18:52:31 -0500

          From comp.compilers

Related articles
Crenshaw's Tutorial colin@bentanimation.com (Colin Doncaster) (2000-01-19)
Re: Crenshaw's Tutorial rkrayhawk@aol.com (2000-01-21)
Re: Crenshaw's Tutorial jcrens@earthlink.net (Jack Crenshaw) (2000-02-05)
Re: Crenshaw's Tutorial joachim.durchholz@halstenbach.com.or.de (Joachim Durchholz) (2000-02-10)
Re: Crenshaw's Tutorial gneuner@dyn.com (2000-02-12)
Re: Crenshaw's Tutorial alanf@ns.net (Alan Fargusson) (2000-02-15)
Re: Crenshaw's Tutorial rhyde@shoe-size.com (Randall Hyde) (2000-02-15)
Re: Crenshaw's Tutorial joachim.durchholz@halstenbach.com.or.de (Joachim Durchholz) (2000-02-17)
Re: Crenshaw's Tutorial david.thompson1@worldnet.att.net (David Thompson) (2000-02-21)
[2 later articles]
From: Jack Crenshaw <jcrens@earthlink.net>
Newsgroups: comp.compilers
Date: 5 Feb 2000 18:52:31 -0500
Organization: Compilers Central
References: 00-01-073
Keywords: courses

Colin Doncaster wrote:
>
> I've just started going through Crenshaw's tutorial on compiler
> construction and had a few questions to ask. First of all, it seems
> what is presented here is a full C method of implementing a compiler -


There seems to be a lot of confusion lately about my tutorial series,
so maybe it's about time I set the record straight on a few points.


First, the tutorial is written around Pascal, not C. If you ran into a
tutorial presenting a C implementation, it definitely wasn't mine.
Second, the method uses recursive descent parsing, which is
essentially the opposite of the LALR method used by yacc and bison
(with lex or flex as their companion scanner generators). Recursive
descent uses the implicit stack associated with function calls, and
therefore needs no externally generated tables. LALR parsers, by
contrast, are shift-reduce parsers driven by an automatically
generated state-machine table.
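
To see both points at work, here is a bare-bones sketch of a
code-emitting recursive-descent expression parser, in the
single-character, single-digit style the series uses for simplicity.
This is not the tutorial's actual code; it only illustrates the
technique. Note that there is no parser stack anywhere: the recursive
call into Expression inside Factor rides on the CPU's own call stack.

  program RDSketch;

  var
    Look: char;                   { one-character lookahead }

  procedure GetChar;              { advance the lookahead }
  begin
    Read(Look)
  end;

  procedure EmitLn(s: string);    { write one line of 68000 code }
  begin
    WriteLn(#9, s)
  end;

  procedure Expression; forward;

  { A factor is a digit or a parenthesized subexpression. The   }
  { recursive call into Expression is where the implicit stack  }
  { does the work an LALR parser needs an explicit stack for.   }
  procedure Factor;
  begin
    if Look = '(' then
    begin
      GetChar;                    { consume '(' }
      Expression;
      GetChar                     { consume ')' }
    end
    else
    begin
      EmitLn('MOVE #' + Look + ',D0');
      GetChar
    end
  end;

  procedure Term;                 { handles '*' }
  begin
    Factor;
    while Look = '*' do
    begin
      EmitLn('MOVE D0,-(SP)');    { save the left operand }
      GetChar;
      Factor;
      EmitLn('MULS (SP)+,D0')
    end
  end;

  procedure Expression;           { handles '+' }
  begin
    Term;
    while Look = '+' do
    begin
      EmitLn('MOVE D0,-(SP)');
      GetChar;
      Term;
      EmitLn('ADD (SP)+,D0')
    end
  end;

  begin
    GetChar;                      { prime the lookahead }
    Expression
  end.

Each grammar rule is one procedure, so the correspondence between the
syntax of the language and the structure of the parser is visible at
a glance.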


While it may be true (though it is arguable) that the LALR methods
produce more efficient compilers (though not necessarily more
efficient object code), it is extremely difficult to look at a
shift-reduce table and see any connection between it and the syntax
of the language it implements.


The purpose of my tutorial was to instruct, and for that purpose the
recursive descent method is infinitely superior. In fact, I would go
so far as to say that anyone who uses yacc and lex for compiler
construction hasn't really learned compiler construction; what they
have learned is how to run yacc and lex. With recursive descent, the
way the parser breaks the source language into syntax elements, and
the code it generates for them, are immediately transparent. You
certainly can't say the same for table-driven compilers.


One big advantage of an LALR parser is that it can handle syntax
constructs that LL (recursive-descent) parsers cannot. That is true.
But these constructs rarely show up in practice, and there is at
least a reasonable argument that if you're designing a programming
language, you would be better off designing one that _IS_ easily
parsed. That is the general philosophy behind all of Wirth's
languages, and it is no accident that Pascal compilers generally run
at least twice as fast as equivalent C compilers.
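
The classic example of such a construct is a left-recursive rule like
<exp> ::= <exp> - <term>. An LALR parser accepts it as written, while
a naive recursive-descent parser coded straight from it would recurse
forever. The standard LL cure is to rewrite the rule as a <term>
followed by zero or more "- <term>" tails, which also keeps
subtraction left-associative.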


Finally, RD parsers are generally acknowledged to provide better and
more meaningful error detection and reporting. In an RD parser, it is
easy to see at what point in the parse the error appeared, and what
the probable error is. This is much more difficult in a table-driven
parser, where all one can really report is an illegal state
transition.
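
For a taste of what "meaningful" means here, consider how little
machinery an RD parser needs to say exactly what it wanted at the
point of failure. A minimal sketch (the names are illustrative,
though the series uses a similar device):

  procedure Abort(s: string);     { print the message and stop }
  begin
    WriteLn('Error: ', s, '.');
    Halt
  end;

  procedure Expected(s: string);  { report what the parser wanted }
  begin
    Abort(s + ' Expected')
  end;

  { The routine parsing a factor knows it wants a digit or a '(' }
  { at that point, so it can simply say so:                      }
  {     if not IsDigit(Look) then Expected('Integer');           }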


While I'm on the subject, I saw someone recently complaining about
the inefficient code generated by my methods. That's by design, not
by omission. Remember, the purpose of the tutorials is not to teach
how to build the most efficient compiler, but how to understand the
mechanisms of compiler behavior. Whenever an implementation decision
had to be made, it was made in favor of simple and transparent
parsing modules and code generators, never in favor of tricky code
optimizations.


In the series (which was never finished), I promised to take up
optimization issues near the end. To quote New Zealand compiler
expert John Spray: the best way to optimize code is not to generate
non-optimal code in the first place. Thus John looks for simple
optimizations _BEFORE_ the code is emitted, not after.
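
In that spirit, the check happens at the moment of emission rather
than in a later pass. A trivial, hypothetical illustration (EmitLn
writes one line of assembly, as in the sketch above):

  { Adding a literal zero needs no code at all, so emit none. }
  procedure EmitAddImmediate(n: string);
  begin
    if n <> '0' then
      EmitLn('ADD #' + n + ',D0')
  end;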


In my tutorial, I gave two hints about how to do this easily. The
first, which someone in this NG mentioned, is to use the multiple
registers of a CPU like the 68000 as a small mini-stack. Thus, for
example, instead of pushing and popping every argument of an
expression, we let the compiler manage its own "stack," with, say,
the first five or six levels held in registers. It is trivially easy
to get the compiler to do this, and to switch to the RAM stack when
the push-down level exceeds what the registers can hold.


Doing this gives you a sort of poor man's register optimization, and
somewhat more efficient computation, at virtually no cost in compiler
complexity.
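
Here is a minimal sketch of such a compiler-managed stack for the
68000, again assuming an EmitLn that writes one line of assembly; the
Push/Pop names, the choice of D1..D6, and the depth counter are my
own, for illustration only:

  const
    MaxReg = 6;                   { levels 1..6 live in D1..D6 }

  var
    Depth: integer;               { push-down level, initially 0 }

  { Save D0 on the compiler's "stack": a data register while the }
  { level is shallow, the real RAM stack once we run out.        }
  procedure Push;
  begin
    Depth := Depth + 1;
    if Depth <= MaxReg then
      EmitLn('MOVE D0,D' + Chr(Ord('0') + Depth))
    else
      EmitLn('MOVE D0,-(SP)')
  end;

  { Recover the most recently pushed level into D1. }
  procedure Pop;
  begin
    if Depth <= MaxReg then
      EmitLn('MOVE D' + Chr(Ord('0') + Depth) + ',D1')
    else
      EmitLn('MOVE (SP)+,D1');
    Depth := Depth - 1
  end;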


A more serious efficiency issue is that of constant arithmetic.
Currently, a statement like:


x = 2*3 + 4


would result in the arithmetic being done in registers and on the
stack at execution time. Even the simple statement,


x = 1


would result in something like


  MOVE #1,D1
  MOVE D1,X


when a simple move directly to X would do.


These inefficiencies are also easy to deal with. In all of my
examples, I make no distinction between a literal constant and a
variable name. But that decision was made solely to keep the compiler
simple. It would be easy enough to add a second parser, one that
simplifies constant expressions at compile time and stores only the
result.
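
A sketch of that second parser, under the same single-digit
simplifications and the same Look/GetChar machinery as the earlier
sketch: it mirrors the grammar of the code-emitting routines, but
computes the value at compile time, so only the folded result need
ever be emitted. The Const* names are mine:

  { A constant factor here is just a single digit. }
  function ConstFactor: integer;
  begin
    ConstFactor := Ord(Look) - Ord('0');
    GetChar
  end;

  function ConstTerm: integer;    { folds '*' at compile time }
  var
    v: integer;
  begin
    v := ConstFactor;
    while Look = '*' do
    begin
      GetChar;
      v := v * ConstFactor
    end;
    ConstTerm := v
  end;

  function ConstExpression: integer;  { folds '+' at compile time }
  var
    v: integer;
  begin
    v := ConstTerm;
    while Look = '+' do
    begin
      GetChar;
      v := v + ConstTerm
    end;
    ConstExpression := v
  end;

  { For x = 2*3+4 the whole RHS folds to 10, so the assignment }
  { can emit a single move of the constant.                    }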


The reason the compiler generates bad code for constant expressions
is that it does no lookahead beyond the single-character lookahead
inherent in LL(1). Each syntax element is processed as it is
received, and the code for it is generated immediately. An easy fix
(but one that requires a larger and more complex parser) is to defer
code generation until the entire RHS is complete. The most effective
way to generate efficient code is via an intermediate tree structure
for the expression. I am not talking about an entire tree-structured
IL, just a temporary tree sufficient to hold one RHS expression.
Once the tree is complete, it's a fairly simple matter to build an
optimizing engine that walks the tree and simplifies the expression
before emitting any actual code. To do so requires only that we
save, along with the leaves of the tree, information about the data
types: constant or variable, scalar or array, and so on. A good
tree-walker would rearrange the tree to group the constants
together, and reduce them to a single constant where possible.
With such a tree, the example above,


x = 2*3+4


would be reduced to the simple


MOVE #10,D1
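
To make the tree version concrete, here is a minimal sketch of a
temporary RHS tree and the folding walk over it; the node layout and
names are mine, purely illustrative. It collapses any operator whose
children are both constants; regrouping constants that are separated
by variables, as suggested above, would additionally require
rearranging commutative subtrees first.

  type
    NodeKind = (nkConst, nkVar, nkAdd, nkMul);
    PNode = ^TNode;
    TNode = record
      Kind: NodeKind;
      Value: integer;             { used when Kind = nkConst }
      Name: string;               { used when Kind = nkVar }
      Left, Right: PNode          { used for nkAdd and nkMul }
    end;

  { Fold bottom-up: wherever both children of an operator are }
  { constants, collapse the operator node into a constant.    }
  procedure Fold(n: PNode);
  begin
    if n^.Kind in [nkAdd, nkMul] then
    begin
      Fold(n^.Left);
      Fold(n^.Right);
      if (n^.Left^.Kind = nkConst) and (n^.Right^.Kind = nkConst) then
      begin
        if n^.Kind = nkAdd then
          n^.Value := n^.Left^.Value + n^.Right^.Value
        else
          n^.Value := n^.Left^.Value * n^.Right^.Value;
        n^.Kind := nkConst        { the children could be Disposed here }
      end
    end
  end;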


Jack

