|yacc parse tree firstname.lastname@example.org (Josh Adams) (1998-12-10)|
|Re: yacc parse tree email@example.com (Simon Cozens) (1998-12-13)|
|Re: yacc parse tree firstname.lastname@example.org (1998-12-13)|
|Re: yacc parse tree email@example.com (1998-12-18)|
|Re: yacc parse tree firstname.lastname@example.org (1998-12-18)|
|Re: yacc parse tree email@example.com (Josh Adams) (1998-12-19)|
|Re: yacc parse tree firstname.lastname@example.org (Joachim Durchholz) (1998-12-19)|
|Re: yacc parse tree email@example.com (Sukru Cinar) (1998-12-21)|
|Re: yacc parse tree firstname.lastname@example.org (Sukru Cinar) (1998-12-22)|
|[3 later articles]|
|From:||email@example.com (M Sweger)|
|Date:||13 Dec 1998 13:54:39 -0500|
|Organization:||DataHaven Project, Inc (http://www.dhp.com)|
Josh Adams (firstname.lastname@example.org) wrote:
: I'm implementing an SQL "where" clause. To this end, I modified the
: SQL syntax checker in the O'Reilly Lex & Yacc book so that I have a
: working where clause syntax checker. I imagine that I will need to
: create some sort of parse tree consisting of nodes and leaves that
: represent ands, ors, comparators, literals, column names, etc. Any
: advice on how to do this? The book does not give an example of parse
: tree implementation.
I have the same book. Will there be a 2nd edition coming out? Perhaps
some examples of crude interpreters vs. compilers, or at least of
building the AST? What about examples of common solutions using Lex &
Yacc for common languages, such as Fortran with its column-specific
requirements, typeless languages, etc.? This would simplify the Dragon
Book, since I find it hard to believe that one would implement the
algorithms in that book verbatim. I think the examples in the Dragon
Book are sort of pseudocode to help you get the basic idea, not meant
to be implemented directly as written. Another area where I'm not sure
how it's really done is all the basic-block code optimization. I can
understand a good bit of it, but when it comes to actually doing it in
source code, it seems the compiler would be awfully slow if it used
those algorithms as given.
I'd like to see in the Lex & Yacc book the Unix "cut" and "paste"
utilities, whereby you cut or paste a specific field from a data file
using the field specified on the command line. This looks like Fortran
parsing, which is column specific, and is also a case where the tokens
fed to Lex are dynamic instead of being preprogrammed in the Lex file.
Another Unix command similar to this is "grep". In this case the
regular expression on the command line is passed to a (Lex-based)
program that scans a data file for matches. Here too the Lex tokens
would be dynamic, since they are passed to the Lex routine instead of
being programmed in.
It sure would be nice to see cases like the above in an updated book.
BTW, I did look for information on the "Early Algorithms" for dynamic
grammars that were popular in the 1970's by searching DejaNews in the
comp newsgroups, but the DejaNews archives don't go back that far.
Could you specify an author or a reference book as a starting point
into this?
And please write a 2nd, 3rd... edition of O'Reilly's Lex & Yacc book;
it was a big help in getting started.
[Someday I'll update lex&yacc and say more about what you do once you've
parsed stuff, although it's not on the calendar any time soon. Don't
hold your breath waiting for cut, paste, and grep, though. Lex is very
ill-suited for fixed column parsing; when I wrote a Fortran compiler I
used a yacc parser but a hand coded lexer with a lot of feedback from
the parser. grep is unrelated to lex even though the expression languages
are similar, lex compiles the patterns to a DFA while the various greps
either use an NFA (egrep and plain grep) or ad-hoc fixed string matching
(fgrep). And that's Earley's Algorithm, named after the guy who invented
it; see http://www.icsi.berkeley.edu/~stolcke/papers/cl95/node4.html ]