some basic questions about parsing email@example.com (Michael Herstine) (1996-09-15)
Re: some basic questions about parsing firstname.lastname@example.org (1996-09-16)
Re: some basic questions about parsing email@example.com (1996-09-16)
From: Michael Herstine <firstname.lastname@example.org>
Date: 15 Sep 1996 00:39:50 -0400
I'm starting to learn a bit about compilers and interpreters in
anticipation of an upcoming project where I work, and I'm getting
confused on the basics of parsing. Any references or tips would be
appreciated.

I'm reading Jim Holmes's "Object-Oriented Compiler Construction". In
the chapter on parsing, he refers to the theorem that to every
language generated by a context free grammar there corresponds a
(possibly non-deterministic) PDA that solves the recognition problem
for that language. He notes that we don't want to deal with
non-deterministic automata, and states that instead, we should find
the most powerful (deterministic, I suppose) automaton that we can,
and restrict ourselves to languages that can be recognized by such
automata. He then introduces the shift-reduce PDA, and states without
proof that the set of languages that shift-reduce PDAs can recognize
is called the LR(1) languages (and intimates that the LR(1) languages
live somewhere between regular languages and context free languages).
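To make the shift-reduce idea concrete, here is a minimal recognizer for a toy grammar of my own (E -> E + T | T, T -> n; this example is not from the book). The stack holds a mix of terminals and non-terminals; at each step we reduce if some rule's right-hand side matches the top of the stack, and otherwise shift the next token. Trying the longer rule first is a crude stand-in for the lookahead decisions a real LR(1) parser table would make:

```python
# Toy shift-reduce recognizer for:
#   E -> E + T | T
#   T -> n
RULES = [
    ("E", ["E", "+", "T"]),  # check the longer right-hand side first
    ("E", ["T"]),
    ("T", ["n"]),
]

def recognize(tokens):
    stack, rest = [], list(tokens)
    while True:
        for lhs, rhs in RULES:
            if stack[-len(rhs):] == rhs:
                # top of the stack matches a right-hand side: reduce
                stack[-len(rhs):] = [lhs]
                break
        else:
            if not rest:                   # nothing to reduce or shift
                return stack == ["E"]      # accept iff exactly one E remains
            stack.append(rest.pop(0))      # shift the next input token

print(recognize(["n", "+", "n"]))  # True
print(recognize(["n", "+"]))       # False
```

A real LR(1) parser replaces the "try every rule against the stack top" loop with a precomputed action table indexed by state and one token of lookahead, which is what makes it deterministic.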
I have several questions:
1. If we seek the most powerful deterministic automaton possible,
wouldn't that be a Turing machine? Since they can recognize unrestricted
phrase structure grammars, a much larger set of grammars than context
free, why don't we use them as parsers?
2. What sort of grammars generate LR(1) languages?
3. The production rules that he writes down for Pascal are clearly
context free, but seem to include rules that allow non-terminals to go to
the null string - doesn't that put them in the class of unrestricted
phrase structure grammars (they're not allowed in context free grammars,
are they)?
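On that last point, epsilon-productions are in fact routine in context-free grammars, and standard parser-construction algorithms handle them with a "nullable" computation. A quick sketch, using a toy grammar of my own (not from the book), of the usual fixed-point computation of which non-terminals can derive the empty string:

```python
# Toy grammar with an epsilon-production:
#   S -> A B
#   A -> a | epsilon
#   B -> b
GRAMMAR = {
    "S": [["A", "B"]],
    "A": [["a"], []],   # [] is an epsilon right-hand side
    "B": [["b"]],
}

def nullable(grammar):
    """Return the set of non-terminals that can derive the empty string."""
    result = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhses in grammar.items():
            if lhs in result:
                continue
            # lhs is nullable if some RHS consists only of nullable symbols
            # (an empty RHS satisfies this vacuously)
            if any(all(sym in result for sym in rhs) for rhs in rhses):
                result.add(lhs)
                changed = True
    return result

print(nullable(GRAMMAR))  # {'A'}
```

What epsilon-productions rule out is membership in the *context-sensitive* grammars in the strict sense (where productions may not shrink the sentential form), not the context-free ones.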
Thanks for any help,