new language syntax email@example.com (Aleksey Beregov) (2001-08-24)
Re: new language syntax firstname.lastname@example.org (2001-08-25)
Re: new language syntax email@example.com (2001-08-25)
Re: new language syntax firstname.lastname@example.org (2001-09-21)
Re: new language syntax email@example.com (HSM) (2001-09-25)
Re: new language syntax firstname.lastname@example.org (Ralph Boland) (2001-09-26)
Re: new language syntax email@example.com (Joachim Durchholz) (2001-10-06)
Re: new language syntax firstname.lastname@example.org (2001-10-20)
Re: new language syntax email@example.com (Chris F Clark) (2001-11-05)
From: "Joachim Durchholz" <firstname.lastname@example.org>
Date: 6 Oct 2001 16:35:11 -0400
References: 01-08-138 01-08-144 01-09-097 01-09-120
Posted-Date: 06 Oct 2001 16:35:11 EDT
Ralph Boland <email@example.com> wrote:
> The parser generated should be LALR(1) or LR(1). (A Must) There
> shouldn't be any clever tricks to get around grammars that are not
> LR(1). If I can't write a LR(1) grammar for my application I lose.
I do not understand what you mean here. Whether you can write an LR(1)
grammar or not is entirely up to your experience with defining
languages and writing grammars for them; the capabilities of the
parser generator don't enter into it. What the parser generator can do
is support you by giving useful error messages for the conflicts.
(This is surprisingly hard, and one of the reasons I have started to
consider alternatives to LALR parsers.)
> The scanner table should be either a finite state machine or be a LR
> based parser table.
LR parser tables *are* the transition tables for a finite state
machine. (An LR parser just runs a set of FSMs in parallel, one for
each nonterminal that's currently open.)
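To make the "tables are just an FSM" point concrete, here is a minimal sketch of a table-driven scanner (not from the original post; the character classes, states, and token names are invented for illustration). The whole scanner is one transition table plus a tiny driver loop; generating a scanner for a new language means regenerating the table, not the driver.

```python
def char_class(c):
    # Map each input character to a small character class.
    if c.isdigit():
        return "digit"
    if c.isalpha():
        return "alpha"
    return "other"

# TRANSITIONS[state][class] -> next state; missing entry means "no move".
# States: 0 = start, 1 = inside an integer, 2 = inside an identifier.
TRANSITIONS = {
    0: {"digit": 1, "alpha": 2},
    1: {"digit": 1},
    2: {"alpha": 2, "digit": 2},
}
ACCEPTING = {1: "INT", 2: "IDENT"}

def scan_one(text, pos):
    """Run the DFA from `pos`; return (token_kind, end_index) of the
    longest match, or None if no token starts at `pos`."""
    state, last_accept, i = 0, None, pos
    while i < len(text):
        nxt = TRANSITIONS.get(state, {}).get(char_class(text[i]))
        if nxt is None:
            break
        state, i = nxt, i + 1
        if state in ACCEPTING:
            last_accept = (ACCEPTING[state], i)
    return last_accept
```

An LR parser driver has the same shape, except that its table is indexed by (state, symbol) and its entries are shift/reduce/goto actions instead of plain next-states.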
> The scanner and parser should read these files in order to do their
> work. Thus they do not need to change when the scanner/parser tables
> etc. are generated for a new language.
I'm not sure whether this is a sensible requirement. If you're doing a
compiler, you'll have statically linked code for generating semantic
actions anyway. (Unless you're doing *very* clever things like
plugging in external modules for said semantic actions. Could be
interesting.) The downside of external files is that they must be
kept consistent with each other and with the application proper. The
users will need an installation program to get things working
properly; getting this right requires some forethought.
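One way to keep external table files consistent with each other and with the application is to stamp every generated file with a fingerprint of the grammar it came from, and have the driver refuse mismatched files. This is a hypothetical sketch, not anything proposed in the thread; the file format and function names are invented.

```python
import hashlib
import json

def table_fingerprint(grammar_text):
    # A short, stable hash of the grammar source identifies one generation run.
    return hashlib.sha256(grammar_text.encode("utf-8")).hexdigest()[:16]

def write_tables(path, tables, grammar_text):
    # The generator writes the fingerprint alongside the tables.
    blob = {"fingerprint": table_fingerprint(grammar_text), "tables": tables}
    with open(path, "w") as f:
        json.dump(blob, f)

def load_tables(path, expected_fingerprint):
    # The driver checks the fingerprint before trusting the tables.
    with open(path) as f:
        blob = json.load(f)
    if blob["fingerprint"] != expected_fingerprint:
        raise RuntimeError(
            "%s was generated from a different grammar" % path)
    return blob["tables"]
```

If the scanner tables and parser tables both carry the same fingerprint, a mixed-up installation fails loudly at load time instead of misparsing silently.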
> The idea is to be able to use syntactic information about syntactically
> correct programs to do things like language based editing, data
> compression, version control etc., systematically and efficiently
> using language specific information.
That's a reasonable thing to do. If you want to be able to handle
various languages, the approach based on external files makes sense.
However, you'll still have semantic actions.
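The "plugging in external modules for semantic actions" idea above can be sketched as a dispatch table: the table-driven parser core stays fixed, and each application supplies its own mapping from rule names to action callables. This is an illustrative sketch only; the rule names and action signatures are invented.

```python
def reduce_rule(rule_name, children, actions):
    """Apply the semantic action registered for `rule_name`.
    If none is registered, default to passing the children through."""
    action = actions.get(rule_name, lambda kids: kids)
    return action(children)

# One pluggable action module: evaluate arithmetic during reduction.
EVAL_ACTIONS = {
    "num": lambda kids: int(kids[0]),        # NUM -> value
    "add": lambda kids: kids[0] + kids[2],   # expr '+' term -> sum
}
```

A language-based editor could plug in a module that builds syntax-tree nodes instead, and a compressor one that emits rule numbers, all against the same parser tables.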