[3 earlier articles]
Re: Beginner help with LALR(1) closure firstname.lastname@example.org (1996-11-14)
Re: Beginner help with LALR(1) closure email@example.com (Francisco Arzu) (1996-11-14)
Re: Beginner help with LALR(1) closure firstname.lastname@example.org (1996-11-19)
Re: Beginner help with LALR(1) closure email@example.com (1996-11-19)
Re: Beginner help with LALR(1) closure firstname.lastname@example.org (Gianni Mariani) (1996-12-03)
Re: Beginner help with LALR(1) closure email@example.com (1996-12-07)
Re: Beginner help with LALR(1) closure firstname.lastname@example.org (1996-12-15)
From: email@example.com (Daniel J. Salomon)
Date: 15 Dec 1996 16:12:01 -0500
Organization: Computer Science, University of Manitoba, Winnipeg, Canada
References: 96-11-080 96-11-088 96-11-127 96-12-038
Keywords: parse, books, LALR
Daniel J. Salomon wrote:
> Their method is more efficient than the ones given in the Dragon book,
> but it is quite complex to understand from their paper.
Gianni Mariani <firstname.lastname@example.org> wrote:
> Do we care any more?  Are LR(1) tables so big that today's processors
> are unable to deal with them effectively?  (seen X lately?)
The method of DeRemer and Penello produces exactly the same LALR(1)
parse tables; it just computes them faster than the algorithms in the
Dragon book.  So only compiler implementors would care about this, and
only if they were reprocessing their grammars over and over.
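For readers following the thread subject: the item-closure step that
every LR-family table constructor repeats can be sketched as below.
The grammar, item representation, and function name are illustrative
assumptions for this sketch, not taken from DeRemer and Penello's paper.

```python
# Grammar: each nonterminal maps to its right-hand sides.
GRAMMAR = {
    "S": [("E",)],
    "E": [("E", "+", "T"), ("T",)],
    "T": [("(", "E", ")"), ("id",)],
}

def closure(items):
    """Expand a set of LR(0) items (lhs, rhs, dot) until stable:
    whenever the dot sits before a nonterminal B, add every item
    B -> . gamma for each production B -> gamma."""
    result = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(result):
            # Symbol after the dot; only nonterminals trigger expansion.
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                for prod in GRAMMAR[rhs[dot]]:
                    new_item = (rhs[dot], prod, 0)
                    if new_item not in result:
                        result.add(new_item)
                        changed = True
    return result

# Closure of the start item S -> . E pulls in all E- and T-productions.
start_state = closure({("S", ("E",), 0)})
```

This naive fixed-point loop is the quadratic-ish work that efficient
algorithms such as DeRemer and Penello's avoid redoing from scratch
when computing the LALR(1) lookahead sets.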
Grammars for modern programming languages seem to be getting bigger
than older grammars, and parser generation time is worse than linear
in the length of the grammar (perhaps as bad as n**2).  So maybe we
still need to worry about parser generation speed.  However, since
processors are getting faster too, maybe it all balances out.
Daniel J. Salomon -- salomon@cs.UManitoba.CA
Dept. of Computer Science / University of Manitoba
Winnipeg, Manitoba, Canada R3T 2N2 / (204) 474-8687