Recursive Descent Parsers and YACC

Jeff Prothero <jsp@milton.u.washington.edu>
Fri, 23 Nov 90 12:19:14 -0800

Newsgroups: comp.compilers
From: Jeff Prothero <jsp@milton.u.washington.edu>
Keywords: parse, yacc, performance
Organization: Compilers Central
Date: Fri, 23 Nov 90 12:19:14 -0800

grosch@gmdka.uucp (Josef Grosch) writes:
>I did a comparison of generated parsers on a SUN 3 with MC 68020 processor
>(excluding scanning):
>
>tool                                                  speed (tokens per second)
>
>Yacc                                                   16,000
>Lalr - our LALR(1) parser generator                     35,000
>Ell  - our LL(1) recursive descent parser generator     55,000
>
>Pure parsing can be made faster by a factor of 2, 3, or more compared to Yacc.
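
To make the quoted numbers concrete, here is roughly the kind of code an
LL(1) recursive-descent generator emits (a hand-written sketch for a toy
expression grammar, not Ell's actual output; the token names and the canned
token stream standing in for a scanner are mine). Each nonterminal becomes
a C procedure, and the LL(1) decisions are compiled into plain switch and
while tests on the lookahead, so there is no per-step table interpretation:

#include <stdio.h>
#include <stdlib.h>

/* Grammar:  expr   -> term   { '+' term }
 *           term   -> factor { '*' factor }
 *           factor -> NUM | '(' expr ')'
 */
enum tok { NUM, PLUS, STAR, LPAR, RPAR, END };

/* canned token stream standing in for a real scanner:  1 + 2 * ( 3 ) */
static enum tok input[] = { NUM, PLUS, NUM, STAR, LPAR, NUM, RPAR, END };
static int pos;
static enum tok lookahead;

static enum tok next_token(void) { return input[pos++]; }

static void error(const char *msg) { fprintf(stderr, "%s\n", msg); exit(1); }

static void match(enum tok t)          /* consume one expected token */
{
    if (lookahead != t) error("syntax error");
    lookahead = next_token();
}

static void expr(void);                /* forward: factor() recurses into expr() */

static void factor(void)
{
    switch (lookahead) {               /* LL(1) decision, compiled in */
    case NUM:  match(NUM); break;
    case LPAR: match(LPAR); expr(); match(RPAR); break;
    default:   error("expected NUM or '('");
    }
}

static void term(void)
{
    factor();
    while (lookahead == STAR) { match(STAR); factor(); }
}

static void expr(void)
{
    term();
    while (lookahead == PLUS) { match(PLUS); term(); }
}

int main(void)
{
    lookahead = next_token();
    expr();
    if (lookahead != END) error("trailing tokens");
    puts("parsed OK");
    return 0;
}

There is procedure-call overhead, but no table decoding at all, which is
presumably a large part of why the LL(1) generator comes out ahead of an
interpreted LALR(1) driver in figures like the ones above.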


Note that "LALR(1) parser generator" doesn't have to mean "table-driven".
I don't have the reference handy, but someone (I believe MetaWare's Tom
Pennello?) has obtained big speedups by using procedural implementations of
the LALR(1) tables. Robert Henry has done much the same thing with code
generation tables in CODEGEN (UW technical report), and observed that table
compression and lookup techniques are essentially application-independent,
and that a general tool could be developed to factor, compress, and
implement tables, including generating fast procedural implementations.
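
One way to picture a procedural implementation of LALR(1) tables is
"recursive ascent": every parser state becomes a procedure, and the
action/goto entries become compiled-in compares and calls instead of table
lookups. (This is only meant to illustrate the flavor of the idea; I'm not
claiming it is exactly the scheme Pennello or Henry used.) A minimal sketch
for the grammar E -> E '+' NUM | NUM, again with a canned token stream in
place of a scanner, looks like this. The convention is that a reducing
state returns the length of its right-hand side, the count is decremented
as the call stack (which mirrors the LR state stack) unwinds, and the state
where the count reaches 1 performs the goto; with only one nonterminal the
goto target here is fixed, whereas in general the reducing state would also
record which left-hand side it produced:

#include <stdio.h>
#include <stdlib.h>

enum tok { NUM, PLUS, END };

/* canned token stream standing in for a real scanner:  1 + 2 + 3 */
static enum tok input[] = { NUM, PLUS, NUM, PLUS, NUM, END };
static int pos;
static enum tok lookahead;
static int accepted;

static enum tok next_token(void) { return input[pos++]; }
static void error(const char *msg) { fprintf(stderr, "%s\n", msg); exit(1); }

/* state 2:  E -> NUM .          reduce, RHS length 1 */
static int state2(void) { return 1; }

/* state 4:  E -> E '+' NUM .    reduce, RHS length 3 */
static int state4(void) { return 3; }

/* state 3:  E -> E '+' . NUM    shift NUM; state 4 then reduces */
static int state3(void)
{
    if (lookahead != NUM) error("expected number after '+'");
    lookahead = next_token();           /* shift NUM */
    return state4() - 1;                /* keep unwinding toward the goto */
}

/* state 1:  E on top of the stack; accept on end of input, shift on '+' */
static int state1(void)
{
    int r;
    if (lookahead == END) { accepted = 1; return 0; }
    if (lookahead != PLUS) error("expected '+' or end of input");
    lookahead = next_token();           /* shift '+' */
    r = state3();
    return accepted ? 0 : r - 1;        /* no goto entries in this state */
}

/* state 0:  start state; shift NUM, then handle goto(0, E) = state 1 */
static int state0(void)
{
    int r;
    if (lookahead != NUM) error("expected number");
    lookahead = next_token();           /* shift NUM */
    r = state2();
    for (;;) {
        if (accepted) return 0;
        if (r > 1) return r - 1;        /* reduction still popping states */
        r = state1();                   /* count hit 1: goto on E lands here */
    }
}

int main(void)
{
    lookahead = next_token();
    state0();
    puts(accepted ? "parsed OK" : "parse failed");
    return accepted ? 0 : 1;
}

The shift/reduce decisions here are ordinary compare-and-branch code rather
than indexing into packed tables, which is presumably where most of the
reported speedup comes from; the price is a much larger and far less
readable parser than the table-driven equivalent.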


It seems to me procedural table implementations bring us full circle: we
started with fast, hand-written ("random") code, then switched to slower
table-driven code as an aid to portability and maintainability. Now that we
can generate the unportable, unmaintainable code automatically from
declarative specifications, the speed consideration is coming to the fore
again. No?


Jeff Prothero
jsp@u.washington.edu