Related articles:
  Lex and Yacc - Availability? cullvax!drw@EDDIE.MIT.EDU (1987-08-12)
  Re: Lex and Yacc - Availability? harvard!seismo!elsie!ncifcrf!randy (1987-08-17)
  Re: Lex and Yacc - Availability? vern%lbl-helios@lbl-rtsg.arpa (Vern Paxson) (1987-08-19)
From: cullvax!drw@EDDIE.MIT.EDU (Dale Worley)
Newsgroups: comp.compilers
Date: 12 Aug 87 14:27:53 GMT
Organization: Cullinet Software, Westwood, MA, USA
I'm building a compiler for Algol 68, which presents some interesting
tokenizing and parsing problems. Right now I'm using a public-domain
Lex, but I've heard bad things said about Lex in general, usually that
it's slow. Does anyone out there know of a (semi-)public-domain
Lex-type program that is better? Or, more generally, is there a truly
better way to tokenize?
As far as Yacc goes, it seems to me that the power of LALR vs. LL
parsing, and the fact that it is table-driven, are big wins over and
above the development advantages. (Table-driven gives you: smaller
parsers for large languages, accessibility of the entire parse state
for error diagnostics, and the ability to build other tools that use
the same tables, e.g. for debugging the grammar.) People like to claim
that Yacc is slow, but has anyone really investigated this?
--
Dale Worley Cullinet Software ARPA: cullvax!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
[Most people I know write lexers by hand, because it's so easy. Lex does indeed
generate big slow lexers -- it's too powerful in the wrong way for lexing most
computer languages. I've also heard that yacc is slow, but have never been
persuaded that it makes any difference. What I'd really like to hear is how
you deal with Algol-68's two-level grammar without expanding it to a context
free grammar the size of a small planet. I've heard of no work on parsing
such grammars directly. -John]