GLR tools? email@example.com (1992-05-12)
From: firstname.lastname@example.org (Stephen P Spackman)
Keywords: parse, tools, question
Organization: University of Chicago CILS
Date: Tue, 12 May 1992 00:10:07 GMT
Has anyone tried to build CS-style parsing tools around GLR (Tomita,
corrected Tomita) parsers? We use them in linguistics but they always seem
to be implemented on top of Common Lisp which makes it hard to judge their
speed relative to yaccoids.
(A GLR parser is essentially an (S?)LR parser that uses a graph-structured
stack to cope with nondeterminism. This means that it has (in the
corrected algorithm, and if I understand the result properly) O(n^3)
worst-case performance, *but* (a) it should perform like an SLR parser on
SLR input, and (b) it computes all parses of an ambiguous input, even
regular infinite ones (as a cyclic parse "tree"). They also adapt well to
unification-style attribution schemes, of course, and these are often used
in linguistics to kill unwanted parses. The reference is something like
M. Tomita ed. _Generalised LR Parsing_, Kluwer '91, but I don't have a
copy in front of me to check.)
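The core idea can be sketched without the graph-structured stack: whenever
shift and reduce both apply, fork the stack and pursue every alternative.
This toy recognizer (my own illustration, not Tomita's algorithm; the
grammar S -> S + S | a is a made-up example) keeps plain stack copies, which
is exponential where real GLR merges forks into a shared graph to stay
polynomial, but it shows how all parses of an ambiguous input get found.

```python
# Toy nondeterministic shift-reduce recognizer illustrating the idea
# behind GLR: fork the parse stack at every shift/reduce choice point.
# A real GLR parser merges these forks into a graph-structured stack;
# this sketch keeps independent copies, so it is exponential but short.
# Hypothetical ambiguous grammar (not from the post): S -> S + S | a
GRAMMAR = [("S", ["S", "+", "S"]), ("S", ["a"])]

def parses(tokens):
    """Count distinct parses by exploring every shift/reduce choice."""
    accepted = 0
    frontier = [((), 0)]              # forks: (stack, input position)
    while frontier:
        stack, pos = frontier.pop()
        if list(stack) == ["S"] and pos == len(tokens):
            accepted += 1             # one complete parse of the input
            continue
        if pos < len(tokens):         # fork: shift the next token
            frontier.append((stack + (tokens[pos],), pos + 1))
        for lhs, rhs in GRAMMAR:      # fork: reduce by any matching rule
            n = len(rhs)
            if n <= len(stack) and list(stack[-n:]) == rhs:
                frontier.append((stack[:-n] + (lhs,), pos))
    return accepted

print(parses(list("a+a")))    # 1 parse
print(parses(list("a+a+a")))  # 2 parses: (a+a)+a and a+(a+a)
```

Each accepting action sequence corresponds to a distinct bottom-up parse, so
the count is the number of parse trees; on "a+a+a" the grammar's ambiguity
shows up as two surviving forks.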
Having a tool that handles anything CF efficiently (and in a nice cleanly
engineered way, which is more important) would be a boon at times....
stephen p spackman Center for Information and Language Studies
email@example.com University of Chicago