Parse tables as code (WAS: LL(1) vs LALR(1) parsers)


Related articles
Re: LL(1) vs LALR(1) parsers ddean@dynastar.cs.princeton.edu (1995-11-28)
Re: LL(1) vs LALR(1) parsers pardo@cs.washington.edu (1995-11-29)
Re: LL(1) vs LALR(1) parsers bridges@cs.arizona.edu (1995-12-01)
Parse tables as code (WAS: LL(1) vs LALR(1) parsers) pardo@cs.washington.edu (1995-12-09)
From: pardo@cs.washington.edu (David Keppel)
Newsgroups: comp.compilers
Date: 9 Dec 1995 19:12:38 -0500
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 95-11-230 95-11-242 95-12-022
Keywords: parse, LALR, performance, bibliography

>pardo@cs.washington.edu writes:
>>[Penello, "Very Fast LR Parsing", CCC'86, pp. 145-151]


bridges@cs.arizona.edu (Patrick Bridges) writes:
>[Someone implemented Penello's scheme inside of Bison. Parsing was
> 3x-6x faster but the parsers were enormous because the tables were
> encoded as instructions.]


Fraser and Henry observed much the same in their BURS code generator.
They used a variety of table-compaction techniques to reduce the table
size: for example, identifying similar state sets and merging them,
replacing the non-matching entries with a test. From the technical
report:


"Table compaction is vital: for the VAX, the uncompacted data
for _each_ binary operator would take over 2MB. Even with
compaction, a typical BURS code generator for the VAX takes
over 43KB, and further reductions are generally regarded as
desirable. Compaction complicates table interpretation: what
is logically a simple three-dimensional array access ends up
taking about 40 VAX instructions.


This paper shows that it is better to represent BURS tables
with a combination of hard code and data. ... Predictably, the
hard-coded code generator is faster. A typical compilation
with interpreted tables took 49 VAX instructions/node for
labelling and 188 more to identify the code to output. With
hard code, these dropped to 13 and 35.


Less predictably, hard-coding saves space. The smallest
hard-coded BURS code generator for the VAX takes 21.4KB, less
than half of its interpreted predecessor. Hard-coding exposed
important opportunities for compression that were previously
hidden in the tables, and it allowed tailoring of encodings
for the values at hand."


The technical report ends with:


"These tricks apply to more than just BURS tables, and the hard
code generated to support them resembles that generated for
very different compiler tables [Penello 86, ...]. It is thus
perhaps time for a general-purpose table encoder, which could
accept tables and automatically perform some of the
transformations above."


The references I have are:


%A Christopher W. Fraser
%A Robert R. Henry
%T Hard-Coding Bottom-Up Code Generation Tables to Save Time and Space
%J Software \- Practice and Experience
%V 21
%N 1
%D January 1991
%P 1-12


%A Christopher W. Fraser
%A Robert R. Henry
%T Hard-Coding Bottom-Up Code Generation Tables to Save Time and Space
%R 90-01-03
%I University of Washington, Department of Computer Science and
Engineering
%C Seattle, Washington
%W tr-request@cs.washington.edu
%D January 1990


All very interesting stuff.


;-D on ( Table-driven research ) Pardo
--

