Subject: | RE: LR-parser-based lexical analysis - does it work?
From: | "Quinn Tyler Jackson" <qjackson@shaw.ca> |
Newsgroups: | comp.compilers |
Date: | 18 Oct 2002 23:07:52 -0400 |
Organization: | Compilers Central |
Keywords: | lex, parse |
Posted-Date: | 18 Oct 2002 23:07:52 EDT |
In-reply-to: | 02-10-030 |
> * What is known on performance drawbacks of LR-parsing-based
> scanners vs. DFA-based scanners?
Well, when I compared timings of an interpreted scannerless LL(k) core
engine to a compiled scanner-style LR(1) core engine, parsing a small
subset of C++, I had these timing results:
http://members.shaw.ca/qjackson/Parsing/ag_009.gif
$-grammar 1 had adaptive features; $-grammar 2 had no adaptive
features.
Keep in mind that the LR(1) engine was 100% table-driven and compiled,
whereas the $-grammars were instantiated at run-time and interpreted
from what amounted to byte-code, with much of the profiled overhead
residing in the interpretation process.
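To make "100% table-driven" concrete, here is a minimal sketch of that
style of LR core loop. It parses a toy grammar (E -> E '+' 'n' | 'n')
whose tables I built by hand for illustration; none of the identifiers
or tables below come from the actual engine:

#include <cstdio>
#include <vector>

enum Kind { SHIFT, REDUCE, ACCEPT, ERROR };
struct Action { Kind kind; int arg; };            // arg: target state or rule no.

// Terminals: 0 = 'n', 1 = '+', 2 = end marker.  Nonterminal: 0 = E.
// Rule 1: E -> E '+' 'n'    Rule 2: E -> 'n'
static const Action action[5][3] = {
    /* s0 */ {{SHIFT, 1}, {ERROR, 0},  {ERROR, 0}},
    /* s1 */ {{ERROR, 0}, {REDUCE, 2}, {REDUCE, 2}},
    /* s2 */ {{ERROR, 0}, {SHIFT, 3},  {ACCEPT, 0}},
    /* s3 */ {{SHIFT, 4}, {ERROR, 0},  {ERROR, 0}},
    /* s4 */ {{ERROR, 0}, {REDUCE, 1}, {REDUCE, 1}},
};
static const int gotoTable[5][1] = {{2}, {-1}, {-1}, {-1}, {-1}};
static const int ruleLen[3] = {0, 3, 1};          // RHS length per rule
static const int ruleLhs[3] = {0, 0, 0};          // LHS nonterminal per rule

static bool parse(const std::vector<int>& toks) {
    std::vector<int> states{0};                   // state stack only
    for (size_t i = 0;;) {
        Action a = action[states.back()][toks[i]];
        switch (a.kind) {
        case SHIFT:  states.push_back(a.arg); ++i; break;
        case REDUCE:                              // pop the RHS, take the GOTO
            states.resize(states.size() - ruleLen[a.arg]);
            states.push_back(gotoTable[states.back()][ruleLhs[a.arg]]);
            break;
        case ACCEPT: return true;
        default:     return false;                // syntax error
        }
    }
}

int main() {
    std::vector<int> toks{0, 1, 0, 2};            // "n + n" plus end marker
    std::printf("%s\n", parse(toks) ? "accepted" : "rejected");
}

The point of the comparison is that this whole loop is array indexing
and integer compares, whereas the interpreted $-grammar engine pays a
dispatch cost per byte-code operation on top of the parsing work
itself.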
(Note: 4, 8, 16, 32 = number of classes parsed. Each class had 16
member functions, a ctor, and a dtor. Timings shown are on a 733 MHz
Win2K box with 256 MB of RAM and "real time" thread priority set (to
reduce task switching). Metric shown is thousands of characters parsed
per second.)
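For anyone who wants to reproduce the test input, the note above pins
down only the shape of each class, so a generator amounts to something
like the sketch below. The member-function signatures and the timing
harness are my own stand-ins (the scan loop marks where the engine's
actual parse call would go), and the priority step is only noted in a
comment, to keep the sketch portable:

#include <chrono>
#include <cstdio>
#include <initializer_list>
#include <string>

// Emit numClasses classes, each with a ctor, a dtor, and 16 member
// functions, per the note above; the signatures themselves are invented.
static std::string makeInput(int numClasses) {
    std::string src;
    for (int c = 0; c < numClasses; ++c) {
        std::string id = "C" + std::to_string(c);
        src += "class " + id + " {\npublic:\n";
        src += "    " + id + "();\n";             // the ctor
        src += "    ~" + id + "();\n";            // the dtor
        for (int f = 0; f < 16; ++f)              // 16 member functions
            src += "    int f" + std::to_string(f) + "(int, int);\n";
        src += "};\n";
    }
    return src;
}

int main() {
    // The original run also set "real time" thread priority on the Win2K
    // box (SetThreadPriority on Win32) to reduce task switching.
    for (int n : {4, 8, 16, 32}) {
        std::string src = makeInput(n);
        volatile size_t sink = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (char ch : src) sink += (ch == ';');  // stand-in for the parse call
        auto t1 = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(t1 - t0).count();
        if (secs > 0)                             // metric: thousands of CPS
            std::printf("%2d classes: %zu chars, %.0f KCPS\n",
                        n, src.size(), src.size() / secs / 1000.0);
    }
}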
Hope that is of some use.
--
Quinn Tyler Jackson
http://members.shaw.ca/qjackson/
http://members.shaw.ca/jacksonsolutions/