RE: LR-parser-based lexical analysis - does it work?
From: "Quinn Tyler Jackson" <firstname.lastname@example.org>
Date: 18 Oct 2002 23:07:52 -0400
Posted-Date: 18 Oct 2002 23:07:52 EDT
> * What is known on performance drawbacks of LR-parsing-based
> scanners vs. DFA-based scanners?
Well, when I compared timings of an interpreted scannerless LL(k) core
engine to a compiled scanner-style LR(1) core engine, parsing a small
subset of C++, I had these timing results:
($-grammar 1 had adaptive features; $-grammar 2 had no adaptive
features.)
Keep in mind that the LR(1) engine was 100% table-driven and compiled,
whereas the $-grammars were instantiated at run-time and interpreted
from what amounted to byte-code, with much of the profiled overhead
residing in the interpretation processes.
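To illustrate what "100% table-driven" means in the scanner case, here is a minimal sketch of a DFA-based scanner core loop. The states, character classes, and token kinds below are purely illustrative and are not taken from either engine discussed; the point is that each input character costs only a couple of array lookups, which is why a compiled table-driven scanner is hard to beat:

```cpp
#include <cassert>
#include <string>

// Illustrative token kinds -- not the actual engine's token set.
enum Token { TOK_IDENT, TOK_NUMBER, TOK_ERROR };

// Tiny DFA transition table: states x input classes -> next state.
// Input classes: 0 = letter/underscore, 1 = digit, 2 = other.
// States: 0 = start, 1 = in identifier, 2 = in number, 3 = dead.
static const int kNext[4][3] = {
    {1, 2, 3},  // start
    {1, 1, 3},  // identifier: letters and digits continue it
    {3, 2, 3},  // number: only digits continue it
    {3, 3, 3},  // dead state
};

static int classify(char c) {
    if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_')
        return 0;
    if (c >= '0' && c <= '9')
        return 1;
    return 2;
}

// Longest-match scan of one token starting at `pos`; advances `pos`
// past the lexeme. The inner loop is one table lookup per character.
Token scan(const std::string& in, size_t& pos) {
    int state = 0;
    int lastAccept = -1;   // last accepting state seen
    size_t lastPos = pos;  // position just past the last accepted char
    for (size_t i = pos; i < in.size(); ++i) {
        state = kNext[state][classify(in[i])];
        if (state == 3)
            break;
        lastAccept = state;
        lastPos = i + 1;
    }
    pos = lastPos;
    if (lastAccept == 1) return TOK_IDENT;
    if (lastAccept == 2) return TOK_NUMBER;
    return TOK_ERROR;
}
```

An interpreted, run-time-instantiated grammar engine has to do strictly more work per character than this loop, which is part of why the profiled overhead mentioned above sits in the interpretation process rather than in the grammar formalism itself.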
(Note: 4, 8, 16, 32 = number of classes parsed. Each class had 16
member functions, a ctor, and a dtor. Timings shown are on a 733 MHz
Win2K box with 256 MB RAM, with "real time" thread priority set to
reduce task switching. The metric shown is thousands of characters
parsed per second.)
Hope that is of some use.
Quinn Tyler Jackson