Related articles
Good explanation of Recursive Ascent Parsing wanted aaronngray@gmail.com (Aaron Gray) (2022-09-28)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted aaronngray@gmail.com (Aaron Gray) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted anton@mips.complang.tuwien.ac.at (2022-09-30)
Re: Good explanation of Recursive Ascent Parsing wanted johann@myrkraverk.invalid (Johann 'Myrkraverk' Oskarsson) (2022-09-30)
Re: Good explanation of Recursive Ascent Parsing wanted antispam@math.uni.wroc.pl (2022-10-06)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-10-07)
From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.compilers
Date: Fri, 30 Sep 2022 08:05:43 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 22-09-018
Injection-Info: gal.iecc.com; posting-host="news.iecc.com:2001:470:1f07:1126:0:676f:7373:6970"; logging-data="23058"; mail-complaints-to="abuse@iecc.com"
Keywords: parse, architecture, LALR
Posted-Date: 30 Sep 2022 21:33:24 EDT
>[Maybe it'll even be slower since
>there is a lot more code so it's less likely to fit in a CPU cache.
>-John]
My experience is: Don't worry about I-cache misses until you have
them. I have seen cases where a lot of code was produced, but it had
enough temporal and spatial locality that I-cache misses were no
problem. I have also seen a case where we applied a technique that
reduced the code size (with the intent of reducing I-cache misses),
but the result was a slowdown from increased I-cache misses, because
the resulting code had worse spatial locality.
Looking at the numbers in Thomas J. Pennello (1986), "Very fast LR
parsing", one need not worry much about the I-cache: he describes an
experiment with a grammar with 126 productions, resulting in 192
states, 305 terminal transitions and 226 nonterminal transitions in
the LALR(1) parser; the resulting parse tables for the table-driven
parser take 2022 bytes, while the recursive-ascent parser takes 5602
bytes (including 397 bytes in common with the table-driven parser).
I-cache sizes these days are typically 32KB, so this parser would fit
completely within the I-cache (even if we assume a factor of two in
code size from going from 80286 code to AMD64 code), so one does not
even need to think about locality. Pennello also reports that a
158-production grammar for standard Pascal with a few minor extensions
yielded a 5326-line assembly program (which MASM 3.0 failed to
assemble); this should also easily fit into 32KB.
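Spelling out the arithmetic behind that claim (my own back-of-the-envelope
figure, not one from the paper): 2 * 5602 bytes = 11204 bytes, roughly 11KB,
so even with the pessimistic doubling for AMD64 the recursive-ascent code
would occupy only about a third of a typical 32KB L1 I-cache.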
Concerning recursive ascent, my dim memory of the article I read about
it a long time ago says that on a reduction a routine often does not
return to its direct caller, but to some ancestor. Since the
introduction of the return-stack predictor in microarchitectures in
the mid-1990s, implementing that directly causes at least one
misprediction (~20 cycles), and possibly follow-up mispredictions from
having the predictor stack go out of sync with the real returns. The
Wikipedia page says that on a reduction a counter is returned instead,
so you take the returns one at a time and count down until you arrive
at the desired level. This costs some cycles, too.
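For concreteness, here is a minimal sketch of that counting scheme in C,
for a toy grammar E -> E '+' 'n' | 'n' (the grammar, the state numbering
and all identifiers are my own invention for illustration, not taken from
any of the papers). Each LR state becomes a function and the call stack
doubles as the LR stack; a reduce sets a counter to the length of the
right-hand side, and every state function returns while the counter is
still positive, so the state that ends up on top performs the goto:

/* Toy recursive-ascent parser for  E -> E '+' 'n' | 'n'  using the
   "return a counter on reduction" technique. */
#include <stdio.h>
#include <stdlib.h>

static const char *input;   /* remaining input, e.g. "n+n+n" */
static int pop;             /* stack entries left to unwind after a reduce */
static int lhs;             /* nonterminal produced by the last reduce */

enum { NT_E = 1 };          /* E is the only nonterminal */

/* State 2: E -> n .          reduce, right-hand side of length 1 */
static void state2(void) { pop = 1; lhs = NT_E; }

/* State 4: E -> E '+' n .    reduce, right-hand side of length 3 */
static void state4(void) { pop = 3; lhs = NT_E; }

/* State 3: E -> E '+' . n */
static void state3(void)
{
    if (*input == 'n') { input++; state4(); }
    else { fprintf(stderr, "syntax error\n"); exit(1); }
    --pop;   /* this frame is part of the popped handle; pop stays > 0 */
}

/* State 1: S' -> E .   and   E -> E . '+' n */
static void state1(void)
{
    if (*input == '+') { input++; state3(); }
    else if (*input == '\0') { printf("accepted\n"); exit(0); }
    else { fprintf(stderr, "syntax error\n"); exit(1); }
    --pop;   /* this frame is part of the popped handle; pop stays > 0 */
}

/* State 0: S' -> . E,  E -> . E '+' n,  E -> . n   (bottom of the stack) */
static void state0(void)
{
    if (*input == 'n') { input++; state2(); }
    else { fprintf(stderr, "syntax error\n"); exit(1); }
    for (;;) {
        --pop;               /* always reaches 0 here: state 0 is the bottom */
        if (lhs == NT_E) state1();   /* goto(0, E) = state 1 */
        else { fprintf(stderr, "internal error\n"); exit(1); }
    }
}

int main(int argc, char **argv)
{
    input = (argc > 1) ? argv[1] : "n+n+n";
    state0();                /* exits via accept or syntax error */
    return 0;
}

With this scheme every counted return is a real return matching a real
call, so the return-stack predictor stays in sync; the price is the
decrement-and-test on every popped frame, the extra cycles mentioned
above.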
Overall, the performance of recursive ascent relative to table-driven
parsing is less advantageous than it may have been in the 1980s
(before branch prediction and I-caches). Pennello reports a speedup by
a factor of 5.5 for the recursive-ascent parser (which uses assembly
language) over "an interpretive LR parser written in Pascal and
compiled by a good optimizing Pascal compiler" on an 80286. It would
be interesting to see a performance comparison between today's Bison
skeletons compiled with a present-day optimizing C compiler and a
recursive-ascent parser on present-day hardware.
- anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/