Re: Good explanation of Recursive Ascent Parsing wanted

antispam@math.uni.wroc.pl
Thu, 6 Oct 2022 15:30:53 -0000 (UTC)

          From comp.compilers

Related articles
Good explanation of Recursive Ascent Parsing wanted aaronngray@gmail.com (Aaron Gray) (2022-09-28)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted aaronngray@gmail.com (Aaron Gray) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-09-29)
Re: Good explanation of Recursive Ascent Parsing wanted anton@mips.complang.tuwien.ac.at (2022-09-30)
Re: Good explanation of Recursive Ascent Parsing wanted johann@myrkraverk.invalid (Johann 'Myrkraverk' Oskarsson) (2022-09-30)
Re: Good explanation of Recursive Ascent Parsing wanted antispam@math.uni.wroc.pl (2022-10-06)
Re: Good explanation of Recursive Ascent Parsing wanted 864-117-4973@kylheku.com (Kaz Kylheku) (2022-10-07)
From: antispam@math.uni.wroc.pl
Newsgroups: comp.compilers
Date: Thu, 6 Oct 2022 15:30:53 -0000 (UTC)
Organization: Aioe.org NNTP Server
References: 22-09-018 22-09-024
Keywords: architecture, performance
Posted-Date: 07 Oct 2022 13:04:49 EDT

Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
> >[Maybe it'll even be slower since
> >there is a lot more code so it's less likely to fit in a CPU cache.
> >-John]
>
> My experience is: Don't worry about I-cache misses until you have
> them. I have seen cases where a lot of code was produced, but it had
> enough temporal and spatial locality that I-cache misses were no
> problem, as well as a case where we applied a technique that reduced
> the code size (with the intent of reducing I-cache misses), but as a
> result we saw a slowdown from increased I-cache misses, because the
> resulting code had worse spatial locality.
>
> Looking at the numbers in 'Thomas J. Pennello (1986). "Very fast LR
> parsing"', one need not worry much about the I-cache: He describes an
> experiment with a grammar with 126 productions, resulting in 192
> states, 305 terminal transitions and 226 nonterminal transitions in
> the LALR(1) parser; the resulting parse tables for the table-driven
> parser have 2022 bytes, while the recursive-ascent parser has 5602
> bytes (including 397 bytes in common with the table-driven parser).
> I-cache sizes these days are typically 32KB, so this parser would be
> completely within the I-cache (even if we assume a factor of two from
> going from 80286 code to AMD64 code), so one does not even need to
> think about locality. Pennello also reports that a 158-production
> grammar for standard Pascal with a few minor extensions yielded a
> 5326-line assembly program (which MASM 3.0 failed to assemble); this
> should also easily fit into 32KB.
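
For anyone who has not seen the technique this thread is about, here is
a rough sketch of a recursive-ascent parser for a toy grammar
S -> 'a' S | 'b'.  The grammar, the state numbering, and the little
"reduction" struct are mine, purely for illustration; this is not code
from Pennello's paper.  Each LR state becomes a function, a shift
becomes a call, and a reduction returns through as many call frames as
the rule's right-hand side has symbols:

#include <stdio.h>
#include <stdlib.h>

/* Toy grammar (hypothetical, for illustration only):
 *     S -> 'a' S | 'b'
 * Recursive ascent: each LR state is a function.  A shift is a call to
 * the target state's function; a reduction returns through the call
 * frames of the states that hold the rule's right-hand side, and the
 * state left on top then performs the "goto" on the reduced
 * nonterminal. */

static const char *input;                  /* remaining input */

enum nonterminal { NT_S };

/* A reduction in flight: which nonterminal was recognized, and how
 * many more call frames still have to be popped for its right-hand
 * side. */
struct reduction { enum nonterminal nt; int pop; };

static void fail(const char *msg)
{
    fprintf(stderr, "parse error: %s\n", msg);
    exit(1);
}

/* State 3:  S -> b .       reduce by a rule of length 1 */
static struct reduction state3(void)
{
    return (struct reduction){ NT_S, 1 - 1 }; /* returning pops this frame */
}

/* State 4:  S -> a S .     reduce by a rule of length 2 */
static struct reduction state4(void)
{
    return (struct reduction){ NT_S, 2 - 1 };
}

/* State 2:  S -> a . S,  S -> . a S,  S -> . b */
static struct reduction state2(void)
{
    struct reduction r;
    if (*input == 'a')      { input++; r = state2(); } /* shift 'a' */
    else if (*input == 'b') { input++; r = state3(); } /* shift 'b' */
    else fail("expected 'a' or 'b'");
    while (r.pop == 0)   /* reduction ends here: do the goto on r.nt */
        r = state4();    /* goto(state 2, S) = state 4 */
    r.pop--;             /* returning pops this state's frame */
    return r;
}

/* State 0 (start):  S' -> . S,  S -> . a S,  S -> . b */
static void state0(void)
{
    struct reduction r;
    if (*input == 'a')      { input++; r = state2(); }
    else if (*input == 'b') { input++; r = state3(); }
    else fail("expected 'a' or 'b'");
    if (r.pop != 0)
        fail("reduction popped past the start state");
    /* goto(state 0, S) = state 1, which accepts at end of input */
    if (*input == '\0') { puts("accepted"); return; }
    fail("trailing input");
}

int main(int argc, char **argv)
{
    input = argc > 1 ? argv[1] : "aab";
    state0();
    return 0;
}

Every state carries its own copy of the dispatch logic, which is why
the directly coded parser comes out a few times larger than the tables
even for a small grammar, much as in Pennello's numbers above.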


The Karlsruhe team (Grosch and collaborators) reached somewhat
different conclusions.  When they added the extra features needed for a
production compiler, they noted slowdowns on bigger grammars when
translating to code.  OTOH, they reported quite good results from
table-driven parsers.  The reports were available online, but I do not
have the links handy (it is quite possible that the old links are
broken now).
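
For comparison, the table-driven version of the same toy grammar is
just a pair of tables plus one short driver loop.  Again this is my own
sketch; it is not the layout that either Pennello or the Karlsruhe
tools actually used, and real generators compress the tables:

#include <stdio.h>
#include <stdlib.h>

/* Same toy grammar, S -> 'a' S | 'b', table-driven: the per-state
 * logic lives in ACTION/GOTO tables; the driver is one short loop. */

enum { T_A, T_B, T_END };                    /* terminal codes */
enum { SHIFT, REDUCE, ACCEPT, ERROR };
struct action { int kind; int arg; };

/* rules:  0: S -> a S  (length 2),   1: S -> b  (length 1) */
static const int rule_len[2] = { 2, 1 };

static const struct action ACTION[5][3] = {
    /* state 0 */ { {SHIFT, 2}, {SHIFT, 3}, {ERROR, 0} },
    /* state 1 */ { {ERROR, 0}, {ERROR, 0}, {ACCEPT, 0} },
    /* state 2 */ { {SHIFT, 2}, {SHIFT, 3}, {ERROR, 0} },
    /* state 3 */ { {ERROR, 0}, {ERROR, 0}, {REDUCE, 1} },
    /* state 4 */ { {ERROR, 0}, {ERROR, 0}, {REDUCE, 0} },
};
/* goto on the single nonterminal S (-1 = impossible) */
static const int GOTO_S[5] = { 1, -1, 4, -1, -1 };

static int lex(const char **p)
{
    if (**p == 'a') { (*p)++; return T_A; }
    if (**p == 'b') { (*p)++; return T_B; }
    if (**p == '\0') return T_END;
    fprintf(stderr, "bad character\n");
    exit(1);
}

int main(int argc, char **argv)
{
    const char *p = argc > 1 ? argv[1] : "aab";
    int stack[256], sp = 0;
    stack[sp] = 0;                           /* start state */
    int tok = lex(&p);
    for (;;) {
        struct action act = ACTION[stack[sp]][tok];
        switch (act.kind) {
        case SHIFT:
            stack[++sp] = act.arg;           /* push target state */
            tok = lex(&p);
            break;
        case REDUCE:
            sp -= rule_len[act.arg];         /* pop the right-hand side */
            stack[sp + 1] = GOTO_S[stack[sp]];
            sp++;                            /* push the goto state */
            break;
        case ACCEPT:
            puts("accepted");
            return 0;
        default:
            fprintf(stderr, "parse error\n");
            return 1;
        }
    }
}

Here the per-state information lives in data rather than in code, so a
larger grammar grows the tables rather than the instruction footprint.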


Of course, modern machines tend to have larger caches than the
old ones.  But modern machines are also throughput-oriented
and may suffer more from latency.


--
                                                            Waldek Hebisch

