Re: EBNF

vbdis@aol.com (VBDis)
1 Dec 2004 23:17:56 -0500

          From comp.compilers

Related articles
EBNF vbdis@aol.com (2004-11-20)
Re: EBNF nkavv@skiathos.physics.auth.gr (2004-11-28)
Re: EBNF martin@cs.uu.nl (Martin Bravenboer) (2004-11-28)
Re: EBNF vbdis@aol.com (2004-12-01)
Re: EBNF henry@spsystems.net (2004-12-11)
Re: EBNF vidar@hokstad.name (Vidar Hokstad) (2004-12-16)
Re: EBNF cfc@shell01.TheWorld.com (Chris F Clark) (2004-12-17)
Organization: AOL Bertelsmann Online GmbH & Co. KG http://www.germany.aol.com
References: 04-11-111
Keywords: parse
Posted-Date: 01 Dec 2004 23:17:56 EST

Martin Bravenboer <martin@cs.uu.nl> writes:


>This separation is inspired by performance concerns: by separating
>the input in tokens, the number of elements the parser has to
>consider is reduced.


The same optimization could work in the opposite direction: the parser
could tell the lexer which tokens are expected in each situation.
Normally, though, this will not yield an improvement, since a typical
lexer is already very efficient. Even restricting recognition to the
specific keywords that are actually expected will not speed up the
lexer much.
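To make the idea concrete, here is a minimal sketch (not from the post) of a lexer whose entry point accepts an optional set of expected token kinds from the parser and tries only those. The token kinds, patterns, and the grammar fragment in the usage line are all hypothetical.

```python
import re

# Hypothetical token inventory for a toy expression language.
TOKEN_PATTERNS = {
    "NUMBER": re.compile(r"\d+"),
    "IDENT":  re.compile(r"[A-Za-z_]\w*"),
    "PLUS":   re.compile(r"\+"),
    "LPAREN": re.compile(r"\("),
    "RPAREN": re.compile(r"\)"),
}

def next_token(text, pos, expected=None):
    """Return (kind, lexeme, new_pos).

    If `expected` is given, only those token kinds are tried, in that
    order -- the parser-driven restriction discussed above. With
    expected=None the lexer behaves conventionally and tries everything.
    """
    while pos < len(text) and text[pos].isspace():
        pos += 1
    if pos >= len(text):
        return ("EOF", "", pos)
    kinds = expected if expected is not None else TOKEN_PATTERNS
    for kind in kinds:
        m = TOKEN_PATTERNS[kind].match(text, pos)
        if m:
            return (kind, m.group(), m.end())
    raise SyntaxError(f"unexpected input at {pos}: {text[pos:pos+10]!r}")

# The parser knows that after '+' only a number, identifier or '(' can
# follow, so it asks the lexer to try just those kinds:
kind, lexeme, pos = next_token("foo + 42", 6,
                               expected=["NUMBER", "IDENT", "LPAREN"])
```

As the post notes, the restriction mostly prunes pattern attempts that a table-driven lexer would reject cheaply anyway, which is why the gain tends to be small.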


A scannerless parser, by contrast, may require many attempts in the
"tokenization" part, unless equivalents of scanner automata are
created for all ambiguous situations. I think even a scannerless
parser would profit from an integrated scanner for fixed tokens, and
should switch to flexible input recognition only where suggested or
unavoidable.
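One way to read the suggestion is the following sketch (my own illustration, with hypothetical token sets): fixed tokens such as keywords and operators are matched by a single longest-first prefix check, and the parser falls back to flexible, character-by-character recognition only for open-ended tokens like identifiers.

```python
# Fast path: fixed tokens, tried longest first so ":=" wins over ":".
# This stands in for the "integrated scanner for fixed tokens";
# a real implementation would use a trie or automaton instead.
FIXED = sorted(["begin", "end", ":=", "+", ";"], key=len, reverse=True)

def match_fixed(text, pos):
    """Try each fixed token as a literal prefix at `pos`."""
    for tok in FIXED:
        if text.startswith(tok, pos):
            return tok, pos + len(tok)
    return None, pos

def match_ident(text, pos):
    """Fallback: flexible, character-by-character recognition."""
    start = pos
    while pos < len(text) and (text[pos].isalnum() or text[pos] == "_"):
        pos += 1
    return (text[start:pos] or None), pos

def tokens(text):
    """Tokenize, preferring the fixed-token fast path.

    Note this sketch ignores the maximal-munch problem (it would split
    "endpoint" as "end" + "point") -- exactly the kind of ambiguous
    situation the post says needs a real scanner automaton.
    """
    pos = 0
    while pos < len(text):
        if text[pos].isspace():
            pos += 1
            continue
        tok, pos2 = match_fixed(text, pos)
        if tok is None:
            tok, pos2 = match_ident(text, pos)
        if tok is None:
            raise SyntaxError(f"stuck at position {pos}")
        yield tok
        pos = pos2
```

The point of the split is that the fixed-token path never needs backtracking, so the many-tries cost is confined to the flexible cases.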


DoDi

