Re: Q2. Why do you split a monolithic grammar into the lexing and parsing rules?

"Ron Pinkas" <Ron@xharbour.com>
28 Feb 2005 00:48:51 -0500

          From comp.compilers

From: "Ron Pinkas" <Ron@xharbour.com>
Newsgroups: comp.compilers
Date: 28 Feb 2005 00:48:51 -0500
Organization: xHarbour.com Inc.
References: 05-02-087
Keywords: parse
Posted-Date: 28 Feb 2005 00:48:51 EST

> So, I do not understand why we need the artificial obstacle, the
> 2nd level?


SimpLex (http://sourceforge.net/projects/simplex) is a generic, open-source
lexical engine which does not have such a restriction. It allows context
rules to distinguish reserved words used as reserved words from reserved
words used as identifiers. For example, in the xHarbour compiler
(http://www.xHarbour.org) it replaced a Flex-based scanner, specifically to
eliminate that restriction. Interestingly, it produced a faster scanner
that is about one quarter the size of the original Flex-generated scanner.
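
To illustrate the idea, here is a rough sketch in C. It is not SimpLex's
actual API; the statement_context flag and the token names are invented
for the example. The point is simply that a context rule amounts to
consulting the surrounding parse context before deciding whether a
reserved spelling is a keyword or an identifier:

#include <strings.h>   /* strcasecmp() (POSIX) */

enum token { TK_IDENTIFIER, TK_IF, TK_ELSE, TK_ENDIF };

struct kw { const char *spelling; enum token code; };

static const struct kw keywords[] = {
    { "IF", TK_IF }, { "ELSE", TK_ELSE }, { "ENDIF", TK_ENDIF },
    { 0, TK_IDENTIFIER }
};

/* Hypothetical context flag, maintained by the scanner driver: true only
   where the grammar allows a statement keyword to begin. */
static int statement_context = 1;

enum token classify_word( const char * word )
{
    const struct kw * k;

    for( k = keywords; k->spelling; k++ )
    {
        if( strcasecmp( word, k->spelling ) == 0 )
        {
            /* Context rule: the reserved spelling is a keyword only in
               statement position; anywhere else it is just an ordinary
               identifier (e.g. a local variable named "If"). */
            return statement_context ? k->code : TK_IDENTIFIER;
        }
    }

    return TK_IDENTIFIER;
}

With a rule like that, "IF" scanned where a statement may begin comes back
as a keyword token, while the same spelling used as a variable or field
name is returned as an ordinary identifier.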


Ron

