Related articles:
Regular expressions in lexing and parsing ed_davis2@yahoo.com.dmarc.email (Ed Davis) (2019-05-17)
Regular expressions in lexing and parsing jamin.hanson@googlemail.com (Ben Hanson) (2019-05-18)
Re: Regular expressions in lexing and parsing DrDiettrich1@netscape.net (Hans-Peter Diettrich) (2019-05-21)
Re: Regular expressions in lexing and parsing drikosev@gmail.com (Ev. Drikos) (2019-05-23)
Re: Regular expressions in lexing and parsing christopher.f.clark@compiler-resources.com (Christopher F Clark) (2019-06-17)
Re: Regular expressions in lexing and parsing quinn.jackson@ieee.org (Quinn Jackson) (2019-06-18)
Re: Regular expressions in lexing and parsing quinn.jackson@ieee.org (Quinn Jackson) (2019-06-18)
Re: Regular expressions in lexing and parsing 847-115-0292@kylheku.com (Kaz Kylheku) (2019-06-18)
Re: Regular expressions in lexing and parsing christopher.f.clark@compiler-resources.com (Christopher F Clark) (2019-06-18)
From: Quinn Jackson <quinn.jackson@ieee.org>
Newsgroups: comp.compilers
Date: Tue, 18 Jun 2019 11:56:11 -0700
Organization: Compilers Central
References: 19-06-005
Keywords: lex, parse, comment
Posted-Date: 18 Jun 2019 15:23:31 EDT
In-Reply-To: 19-06-005
On Tue, Jun 18, 2019 at 10:09 AM Christopher F Clark
<christopher.f.clark@compiler-resources.com> wrote:
>
> Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
>
> > I think that regular expressions are primarily usable in *formal* token
> > grammars. In contrast to parsers I'm missing *complete* formal grammars
> > for the tokens, including whitespace, comments, strings and other
> > special language elements, and how these fit together.
To which Christopher F Clark
<christopher.f.clark@compiler-resources.com> responded in part:
> I definitely would not discount the use of formal grammars both for
> lexing (i.e. formal regexes) and parsing and the use of tools.
> When we did Yacc++, we specifically made our lexer syntax look like
> our parser syntax (and used LR yacc-style productions extended with
> regular expressions on the right hand side) to make the notation
> formal, regular, and simple. I would call that one of our successes.
When it comes to adaptive grammars, since the terminal and
non-terminal sets can have items added and removed dynamically during
a parse, it makes a lot of sense to treat lexing and parsing as one
and the same thing. I would add that, in practice, even AST traversal
and decoration can be included in this family. Treating them
orthogonally is, in my experience, less useful overall, and unifying
them makes such grammars much simpler to read at their level of
intent.
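To make the idea concrete, here is a minimal Python sketch (mine, not Quinn's actual adaptive-grammar machinery; all names are illustrative) of a scannerless parser in which terminals and nonterminals share one rule table, and a production can add a new terminal mid-parse:

```python
import re

class AdaptiveParser:
    """Scannerless sketch: lexical rules (regexes) and syntactic rules
    live in one table that may grow while the parse is running."""

    def __init__(self, text):
        self.text = text
        self.pos = 0
        # One namespace for lexing and parsing alike.
        self.rules = {"IDENT": re.compile(r"[A-Za-z_]\w*")}

    def match(self, name):
        """Try to match the named terminal at the cursor."""
        m = self.rules[name].match(self.text, self.pos)
        if m:
            self.pos = m.end()
            return m.group(0)
        return None

    def declare(self, name, pattern):
        # Adaptive step: a production extends the terminal set
        # during the parse, not before it.
        self.rules[name] = re.compile(pattern)

p = AdaptiveParser("let x")
p.declare("LET", r"let\s+")   # grammar grows while parsing
kw = p.match("LET")           # "let "
name = p.match("IDENT")       # "x"
```

Because lexing is just matching at the cursor, "adding a token kind" and "adding a production" are the same operation on the same table.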
Pertinent to Hans-Peter's quote above: specifying the grammar right
down to the whitespace is a good thing, especially when parsing
something other than a recent computer language, such as -- oh, I
don't know -- English or some other natural language. (Outside the
scope of this list, however.)
> So, if someone were to seriously argue against formal grammars (and
> formal lexing specs too), I would challenge them to prove their
> contentions.
I echo Chris's challenge.
> I will let someone else have the soapbox now....
Be careful what you wish for. ;-)
--
Quinn Jackson CSci MIScT SMACM SMIEEE FRSA
LinkedIn: http://ca.linkedin.com/in/quinnjackson/
ResearchGate: http://researchgate.net/profile/Quinn_Jackson/
[I agree with the sentiment to use formal grammars for both lex and
parse, but it always has seemed to me that running them separately as
coroutines makes it easier to deal with comments and whitespace
without having to hang them on every token definition. I realize that
the more context sensitive the language, the more it is likely to make
sense to combine them. -John]
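[John's coroutine arrangement can be sketched in a few lines of Python (a generator standing in for the lexer coroutine; the token set and language here are invented for illustration). Whitespace and comments are consumed once, inside the lexer, so no parser rule ever mentions them:

```python
import re

# One alternation: layout, comment, number, operator.
TOKEN = re.compile(r"\s+|#[^\n]*|(\d+)|([+\-*/])")

def lex(src):
    """Generator lexer: the parser pulls tokens on demand.
    Whitespace and comments are swallowed here, once."""
    pos = 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character at {pos}")
        pos = m.end()
        if m.group(1):
            yield ("NUM", int(m.group(1)))
        elif m.group(2):
            yield ("OP", m.group(2))
        # layout/comment: matched but nothing yielded

def sum_numbers(src):
    """Trivial 'parser': consumes the token stream and never
    sees a space or a comment."""
    return sum(val for kind, val in lex(src) if kind == "NUM")

result = sum_numbers("1 + 2  # comment\n+ 3")   # → 6
```

The parser's grammar stays clean because the layout-handling decision is made exactly once, at the coroutine boundary. -ed.]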