Parser classifications? email@example.com (Alexander Morou) (2009-08-11)
Re: Parser classifications? firstname.lastname@example.org (Kenneth 'Bessarion' Boyd) (2009-08-13)
Re: Parser classifications? email@example.com (Alexander Morou) (2009-08-13)
Re: Parser classifications? firstname.lastname@example.org (Kenneth 'Bessarion' Boyd) (2009-08-16)
Re: Parser classifications? email@example.com (Alexander Morou) (2009-08-17)

From: "Kenneth 'Bessarion' Boyd" <firstname.lastname@example.org>
Date: Thu, 13 Aug 2009 09:55:03 -0700 (PDT)
Posted-Date: 13 Aug 2009 12:58:10 EDT
On Aug 11, 6:58 pm, Alexander Morou <alexander.mo...@gmail.com> wrote:
> I think I've solved the major problem: finding a way to evaluate the
> grammar in a deterministic manner, that can be represented on a
> machine with finite limitations.
> I'm not very good at explaining things in a formal manner, so if I'm a
> bit cryptic, I apologize. So far, my goal has been to break a complex
> problem down into smaller steps. Here's an overview:
> 1. ....
> 2. Using the above deterministic model of the productions, define
> a lazily evaluated, also deterministic, view of the language from
> the start production. To determine the look ahead at any given
> point, the process is further divided into two phases:
> a. First evaluate the given state-trail used to reach a given
> production. Due to finite limitations, this trail only needs
> to be aware of the rules that are active in the current path
> (for follow set inclusion) and the individual states of that
> rule which can be used in place of that rule in the path (since
> they point back to it). The segmented view does not provide
> enough context information for parse graph production, but it
> does provide ample data to give the valid set of tokens at any
> given state of the language.
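Phase (a) can be sketched roughly as follows. This is a hypothetical illustration in Python (the project itself is in C#); the grammar, the trail representation as (rule, alternative, dot) frames, and all names here are my own stand-ins, not the poster's actual design. The key point it demonstrates is that only the rules active on the current trail need to be consulted to get the valid token set, including follow-set inclusion when the innermost rule can complete:

```python
# Hypothetical sketch: computing the valid token set at the end of a
# state trail, using only the rules active on that trail.
GRAMMAR = {
    # rule -> list of alternatives, each a sequence of terminals/rule names
    "S": [["A", "end"]],
    "A": [["(", "A", ")"], ["x"]],
}
TERMINALS = {"(", ")", "x", "end"}

def first(symbol):
    """FIRST set of a single grammar symbol (no epsilon rules here)."""
    if symbol in TERMINALS:
        return {symbol}
    out = set()
    for alt in GRAMMAR[symbol]:
        out |= first(alt[0])
    return out

def valid_tokens(trail):
    """Each trail entry is (rule, alternative index, dot position).
    Walk from the innermost frame outward; the first frame with symbols
    remaining after the dot supplies the valid tokens. Popping to an
    outer frame is exactly the follow-set inclusion described above."""
    for rule, alt_idx, dot in reversed(trail):
        alt = GRAMMAR[rule][alt_idx]
        if dot < len(alt):
            return first(alt[dot])
    return set()  # trail fully consumed: end of input

# In S -> A end, inside A -> ( A ), just after "(":
print(valid_tokens([("S", 0, 0), ("A", 0, 1)]))  # {"(", "x"}
```

With the dot after the inner A instead, the same walk yields {")"} from the enclosing frame, which is the follow-set behavior in question.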
Looking ahead and seeing the performance concerns:
* May I assume this *is* being done incrementally, in a way that
calculates each step along the state-trail once?
* If not, do we have a strong memory conservation design requirement
that mandates this?
(I see this is being written in C#, so I'll hold off on commentary
specific to C++.)
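The incremental evaluation being asked about amounts to memoizing the analysis of each trail prefix, so that extending a trail by one state reuses all prior work. A minimal sketch (hypothetical Python; the per-step "analysis" is a trivial stand-in, and the counter exists only to show each step is computed once):

```python
from functools import lru_cache

calls = {"steps": 0}  # counts how many per-step analyses actually run

def step(prior, state):
    """Stand-in for the real per-step work along the trail."""
    calls["steps"] += 1
    return prior | {state}

@lru_cache(maxsize=None)
def analyze_prefix(trail):
    """`trail` is a tuple of state ids. The analysis of a prefix is
    built from the cached analysis of the next-shorter prefix plus
    exactly one new step."""
    if not trail:
        return frozenset()
    return step(analyze_prefix(trail[:-1]), trail[-1])

analyze_prefix(("s0", "s1", "s2"))  # 3 steps computed
analyze_prefix(("s0", "s1", "s3"))  # shared prefix reused: 1 more step
print(calls["steps"])  # 4, not 6
```

If memory conservation were the binding constraint instead, the cache would be bounded or dropped and prefixes recomputed, which is the trade-off the question is probing.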
> Does anyone here know what class of grammar this will be by the time
> I'm done?
You're setting this up to support arbitrarily long lookahead. I
assume that if I'm wrong that this approach could support LL(infinity)
(resource limitations aside), an actual expert will correct me.
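For context, a classic example of a grammar that is not LL(k) for any fixed k, but is straightforward with arbitrary lookahead, is S -> A "b" | A "c" with A -> "a" A | "a": the deciding token sits past every "a". A toy sketch (hypothetical Python, my own illustration rather than anything from the project):

```python
# S -> A "b" | A "c",  A -> "a" A | "a"
# No fixed-k lookahead can choose between S's alternatives, because the
# distinguishing token follows an unbounded run of "a"s.
def parse_S(tokens):
    """Return which alternative of S matches, scanning arbitrarily far."""
    i = 0
    while i < len(tokens) and tokens[i] == "a":
        i += 1
    if i == 0 or i != len(tokens) - 1:
        return None  # need >= 1 "a" and exactly one trailing token
    return {"b": 'S -> A "b"', "c": 'S -> A "c"'}.get(tokens[i])

print(parse_S(["a"] * 5 + ["b"]))  # S -> A "b"
```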
> I've tried reading other articles about LL-class parsers, but I can't
> seem to find a specific example of something similar. Perhaps I'm
> looking for the wrong thing. Knowing how to refer to the kind of
> parser used is helpful for discussion purposes.
> PPS: The above works a lot with paths and set construction; what would
> a formal description of its function look like? ....
A formal description would be a state machine. I think your
difficulty in searching the literature is the same as mine:
* there is remarkably little concern about how to implement the state
machine in practice, and
* the untutored ways of improvising state machines are very different
from what is generated by tools like yacc.
So, for instance, while distinguishing where the rules are coming from
(1a and 1b) is important for hand-maintainability, it's not nearly so
important for a formal description, and it actually complicates the
description.
Likewise, working with paths is a specific way of using the
combinatorial graph used in formally defining the state machine. The
literature concentrates on how the mature automatic generators work
(e.g., rigging the control flow of the generated parser to process
the initial conditions of rules with shared initial conditions only
once, and working well with an incoming token stream).
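That "shared initial conditions processed only once" point can be illustrated by merging alternatives with a common prefix into a single trie of states, which is roughly the effect generated parsers achieve in their control flow. A hypothetical sketch (names and representation are mine):

```python
# Hypothetical sketch: alternatives sharing initial symbols are merged
# into one prefix trie, so the shared prefix is a single state path.
def build_states(alternatives):
    """Build a prefix trie; each node maps a symbol to a child node,
    with an '<accept>' marker where an alternative ends."""
    root = {}
    for alt in alternatives:
        node = root
        for sym in alt:
            node = node.setdefault(sym, {})
        node["<accept>"] = True
    return root

alts = [["if", "E", "then", "S"],
        ["if", "E", "then", "S", "else", "S"]]
trie = build_states(alts)

# Both alternatives flow through the single shared "if E then S" path;
# only after "S" does the machine branch on the presence of "else".
print(len(trie))  # 1: a single "if" edge out of the start state
```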