Re: bison and/or antlr ?

Chris F Clark <>
Sat, 07 Jul 2007 16:09:27 -0400

          From comp.compilers


From: Chris F Clark <>
Newsgroups: comp.compilers
Date: Sat, 07 Jul 2007 16:09:27 -0400
Organization: The World Public Access UNIX, Brookline, MA
References: 07-06-071 07-07-008 07-07-013
Keywords: tools
Posted-Date: 08 Jul 2007 19:01:53 EDT

George Neuner <> writes:

>>SomeDeveloper <> writes:
>>> 1. If I could learn / master ONLY ONE of these two tools, which one
>>> should I?
>>If you want to just learn a tool (especially just at the pragmatic
>>level), learn ANTLR first.
>>> 2. If I could learn / master BOTH, which one should I pick first?
>>Learning ANTLR (recursive descent parsing) is considerably easier.
>>Learn it first.
> Given some of our go-rounds in the past, I would have expected you to
> advocate the opposite - learning Bison - because LR is more
> educational and because Bison is a standard tool if for no other
> reason.
> I've been of the opinion that predicated LL(k) is "good enough" for
> most uses for quite a long time - certainly for any reasonable
> programming language grammar. I would still choose an LR tool for
> really complex, ambiguous grammars, or for small targets where state
> machines would produce tighter code for the same grammar complexity (I
> used to work in embedded systems).
> Have you finally decided that predicated LL(k) is worthwhile or are
> you just bowing to pragmatics ... any tool is better than people
> rolling their own?

Yes, I suppose that's a surprising opinion coming from me, a die-hard
LR fan. I still think that overall LR is a better technique for
serious parsing problems. And, if one were designing an expression
processor, I would advocate LR + precedence. However, I'm also of the
opinion that most language designs should strive to be LL, when
possible, and when it isn't possible, one should seriously ask
oneself why not, and what one would have to give up to make it
possible. Moreover, I agree that using an LL tool to generate a
recursive descent parser is probably the simplest methodology to
learn and most accessible for a beginner, and certainly adequate for
most one-off problems.

You correctly caught my one sticking point. I have no respect for
hand-written recursive descent. And it's my one problem with
machine-generated recursive descent: some fool may later come along and
hack the output rather than modifying the grammar. However, that one
flaw doesn't detract from tool-generated recursive descent LL parsing.
It is still the most pedagogically simple approach. In fact, if I
ever were to teach parsing, I would start there. I would, of course,
continue on to cover LR parsing (and NFA to DFA conversion,
emphasizing that they are essentially the same idea), because I think
those are algorithms any competent person should know. Still, the
essence of parsing is captured well in recursive descent.
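
That essence can be sketched in a few lines of Python: one parsing
function per nonterminal, driven by one token of lookahead. The toy
grammar and the names here are illustrative, not the output of any
particular generator.

```python
# Recursive descent for the toy LL(1) grammar:
#   list  : '(' items ')'
#   items : NUM (',' NUM)*
# One parsing function per nonterminal, one token of lookahead.

def parse_list(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expect(tok):
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        return advance()

    def number():
        if peek() is None or not peek().isdigit():
            raise SyntaxError(f"expected number, got {peek()!r}")
        return int(advance())

    def items():                      # items : NUM (',' NUM)*
        nums = [number()]
        while peek() == ',':
            advance()
            nums.append(number())
        return nums

    expect('(')                       # list : '(' items ')'
    result = items()
    expect(')')
    return result
```

For example, `parse_list(['(', '1', ',', '2', ')'])` returns `[1, 2]`.
Each grammar rule maps directly onto one function body, which is why
this style is so easy to teach and to read.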

There is a related question which you didn't ask and I started to
answer above. This is how I would apply technologies to solve a
parsing problem.

                LL parsing is the simplest. (True regular expressions are
                simpler, but regular expressions plus back-references are
                not.) Almost all of one's grammar should be LL (or LL plus
                pure regular expressions). The main place where LL parsing
                can fall down is in expressions. I find that the rules needed
                to create precedence hierarchies make the grammar more
                complex and not any clearer.
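
The hierarchy in question looks like the following sketch (a
hypothetical toy grammar): each precedence level becomes its own
nonterminal and its own function, so every new operator level grows
the grammar.

```python
# One recursive function per precedence level (expr -> term -> factor).
# Adding another operator level means adding another rule and another
# function, which is the complexity the hierarchy imposes on LL.

def evaluate(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():                       # expr : term (('+' | '-') term)*
        value = term()
        while peek() in ('+', '-'):
            if advance() == '+':
                value += term()
            else:
                value -= term()
        return value

    def term():                       # term : factor (('*' | '/') factor)*
        value = factor()
        while peek() in ('*', '/'):
            if advance() == '*':
                value *= factor()
            else:
                value //= factor()    # integer division for simplicity
        return value

    def factor():                     # factor : NUM | '(' expr ')'
        if peek() == '(':
            advance()
            value = expr()
            if advance() != ')':
                raise SyntaxError("expected ')'")
            return value
        return int(advance())

    return expr()
```

Two operator levels already need three nonterminals; a C-sized
operator table needs more than a dozen.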

                SLR and LALR parsing is next up the ladder. There is rarely
                any need to use these techniques by themselves.

                SLR or LALR parsing with precedence (and associativity)
                declarations is the next step. This is the sweet spot for
                grammars with complicated expression rules. However, at this
                level one has ascended the ladder and gotten on a trapeze. It
                is easy to write precedence declarations that simply suppress
                legitimate warnings and have them apply to cases you don't
                want. Thus, one wants to know what one is doing and be
                cautious at this stage.
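
What the declarations buy can be seen in a precedence-climbing sketch,
which imitates the effect of %left declarations in a yacc/Bison
grammar (an illustration of the idea only, not Bison's actual LALR
machinery): the whole hierarchy collapses into a table with one entry
per operator.

```python
import operator

# One table entry per operator replaces one grammar rule per precedence
# level. This imitates what %left declarations achieve in yacc/Bison;
# it is a sketch of the idea, not the LALR machinery itself.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.floordiv}

def parse_expr(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def climb(min_prec):
        left = int(advance())                     # atom : NUM
        while peek() in PREC and PREC[peek()] >= min_prec:
            op = advance()
            # left associativity: the right operand must bind tighter
            right = climb(PREC[op] + 1)
            left = OPS[op](left, right)
        return left

    return climb(1)
```

Note how easy it is to get associativity subtly wrong here (drop the
`+ 1` and subtraction becomes right-associative), which is the same
kind of quiet mistake a careless %prec declaration makes.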

                Predicated (and boolean) grammars are the next step. If one
                needs this facility, one should ask carefully why, and
                whether the resulting language is actually well-defined.
                This is particularly true if one is using a PEG tool that
                doesn't check the grammar for consistency. Essentially, at
                this level one is operating without a net. I would put
                hacking the output of an RD parser generator (or doing RD
                by hand) at this level as well.
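
The hazard can be made concrete with a PEG-style sketch (toy grammar,
hypothetical names): the first alternative is speculatively tried,
predicate-fashion, and backed out of on failure, so the *order* of
alternatives silently decides what the language is.

```python
# stmt : ID '=' NUM    (assignment -- the predicated alternative)
#      | ID | NUM      (bare expression)
# The ordered choice below resolves the overlap between the two
# alternatives by fiat. Whether the language is still well-defined
# is exactly the question no tool is checking for you here.

def parse_stmt(tokens):
    def assignment(pos):
        if (pos + 2 < len(tokens) and tokens[pos].isalpha()
                and tokens[pos + 1] == '='):
            return ('assign', tokens[pos], int(tokens[pos + 2]))
        return None                     # predicate fails: backtrack

    def expression(pos):
        tok = tokens[pos]
        return ('name', tok) if tok.isalpha() else ('num', int(tok))

    # ordered choice: try the predicated alternative, then fall back
    node = assignment(0)
    if node is None:
        node = expression(0)
    return node
```

Swap the two alternatives and `x = 3` parses as the bare expression
`x` with trailing garbage; nothing in the formalism warns you.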

                Ambiguous grammars (e.g. GLR or Earley parsing) are a step
                beyond that. You have clearly let go of the trapeze bar and
                are expecting something else to catch you. You need to be
                confident in that something else.

Does that satisfy your curiosity?

Chris Clark Internet :
Compiler Resources, Inc. Web Site :
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
