Newsgroups: comp.compilers
From: jmccarty@spdmail.spd.dsccc.com (Mike McCarty)
Keywords: parse, LALR, LL(1)
Organization: DSC Communications Corporation, Plano, Texas USA
References: 95-11-051 95-11-138 95-11-195
Date: Wed, 29 Nov 1995 17:04:25 GMT
Bill Leonard <Bill.Leonard@mail.hcsc.com> wrote:
First, I really don't want to disagree with your position so much as
perhaps temper your willingness to stand quite so far to one side.
)"steve (s.s.) simmons" <simmons@bnr.ca> writes:
)> Intellectual bigotry!!!!!
)>
)> That is, the automated parsers use a great deal of automata theory
)> which helps build on the computer science foundation. Recursive descent
)> parsers are a small matter of programming (SMOP). I do notice among
)> peers in industry that people are no longer snubbing the idea of writing
)> a recursive descent parser when it is appropriate.
)
)I've noticed that, too, and I really wonder why. At the same time that
)we're trying desperately to develop tools that do the programming for you
)in other areas of software development, and we already have tools (e.g.,
)yacc) that can program a parser for you, why are we regressing?
I'm not so sure that you should call this "regressing".
First, programs have to be maintained. I work in industry
(telecommunications), and find that most of the "programmers" are EEs.
They don't understand LEX and YACC. They usually do understand
recursive descent. I ripped out a shift-reduce parser from our debugger
when I took it over. It had several defects because it had been
maintained by several programmers who didn't completely understand how
it worked. It also had no operator precedence built in. I replaced it
with a recursive descent parser, which has worked perfectly ever since,
despite being maintained by several programmers. It was also smaller in
actual code size.
Second, there =are= some automated tools around which generate recursive
descent parsers. I like (for myself) using tools which are appropriate
for the job. Bottom-up is not inherently better than top-down, if we
are talking purely about technique.
Third, hand-crafted parsers have been faster, in my experience. LEX in
particular (I know, not a parser) generates slower, bigger output than
a hand-crafted lexical scanner, and I can usually turn out a lexical
scanner in about the same amount of time doing it either way.
)I don't think it is intellectual bigotry (to use automated parsers vs.
)hand-crafted ones), I think it's just common sense. We have automated
)means of generating parsers, why not use them?
Sure, when appropriate. But I work at a company where it is not our
"job" to write parsers. We have experts in telecomm, not parsing. But we
do need special debuggers etc. with command interfaces. These things
need to be maintained by non-parsing-experts.
)If I were in charge of a compiler project (and I have been), I'd much
)rather devote engineering resources to making the generated code better or
)the compiler faster or any of a number of other improvements than in coding
)a parser.
I've been there, too. And I agree with you on this point. But I have
seen hand-generated recursive descent parsers outdo YACC for speed. I
have also seen them outdo precedence driven shift/reduce type expression
analyzers.
)For certain compilers, there may be good reasons not to use an automated
)parser, but in general I'd recommend using a parser generator. Why
)reinvent the wheel?
)
)Mr. Simmons says recursive descent parsers are a "small matter of
)programming". How small? If it takes more than a couple of hours to
)write, it's too much, because it will also take up time later on for others
)to learn and fix. All projects have a fixed amount of programmer
)resources, so any time spent on a parser is that much less spent elsewhere.
This particular argument cuts both ways. It takes time for others to
learn, understand, and fix YACC input. And if it's not "what you do" to
parse, then you will not have parsing experts around to understand what
the parser is and how it does its work.
)In my opinion, parser generators have (at least) the following advantages
)over hand-crafted parsers:
)
) 1. It is easier to find the semantic action code -- it isn't buried
) amongst the parsing. This makes for easier maintenance.
I'm afraid you lost me there. I find YACC style input definitely and
distinctly difficult to read, as it intermixes parsing and semantic
actions. Usually, if the semantic action is more than a line or two, I
just put it in a routine with a name like "do_xxx". I think that we
don't have -any- tools right now which sufficiently separate the
syntactic parts of language processing from the semantic part. I wish
we did.
) 2. It is easier to change the language. Even standardized languages
) change occasionally, but more importantly you may find that your
) interpretation of the language is incorrect. If you're dealing with a
) language like Fortran, you'll also find that you're constantly asked
) to add extensions to the language. (I suspect C++ will also
) experience this phenomenon for years after the language is
) standardized.)
By whom is it easier to change?
)I've heard many people say that hand-crafted parsers do better at error
)recovery. That may be true if you're comparing to yacc, but there are
)other parser generators that do better at error recovery, many of which
)have features for tuning the error recovery. Furthermore, hand-crafted
)parsers only do better if you spend a considerable amount of time tweaking
)it. If you spend a comparable amount of time tuning your generated parser,
)I bet you'd get pretty close to the same quality.
Error recovery is always difficult. I like the exception handling of
Ada for this kind of thing. But I agree that it is difficult, and I
don't think that the tools we have today are really adequate, not any
of them.
)Besides that, I've seen cases where a project chose a hand-crafted parser
)because the error recovery would allegedly be better, only to end up
)spending so much time just getting the parser right that there was no time
)to implement good error recovery! Conversely, you can spend so much time
)on the error recovery that the generated code stinks!
)
)> Remember you don't take a compiler course to learn how to build a
)> compiler, surprise... Most people never write a compiler. You take
)> it for the following reasons:
)>
)> - Improving your comp. sci. background to understand
)> automata theory with a very good application.
)
)Many curricula separate automata theory from compilers, and in those cases
)the compiler course assumes you already know automata theory.
I have believed for many years that the bachelors level degree only
confers two things:
vocabulary
a certain level of mechanical expertise in handling that
vocabulary
Graduate-level degrees only confer two more things:
facility in researching
ability to explore
Real understanding comes from:
doing real work
teaching others how to do real work
I learned more calculus from my students when I was a graduate student
than I did from my professor when I took it.
Mike
--