Why separate Lexical & Parser Generators heronj@smtplink.NGC.COM (John Heron) (1994-10-05)
Re: Why separate Lexical & Parser Generators email@example.com (Anders Andersson) (1994-10-06)
Re: Why separate Lexical & Parser Generators firstname.lastname@example.org (1994-10-06)
Re: Why separate Lexical & Parser Generators email@example.com (1994-10-07)
Why separate the lexer and parser? firstname.lastname@example.org (Mark Hopkins) (1994-10-09)
Re: Why separate Lexical & Parser Generators email@example.com (John Lacey) (1994-10-10)
Re: Why separate Lexical & Parser Generators firstname.lastname@example.org (1994-10-10)
[12 later articles]
From: John Heron <heronj@smtplink.NGC.COM>
Keywords: lex, yacc, question, comment
Date: Wed, 5 Oct 1994 01:30:49 GMT
Pardon me if this question is naive. Why have a separate parser generator
and lexical analyzer generator? It seems to me that the generator could
recognize the regular portions of the grammar by starting at the terminal
symbols and working its way up until it sees a non-regular production.
After separating the grammar into two parts, one regular and one
context-free, you could proceed to build two separate FSMs. Are we still
using separate tools like yacc and lex just because they're widely
available, and widely understood? Or is there some other technical,
engineering, or business reason for doing things the traditional way?
***** John Heron, Network General Corp. - email - email@example.com
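[To make the division of labor concrete, here is a minimal sketch of the
usual pipeline. The grammar and all names are invented for illustration;
it is not what lex/yacc generate, but it shows the split the question is
about: the lexer handles the regular part of the language (tokens, and it
silently discards whitespace and comments), while the parser handles the
context-free part (nesting), which needs a stack that no finite automaton
provides.]

```python
import re

# Lexer: the "regular" part of the grammar. Tokens are describable by
# regular expressions, so a finite-state matcher suffices. Whitespace
# and "#" comments are consumed and dropped here, which is why the
# parser's grammar never has to mention them.
TOKEN_RE = re.compile(r"\s+|#[^\n]*|(?P<NUM>\d+)|(?P<OP>[-+()])")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError("bad character at position %d" % pos)
        pos = m.end()
        if m.lastgroup:          # None means whitespace/comment: skip it
            tokens.append((m.lastgroup, m.group()))
    return tokens

# Parser: the context-free part. Parenthesized nesting requires
# recursion (a stack), which is exactly what a pure FSM cannot do.
def parse_expr(tokens, i=0):
    value, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i][1] in "+-":
        op = tokens[i][1]
        rhs, i = parse_term(tokens, i + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, i

def parse_term(tokens, i):
    kind, text = tokens[i]
    if text == "(":
        value, i = parse_expr(tokens, i + 1)
        return value, i + 1      # skip the closing ")"
    return int(text), i + 1

def evaluate(text):
    value, _ = parse_expr(tokenize(text))
    return value
```

[Note how evaluate("(1 + 2) - 3  # trailing comment") never shows the
comment to the parser at all; the lexer has already thrown it away.]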
[The short answer is that they do different things. There are lots of places
where lex is useful without yacc, and some where yacc is useful without lex.
Also, if you think it'd be easier to write one big grammar, try writing the
BNF for a language, including indicating all the places where you can have
comments. Don't forget to make provisions for tracking line numbers, too.