|advice on lexing/parsing split (novice) email@example.com (Scott Finnie) (1998-07-17)|
|Re: advice on lexing/parsing split (novice) firstname.lastname@example.org (Quinn Tyler Jackson) (1998-07-20)|
|From:||Scott Finnie <email@example.com>|
|Date:||17 Jul 1998 10:06:10 -0400|
|Keywords:||lex, parse, question|
Please forgive what is probably a very basic question...
We have a need to write parsers for a family of related files. All use
a similar representation, based loosely on the microsoft '.ini' file
format. Our thoughts at present are
1. Build a generic lexer capable of tokenising the input.
2. Build / extend parsers to handle the specific grammars of each
file type.
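For concreteness, a file in this loose '.ini' style might look like the
fragment below (our guess at the general shape, not the actual format):

```ini
; a section header in square brackets, followed by
; name-value pairs belonging to that section
[entry]
name = value
timeout = 30

[another-entry]
colour = blue
```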
My questions are
1. Is this a sensible approach to take?
2. Assuming it is, any guidance on the level to pitch tokens at? The
two options we see are:
(a) tokens are block delimiters (entries in square brackets,
e.g. [entry]) and values (e.g. name-value pairs);
(b) tokens would be complete sections; i.e. the block delimiter
([entry]) and all associated attribute values.
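To illustrate option (a), here is a minimal sketch of such a lexer in
Python. The token names (SECTION, PAIR, ERROR) and the tokenise helper
are our own invention, not from any library; a real lexer for your
grammars would of course differ:

```python
import re

# Section headers like [entry]; capture the name inside the brackets.
SECTION_RE = re.compile(r'\[\s*([^\]]+?)\s*\]\s*$')
# Name-value pairs like name = value.
PAIR_RE = re.compile(r'([^=\s][^=]*?)\s*=\s*(.*)$')

def tokenise(text):
    """Emit option-(a) tokens: one per block delimiter or name-value pair."""
    tokens = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith(';'):   # skip blanks and comments
            continue
        m = SECTION_RE.match(line)
        if m:
            tokens.append(('SECTION', m.group(1), lineno))
            continue
        m = PAIR_RE.match(line)
        if m:
            tokens.append(('PAIR', (m.group(1), m.group(2)), lineno))
            continue
        # Unrecognised line: hand it to the parser as an error token
        # rather than aborting, so the parser can report it in context.
        tokens.append(('ERROR', line, lineno))
    return tokens
```

With this shape, each file-specific parser consumes the same token
stream and applies its own grammar; option (b) would instead push the
grouping of a [entry] header with its pairs down into the lexer, which
tends to duplicate parser work there.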
Any help (including pointers to relevant reference material) gratefully
received.
PS: please copy any responses by email (firstname.lastname@example.org) - our news
server is a bit erratic.