dynamic lex library package? firstname.lastname@example.org (1992-07-14)
Re: dynamic lex library package? email@example.com (1992-07-15)
Re: dynamic lex library package? firstname.lastname@example.org (1992-07-15)
From: email@example.com (Per M. Bothner)
Organization: Computer Science Department, Stanford University.
Date: Tue, 14 Jul 1992 20:45:36 GMT
I'm looking for a lex "library," similar to existing regex libraries, but
with the extra capabilities of lex (specifically, multiple regular
expressions). What I mean is something from which one could build a C++
class like:
Lexer mylex("pattern1\npattern2"); // For example
This creates the lexer 'mylex', with two rules.
mylex.scan(cin, ...); // where cin is an input file
This returns 0 if pattern1 is matched, 1 if pattern2 is matched, and -1 if
there is no match. The file is advanced by the number of characters
corresponding to the matched string, or 0 if there is no match. (This
assumes a file (streambuf) implementation that allows full backup - as the
libg++ version does.)
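A minimal sketch of such an interface, assuming modern C++ and <regex>
(neither of which existed in 1992 - this only illustrates the proposed
semantics, not an actual implementation; for simplicity it scans a string
with an explicit position rather than a streambuf with full backup):

```cpp
#include <regex>
#include <sstream>
#include <string>
#include <vector>

class Lexer {
public:
    // Rules are separated by newlines, as in "pattern1\npattern2".
    explicit Lexer(const std::string& patterns) {
        std::istringstream in(patterns);
        for (std::string pat; std::getline(in, pat); )
            rules_.emplace_back(pat);
    }

    // Returns the index of the matching rule at 'pos', using lex's
    // longest-match convention (ties broken by rule order), and
    // advances 'pos' past the match; returns -1 and leaves 'pos'
    // unchanged if no rule matches.
    int scan(const std::string& input, std::size_t& pos) const {
        int best = -1;
        std::size_t best_len = 0;
        for (std::size_t i = 0; i < rules_.size(); ++i) {
            std::smatch m;
            // match_continuous anchors the match at the current position.
            if (std::regex_search(input.cbegin() + pos, input.cend(), m,
                                  rules_[i],
                                  std::regex_constants::match_continuous)
                && static_cast<std::size_t>(m.length(0)) > best_len) {
                best = static_cast<int>(i);
                best_len = m.length(0);
            }
        }
        if (best >= 0)
            pos += best_len;
        return best;
    }

private:
    std::vector<std::regex> rules_;
};
```

For example, `Lexer mylex("[0-9]+\n[a-z]+");` followed by
`mylex.scan("abc123", pos)` with `pos == 0` would return 1 (the second
rule) and advance `pos` to 3. A real lex-style engine would instead
compile all the rules into a single DFA rather than trying each regex in
turn.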
Application: Lexers created dynamically (without a compile-link cycle) can
be useful in interpretive languages. (A specific example is "expect.")
Also, the Lexer class could subsume many of the other things people use
regexes for (assuming reasonable efficiency for lexer generation and space
usage as well as matching).
Since flex is table-driven, I was thinking of hacking flex until it does
what I need. Is this something reasonable to do? Does anyone have
anything better? An alternative is to modify some regex package - since
*most* of the applications would just use one or a few regexes. Any
opinions on regex packages? I'm aware of GNU regex, agrep (not a library
yet), as well as Henry Spencer's old regex.
Cygnus Support firstname.lastname@example.org email@example.com
[The full DFA conversion that lex does is a lot of work. It'd probably
be easier to fiddle with one of the regex packages that makes an NFA. -John]