From: "Costello, Roger L." <email@example.com>
Date: Tue, 20 Jan 2015 16:08:06 +0000
Keywords: lex, question, comment
Posted-Date: 20 Jan 2015 13:27:56 EST
I have read the section in Modern Compiler Design that discusses lexical
analysis. I understand the subset algorithm it describes (neat algorithm!).
With pencil and paper I can take a set of regular expressions and follow the
subset algorithm to generate a transition table. But writing actual code to do
this conversion will require more algorithms, I think. Is there an algorithm
that describes how to read in a set of strings that represent regular
expressions and create data structures that are well-suited to processing by
the subset algorithm? More broadly, what are the set of algorithms needed to
go from a set of strings that represent regular expressions to the precomputed
transition table? Can you refer me to a book or article that does a good job
in describing this?
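[To make the missing steps concrete, here is a minimal sketch in Python of the
usual pipeline: a recursive-descent parser turns the regex string directly into
an NFA via Thompson's construction, and the subset algorithm then converts that
NFA into a DFA transition table. It handles only literals, '|', '*', and
parentheses; all names here are illustrative, not from any particular book. -ed]

```python
class State:
    """An NFA state: edges is a list of (symbol, next); symbol None = epsilon."""
    def __init__(self):
        self.edges = []

def parse(pattern):
    """Parse a regex into an NFA fragment; returns (start_state, accept_state)."""
    pos = 0

    def peek():
        return pattern[pos] if pos < len(pattern) else None

    def eat(c):
        nonlocal pos
        assert peek() == c, "unexpected character at %d" % pos
        pos += 1

    def alternation():            # alt := concat ('|' concat)*
        start, end = concat()
        while peek() == '|':
            eat('|')
            s2, e2 = concat()
            s, e = State(), State()
            s.edges += [(None, start), (None, s2)]   # branch into both arms
            end.edges.append((None, e))
            e2.edges.append((None, e))
            start, end = s, e
        return start, end

    def concat():                 # concat := repeat+
        start, end = repeat()
        while peek() not in (None, '|', ')'):
            s2, e2 = repeat()
            end.edges.append((None, s2))             # chain fragments together
            end = e2
        return start, end

    def repeat():                 # repeat := atom '*'?
        start, end = atom()
        while peek() == '*':
            eat('*')
            s, e = State(), State()
            s.edges += [(None, start), (None, e)]    # skip or enter the loop
            end.edges += [(None, start), (None, e)]  # repeat or leave the loop
            start, end = s, e
        return start, end

    def atom():                   # atom := '(' alt ')' | literal
        if peek() == '(':
            eat('(')
            start, end = alternation()
            eat(')')
            return start, end
        c = peek()
        eat(c)
        s, e = State(), State()
        s.edges.append((c, e))
        return s, e

    start, accept = alternation()
    assert peek() is None, "trailing input"
    return start, accept

def eps_closure(states):
    """All states reachable from `states` via epsilon edges alone."""
    stack, seen = list(states), set(states)
    while stack:
        for sym, nxt in stack.pop().edges:
            if sym is None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return frozenset(seen)

def subset_construct(start, accept):
    """Subset construction: returns (start_set, transition_table, accepting_sets)."""
    start_set = eps_closure({start})
    table = {}                                   # (state_set, symbol) -> state_set
    work, seen = [start_set], {start_set}
    while work:
        cur = work.pop()
        symbols = {sym for st in cur for sym, _ in st.edges if sym is not None}
        for sym in symbols:
            nxt = eps_closure({n for st in cur for s, n in st.edges if s == sym})
            table[(cur, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    accepting = {s for s in seen if accept in s}
    return start_set, table, accepting

def matches(pattern, text):
    """Build the DFA table once, then run the text through it."""
    start, accept = parse(pattern)
    cur, table, accepting = subset_construct(start, accept)
    for ch in text:
        cur = table.get((cur, ch))
        if cur is None:                          # no transition: reject
            return False
    return cur in accepting
```

[In a real generator the state sets would then be renumbered 0..n and the table
flattened into an array indexed by (state, input character), which is the
precomputed table the question asks about. -ed]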
[Unless I misunderstand your question, this is exactly what lexer generators
like lex and flex do. You give it a set of regular expressions, it generates
tables that their state machine uses to match the RE's. -John]