DFA transition table reduction by alphabet reduction. email@example.com (Christian Riis) (2003-02-21)
Re: DFA transition table reduction by alphabet reduction. firstname.lastname@example.org (Clint Olsen) (2003-02-24)
Re: DFA transition table reduction by alphabet reduction. email@example.com (Christian Riis) (2003-03-09)
Re: DFA transition table reduction by alphabet reduction. firstname.lastname@example.org (Glen Herrmannsfeldt) (2003-03-14)
From: "Glen Herrmannsfeldt" <email@example.com>
Date: 14 Mar 2003 11:38:12 -0500
References: 03-02-106 03-02-138 03-03-016
Posted-Date: 14 Mar 2003 11:38:12 EST
"Christian Riis" <firstname.lastname@example.org> wrote in message
(snip on rewriting DFA's with a smaller alphabet and more states)
> Yes well, I'm writing a project about the method I have described
> above. As such I don't care much about how well it performs but about
> the theoretical "virtues" of the method I have selected. It could
> revolutionize the world of lexer generation or suck immensely and it
> would all be the same to me, although I would prefer the former
> alternative. :) As far as I can see re2c bears only a slight
> resemblance to what I'm interested in (i.e. the part about range
> tests). It appears to me to be a relatively straightforward lexer
> generator whose strength lies in generality.
It will take twice as many state transitions to get anywhere, but the
table might be smaller.
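As a concrete illustration of that tradeoff (my own toy example, not from the original thread): a DFA over the 256-byte alphabet can be rewritten over a 16-symbol nibble alphabet by feeding each byte as two 4-bit symbols, high nibble first. Each input byte now costs two transitions instead of one, and each table row shrinks from 256 entries to 16, though intermediate states are added.

```python
# Sketch: convert a byte-alphabet DFA into an equivalent nibble-alphabet DFA.
# delta is a dict: delta[state][byte] -> next state.

def run_byte_dfa(delta, start, data):
    s = start
    for b in data:
        s = delta[s][b]
    return s

def nibble_dfa(delta):
    # Intermediate state (s, hi) remembers the high nibble seen so far.
    d2 = {}
    for s, row in delta.items():
        d2[s] = {hi: (s, hi) for hi in range(16)}
        for hi in range(16):
            d2[(s, hi)] = {lo: row[hi * 16 + lo] for lo in range(16)}
    return d2

def run_nibble_dfa(d2, start, data):
    s, steps = start, 0
    for b in data:
        s = d2[s][b >> 4]      # first transition: high nibble
        s = d2[s][b & 15]      # second transition: low nibble
        steps += 2
    return s, steps

# Toy byte-DFA: state 1 accepts strings consisting only of 'a' bytes.
delta = {
    0: {b: 0 for b in range(256)},                        # dead state
    1: {b: (1 if b == ord('a') else 0) for b in range(256)},
}
d2 = nibble_dfa(delta)
assert run_byte_dfa(delta, 1, b"aaaa") == run_nibble_dfa(d2, 1, b"aaaa")[0]
```

Note that the naive construction above adds 16 intermediate states per original state, so the nibble table can actually be larger; the saving only shows up when many high nibbles behave identically and DFA minimization merges the intermediate states, which is why the table only "might" be smaller.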
I once was writing DFA for an alphabet size of four. I considered
increasing that to a power of four so that transitions would be
faster. The table grows too fast, though.
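The growth rate is easy to see (simple arithmetic, assumed numbers): packing k symbols of a 4-letter alphabet into one super-symbol gives an alphabet of 4**k, so each state's table row quadruples with every extra packed symbol while the transitions per input shrink by a factor of k.

```python
def row_entries(k):
    # Entries in one state's table row when k base-4 symbols are packed.
    return 4 ** k

for k in (1, 2, 3, 4):
    print(k, row_entries(k))
# Row size quadruples each time: 4, 16, 64, 256 entries per state.
```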
I did once run a DFA with a table of nearly 1GB (on a 2GB machine).
Four byte pointers and four per state, so about 60M states. With no
locality the processor cache didn't help at all, so it ran at memory
speed.
I would say it should be done when memory is more expensive than
processor time.
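A quick back-of-the-envelope check of those figures (taking the stated values at face value): four 4-byte pointers per state is 16 bytes per state, so roughly 60M states lands near the 1 GB mark.

```python
# Rough check of the table size quoted above (assumed exact values).
bytes_per_state = 4 * 4          # four 4-byte pointers per state
states = 60_000_000
table_bytes = states * bytes_per_state
print(table_bytes)               # 960 MB in decimal units, close to 1 GB
```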