Summary: NFA to DFA, minimize DFA

Newsgroups: comp.compilers
From: roberto@cernvax.cern.ch (roberto bagnara)
Keywords: NFA, DFA, lex, summary
Organization: CERN, Geneva, Switzerland
Date: 14 Jan 92 08:29:19 GMT

As requested, I am posting a summary of the answers I got:


START OF SUMMARY
-----------------------------------------------------------------------------
Reply-To: pardo@cs.washington.edu


Both egrep and GNU grep convert the NFA to a DFA lazily (incrementally).
Furthermore, a cache of DFA states is used, so if memory use gets ``too
large'' the cache is simply flushed and translation is restarted.


The idea is that at any given time you'll only be using a small number
of the potentially exponential number of states in the DFA. The inner
loop looks something like:


input_character = getchar();
next_state = curr->transition[input_character];
if (next_state == NEEDED) {
        /* this transition has not been built yet: run one step of the
           subset construction now and cache the resulting DFA state */
        curr = build_and_install(dfa, curr, input_character, nfa);
} else if (next_state == FINAL) {
        /* FINAL marks an accepting state: handle the match */
        ...
} else {
        /* transition already cached: just follow it */
        curr = next_state;
}


I don't know how the actual translation is done (it "shouldn't" be hard);
see the source code, I suppose.
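For illustration, here is a rough sketch in C of what a build_and_install step
could look like. It is not taken from the egrep or GNU grep sources; the
64-state bitset encoding of NFA state sets and every name in it are assumptions
made for the example.

#include <stdint.h>
#include <stdlib.h>

#define NSTATES 64                          /* toy limit: a state set fits in one word */
typedef uint64_t stateset;                  /* one bit per NFA state */

struct nfa {
    stateset delta[NSTATES][256];           /* delta[s][c]: NFA states reachable from s on c */
    stateset eps[NSTATES];                  /* epsilon successors of s */
};

struct dstate {                             /* one lazily built DFA state */
    stateset set;                           /* the NFA states it stands for */
    struct dstate *out[256];                /* NULL: transition not built yet */
    struct dstate *link;                    /* chain of all cached DFA states */
};

/* Close a state set under epsilon moves. */
static stateset closure(const struct nfa *n, stateset s)
{
    stateset old;
    do {
        old = s;
        for (int i = 0; i < NSTATES; i++)
            if (s & ((stateset)1 << i))
                s |= n->eps[i];
    } while (s != old);
    return s;
}

/* Build (or find in the cache) the DFA state reached from d on character c. */
struct dstate *build_and_install(const struct nfa *n, struct dstate **cache,
                                 struct dstate *d, int c)
{
    stateset target = 0;
    for (int i = 0; i < NSTATES; i++)       /* subset construction, for this one edge only */
        if (d->set & ((stateset)1 << i))
            target |= n->delta[i][c];
    target = closure(n, target);

    struct dstate *s;
    for (s = *cache; s != NULL; s = s->link) /* reuse an already-built DFA state if possible */
        if (s->set == target)
            break;
    if (s == NULL) {                        /* otherwise install a fresh one */
        s = calloc(1, sizeof *s);
        s->set = target;
        s->link = *cache;
        *cache = s;
    }
    d->out[c] = s;                          /* cache the edge so it is built only once */
    return s;
}

Flushing the cache when it grows too large then just means freeing the chain of
dstates and letting the inner loop rebuild states on demand.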


-----------------------------------------------------------------------------
Reply-To: Tony Zamora <azamora@hexagon.cs.indiana.edu>


I recommend not converting the NFA to a DFA. Instead, I would use an
algorithm that simulates the NFA directly. There is a reasonably good
algorithm for doing this in The Design and Analysis of Computer Algorithms
by Aho and Hopcroft (or Ullman, I forget). It probably can be tuned to run
about as fast as an algorithm that simulates a DFA. This way, you can
avoid the conversion step altogether, and you don't have to worry about the
possibility of an exponential explosion of states in the DFA.
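For what it's worth, here is a minimal sketch of that idea in C (not the
algorithm from the book verbatim): keep the set of NFA states you could
currently be in, and advance the whole set on each input character. The
64-state bitset encoding, the assumption that state 0 is the start state, and
all names are illustrative only.

#include <stdint.h>

#define NSTATES 64                         /* toy limit: a state set fits in one word */
typedef uint64_t stateset;

struct nfa {
    stateset delta[NSTATES][256];          /* delta[s][c]: states reachable from s on c */
    stateset eps[NSTATES];                 /* epsilon successors of s */
    stateset accepting;                    /* set of accepting states */
};

/* Close a state set under epsilon moves. */
static stateset closure(const struct nfa *n, stateset set)
{
    stateset old;
    do {
        old = set;
        for (int s = 0; s < NSTATES; s++)
            if (set & ((stateset)1 << s))
                set |= n->eps[s];
    } while (set != old);
    return set;
}

/* Run the NFA over a NUL-terminated input; accept if any accepting state is live. */
int nfa_match(const struct nfa *n, const unsigned char *input)
{
    stateset current = closure(n, (stateset)1 << 0);   /* state 0 assumed to be the start */
    for (; *input != '\0'; input++) {
        stateset next = 0;
        for (int s = 0; s < NSTATES; s++)
            if (current & ((stateset)1 << s))
                next |= n->delta[s][*input];
        current = closure(n, next);
    }
    return (current & n->accepting) != 0;
}

Each input character costs time proportional to the number of NFA states, which
is the price paid for never materializing the (potentially exponential) DFA.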




-----------------------------------------------------------------------------
Reply-To: Jorma Kuha <JKUHA@CC.HELSINKI.FI>


I think you might find the article Grosch, J., "Efficient Generation of
Lexical Analysers", Software - Practice & Experience, vol. 19, no. 11
(November 1989) helpful. He introduces a sophisticated 'tunnel
automaton' technique, which makes scanners generated by his Rex four
times smaller and five times faster than those generated by Lex.


-----------------------------------------------------------------------------
Reply-To: zmola@rainbow.uchicago.edu (Carl Zmola)


        To my knowledge this will be pretty tough, and if I were to guess I
would say it is NP-complete. Whatever algorithm is used, it cannot be
polynomial, or else you would prove that P == NP (a pretty good result).
If you find a solution, let me know. :)


        Actually, for your specific case, you might be able to do some
optimization, but the general algorithm will be very hard.


-----------------------------------------------------------------------------
Reply-To: d90-awe@nada.kth.se


I also implemented a program from the algorithms in the Dragon book. My
program is simply yet another lex clone. It's almost complete (it doesn't
support start states, yet :-) ). I've been doing some profiling and
comparing with lex and flex, and my program seems to be between 10 and 20
times slower than flex (which is almost as fast as lex on my system and
with the samples I've been using). Well, that's an "optimized" version of
my program; OK, it's neither completely optimal nor particularly fast, but
I've been doing some profiling and tuning. (At the beginning it was some
400 times slower!) As for memory, my program uses about the same amount
of memory as flex.


Concerning theory, I don't know any better algorithm for converting an NFA
to a DFA. I looked at the flex source to see if they used some other
algorithm, but as far as I can tell from a quick look, they use the same
algorithm.


-----------------------------------------------------------------------------
Reply-To: Mikko Tiusanen <mikko@uicbert.eecs.uic.edu>


The first part is potentially exponential in complexity: the simple method
of the so-called subset construction (probably the one you have read) is a
good choice. For a faster minimization algorithm for LARGE DFAs, see
Aho-Hopcroft-Ullman: The Design and Analysis of Computer Algorithms,
1974(?), under the topic of finding the coarsest partition compatible with
a function. This gives an n log n algorithm instead of an n^2 one.
Also have a look at some EATCS Bulletins from two years back: there was
some discussion on the subject there.
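For concreteness, here is a small sketch (mine, not from the book) of the
straightforward quadratic refinement that the n log n algorithm improves on:
start from the partition {accepting, non-accepting} and keep splitting a class
whenever two of its states disagree, for some input symbol, on which class
their successors fall into. It assumes a complete transition table with
unreachable states already removed; all names and sizes are illustrative.

#define NSTATES 8                 /* toy DFA size */
#define NSYMS   2                 /* alphabet size */

int delta[NSTATES][NSYMS];        /* DFA transition table */
int accepting[NSTATES];           /* 1 if the state is accepting */
int block[NSTATES];               /* current equivalence class of each state */

void minimize(int nstates)
{
    /* Initial partition: accepting states vs. non-accepting states. */
    for (int s = 0; s < nstates; s++)
        block[s] = accepting[s] ? 1 : 0;
    int nblocks = 2;

    for (;;) {
        int newblock[NSTATES];
        int count = 0;
        for (int s = 0; s < nstates; s++)
            newblock[s] = -1;
        for (int s = 0; s < nstates; s++) {
            if (newblock[s] != -1)
                continue;
            newblock[s] = count;
            /* t joins s's new class only if they were already equivalent and
               agree, for every symbol, on the class of their successors. */
            for (int t = s + 1; t < nstates; t++) {
                if (newblock[t] != -1 || block[t] != block[s])
                    continue;
                int same = 1;
                for (int c = 0; c < NSYMS; c++)
                    if (block[delta[s][c]] != block[delta[t][c]])
                        same = 0;
                if (same)
                    newblock[t] = count;
            }
            count++;
        }
        for (int s = 0; s < nstates; s++)
            block[s] = newblock[s];
        if (count == nblocks)     /* nothing was split: coarsest partition found */
            break;
        nblocks = count;
    }
    /* States with equal block[] values can now be merged into one DFA state. */
}

Hopcroft's n log n version keeps a worklist of (class, symbol) splitters and
always re-examines only the smaller half of a split class, which is where the
log factor comes from.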


-----------------------------------------------------------------------------
Reply-To: hankd@ecn.purdue.edu (Hank Dietz)


Look at PCCTS; in particular at the source code for DLG. It doesn't do
all the minimization possible, but then I don't think A,S,&U do either.


We had some trouble with PCCTS distribution in the past, but now you can
get everything via an email server. Just email to pccts@ecn.purdue.edu.


-----------------------------------------------------------------------------
Reply-To: Josef Grosch <grosch@karlsruhe.gmd.de>


Why don't you give Rex a try? Rex is a scanner generator like Lex and Flex
that contains, among other things, the algorithms you are looking for. I am
quite satisfied with the performance. It is especially fast for so-called
constant regular expressions (see Reference 4). Rex is part of the GMD
toolbox for compiler construction. You can get Rex via ftp (see below).




TOOLBOX(1) GMD-Forschungsstelle-Karlsruhe TOOLBOX(1)




NAME
          toolbox - tool box for compiler construction


DESCRIPTION
          Toolbox is a set of program generators or compiler
          construction tools for nearly all phases of a compiler. The
          compiler construction tools support the automatic generation
          of compilers for imperative programming languages. The
          design goals for this tool box were practical usability,
          significantly reduced construction effort for compilers, and
          high quality of the generated compilers. Especially with
          respect to efficiency, the tools are competitive with
          programming by hand. Currently the tools can generate
          compiler modules in the target languages C and Modula-2.
          First realistic applications demonstrate the excellent
          performance of the tools and show that the tools allow the
          construction of production-quality compilers.


TOOLS
          Rex generator for lexical analyzers
          ...


IMPLEMENTATION LANGUAGES
          Modula-2 or C


TARGET LANGUAGES
          Modula-2 or C


PLATFORMS
          DEC Station / ULTRIX
          VAX / ULTRIX or BSD UNIX 4.2
          SUN 3 or SUN 4 / SunOS
          PCS Cadmus / MUNIX
          others


FILESERVER
          rusmv1.rus.uni-stuttgart.de = 129.69.1.12
          directory: /soft/unixtools/compilerbau


DISTRIBUTION
          Medium: DC 300 A data cartridge, TK 50, or Exabyte in tar format


          source programs in Modula-2 as well as in C
          binaries for SUN 3
          documentation in troff and PostScript format
          example specifications


CONTACT
          J. Grosch, H. Emmelmann
          GMD Forschungsstelle an der Universitaet Karlsruhe
          Vincenz-Priesznitz-Str. 1
          D-7500 Karlsruhe
          Germany
          Tel: +721-6622-26/15
          E-Mail: grosch@karlsruhe.gmd.de


PRICE
          Source licence excluding Beg: 500 DM or 250 US $
          Source licence for Beg : 500 DM or 250 US $






          Rex, Lalr, Ell, Ast, and Ag - Compiler Construction Tools
          =========================================================


          Rex (Regular EXpression tool) is a scanner generator whose
specifications are based on regular expressions and arbitrary
semantic actions written in one of the target languages C or
Modula-2. As scanners sometimes have to consider the context to
unambiguously recognize a token, the right context can be
specified by an additional regular expression and the left
context can be handled by so-called start states. The generated
scanners automatically compute the line and column position of
the tokens and offer an efficient mechanism to normalize
identifiers and keywords to upper or lower case letters. The
scanners are table-driven and run at a speed of 180,000 to
195,000 lines per minute on a MC 68020 processor.


          ...




A comparison of the above tools with the corresponding UNIX tools shows
that significant improvements in terms of error handling as well as
efficiency have been achieved: Rex-generated scanners are 4 times faster
than those of LEX. Lalr-generated parsers are 2-3 times faster than those
of YACC. Ell-generated parsers are 4 times faster than those of YACC.
The input languages of the tools are improvements of the LEX and YACC
inputs. The tools also understand LEX and YACC syntax with the help of
preprocessors. They have been tested by generating scanners and parsers
for e.g. Pascal, Modula, Oberon, and Ada, and found stable.


The tools have been developed using our own Modula-2 compiler called MOCKA
on a MC 68020 based UNIX workstation. They have been ported to the SUN
workstation and been compiled successfully using the SUN Modula-2
compiler. They also run on VAX/BSD UNIX and VAX/ULTRIX machines. This
should assure a reasonable level of portability.


References:


1. J. Grosch, `Generators for High-Speed Front-Ends', LNCS,
          371, 81-92 (Oct. 1988), Springer Verlag.


2. H. Emmelmann, F. W. Schroeer, Rudolf Landwehr, `BEG - a Generator
          for Efficient Back Ends', SIGPLAN Notices, 24, 227-237 (Jul. 1989).


3. W. M. Waite, J. Grosch and F. W. Schroeer, `Three Compiler
          Specifications', GMD-Studie Nr. 166, GMD Forschungsstelle an
          der Universitaet Karlsruhe, Aug. 1989.


4. J. Grosch, `Efficient Generation of Lexical Analysers',
          Software-Practice & Experience, 19, 1089-1103 (Nov. 1989).


5. J. Grosch, `Efficient and Comfortable Error Recovery in Recursive
          Descent Parsers', Structured Programming, 11, 129-140 (1990).


6. J. Grosch, H. Emmelmann, `A Tool Box for Compiler Construction',
          LNCS, 477, 106-116 (Oct. 1990), Springer Verlag.


7. J. Grosch, `Object-Oriented Attribute Grammars', in: Proceedings of the
          Fifth International Symposium on Computer and Information Sciences
          (ISCIS V) (Eds. A. E. Harmanci, E. Gelenbe), Cappadocia, Nevsehir,
          Turkey, 807-816, (Oct. 1990).


8. J. Grosch, `Lalr - a Generator for Efficient Parsers',
          Software-Practice & Experience, 20, 1115-1135 (Nov. 1990).


9. J. Grosch, `Tool Support for Data Structures',
          Structured Programming, 12, 31-38 (1991).


-----------------------------------------------------------------------------
Reply-To: satishc@microsoft.COM


Could you give some idea of what you mean by slow and memory-expensive?
I wrote an Ada version of lex when I was at the University of
Massachusetts. If speed is not a problem, you can run a very tight (space-
wise) version. But for us speed was more of a problem than space,
especially since we were using Ada. The initial version was coded using
pointers instead of arrays. Changing the implementation to use arrays
cut the time in half. Then I optimized the code for collecting and
comparing the state sets. I basically gave up some space for a
considerable improvement in speed. But for me that was very easy to
achieve because I basically worked around Ada's runtime checks etc. So I
am not quite sure how much of a speedup you would get in C (I assume you
coded it in C). I am giving a description of what I did to get the speedup
below. I would send you the code, but I don't have a copy of it right now.
It will take me a week or so to get it (it is on tape at UMASS, I think).


If you profile your code and let me know where you are spending most of
your time, I would be happy to discuss it further and give you an idea of
what might help.


Collecting the NFA state sets was pretty fast. In the initial version, once
I discarded the unimportant states, i.e. the epsilon states, I sorted the
rest of the states and removed duplicates. Obviously this is O(n log n).
But the thing I realized subsequently was that I only needed to remove
duplicates and didn't need to sort the state set PROVIDED I could compare
unsorted lists as fast as sorted lists. This is where I traded space for
time. I established a Mark array as large as the NFA state set. When
removing duplicates, the core algorithm I followed was


for each i in Collected_Set
        if not Mark(i) then
                Mark(i) = true
                Add i to Optimized_Set
        end if
end for


But this still leaves the problem of comparing two Optimized sets in linear
time. So I established a monotonically increasing counter that was
incremented each time the compare routine was called.


function Compare ( Set1, Set2 ) return boolean
        Counter++;
        if Size(Set1) = Size(Set2)
                for each i in Set1
                        Mark(i) = Counter;
                end for
                for each i in Set2
                        if Mark(i) <> Counter
                                return FALSE
                        end if
                end for
                return TRUE
        else
                return FALSE
        end if
end Compare


Since the counter was monotonically increasing, the compares could be done
in linear time and I was saved the bother of reinitializing the Mark array
for each compare (initializing an array of 5000 elements in Ada was not
that cheap). I was using a 16-bit signed integer for the count, mainly
because the Mark array elements had to be that size and I didn't want to
waste space. So I could only do about 32768 compares before I ran out of
counter space. But an empirical estimate suggested that I would run out of
counter space only if the DFA was of the order of 2000 states or more,
which in general is not the case.
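For readers who want it in C, here is a small rendition of that epoch-counter
trick (mine, not the original Ada code): the Mark array is never reinitialized,
and an entry only counts as set if it holds the current counter value. The
names and sizes are illustrative.

#include <stddef.h>

#define MAX_NFA_STATES 5000

static unsigned short mark[MAX_NFA_STATES];   /* one slot per NFA state     */
static unsigned short epoch;                  /* bumped on every comparison */

/* Return 1 if two unsorted, duplicate-free state sets contain the same states. */
int sets_equal(const int *a, size_t na, const int *b, size_t nb)
{
    if (na != nb)
        return 0;
    epoch++;                    /* invalidates every previous mark in O(1) */
    for (size_t i = 0; i < na; i++)
        mark[a[i]] = epoch;
    for (size_t i = 0; i < nb; i++)
        if (mark[b[i]] != epoch)
            return 0;
    return 1;
}

As the text above notes, a 16-bit counter can wrap around after enough
comparisons; widening the counter, or clearing mark[] whenever it wraps,
removes that limit at little cost.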


I don't know if you wanted info this detailed, but this is my experience
in implementing the dragon book's algorithm. I would be happy to tell you
more if you have any questions.




-----------------------------------------------------------------------------
Reply-To: Sven-Olof Nyström <svenolof@csd.uu.se>


This may not be obvious, but algorithm 4.8 in the Dragon book is an
efficient (although specialized) algorithm for NFA to DFA conversion.
Maybe this is what you need.


-----------------------------------------------------------------------------
END OF SUMMARY


Thanks to everyone.
Roberto
--

