Related articles
  Speeding up LEX scanning times      pahint@eunet.be (1995-02-02)
  Re: Speeding up LEX scanning times  dimock@das.harvard.edu (1995-02-02)
  Re: Speeding up LEX scanning times  c1veeru@WATSON.IBM.COM (Virendra K. Mehta) (1995-02-02)
  Re: Speeding up LEX scanning times  monnier@di.epfl.ch (Stefan Monnier) (1995-02-03)
  Re: Speeding up LEX scanning times  mercier@hollywood.cinenet.net (1995-02-03)
  Re: Speeding up LEX scanning times  vern@daffy.ee.lbl.gov (1995-02-04)
  Re: Speeding up LEX scanning times  eifrig@beanworld.cs.jhu.edu (1995-02-07)
Newsgroups: comp.compilers
From: dimock@das.harvard.edu (Allyn Dimock)
Keywords: lex, performance
Organization: Aiken Computation Lab, Harvard University
References: 95-02-010
Date: Thu, 2 Feb 1995 14:44:57 GMT
It is my impression that any simple compiler is going to spend a
substantial percentage of its time in the lexical analysis phase: let's
face it, (readable) source text is not a very compact representation, and
the lexer has to make several decisions per character.
Your 80% sounds high; the figure that was quoted to me as typical was
">25%". Many versions of the lex utility let you choose among several
scanning algorithms that trade off table space against speed, so you
might try requesting a different one. It is my impression that flex in
particular gives you quite a number of algorithms to choose from.
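
For example, flex's -C family of options selects the table
representation (flag meanings as I recall them from the flex
documentation; "scanner.l" is just a placeholder filename, and exact
behavior may vary by flex version):

    flex -Cf  scanner.l   # full, uncompressed tables: fastest scanning, largest tables
    flex -Cem scanner.l   # equivalence + meta-equivalence classes: smallest tables, slower scanning
    flex -CF  scanner.l   # alternate "fast" table representation
    flex -f   scanner.l   # shorthand for full tables with no compression

Timing the generated scanners on your own input is the only reliable
way to pick among these.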
--
Allyn Dimock dimock@das.harvard.edu
Aiken Comp. Lab. #111 w (617) 495-3998
33 Oxford St. h (617) 451-4565
Cambridge, Ma. 02138 249 A St. #63 / Boston, Ma. 02210