Speeding up LEX scanning times email@example.com (1995-02-02)
Re: Speeding up LEX scanning times firstname.lastname@example.org (1995-02-02)
Re: Speeding up LEX scanning times c1veeru@WATSON.IBM.COM (Virendra K. Mehta) (1995-02-02)
Re: Speeding up LEX scanning times email@example.com (Stefan Monnier) (1995-02-03)
Re: Speeding up LEX scanning times firstname.lastname@example.org (1995-02-03)
Re: Speeding up LEX scanning times email@example.com (1995-02-04)
Re: Speeding up LEX scanning times firstname.lastname@example.org (1995-02-07)
From: email@example.com (Pieter Hintjens)
Keywords: lex, question, comment
Organization: EUnet Belgium, Leuven, Belgium
Date: Thu, 2 Feb 1995 06:16:59 GMT
I'm writing a Cobol parser, using MKS Lex and Yacc. So far so good.
However, on seriously large programs, it is quite slow. When I profiled
the code, I noticed that about 80% of the time was in the Lex scanner.
Now, I found that the standard C buffered file functions (fread) are
a lot slower than the non-standard read() functions, so I shaved off
some time by using those where the compiler supports them.
However, I still find that the scanner is slow. I don't think I made
any mistakes; for instance, all keywords are identified by looking up
a table, rather than being matched as individual scanner patterns.
So my question is: should I consider writing the scanner by hand,
now that I have a working prototype? If so, are there any techniques
I should be aware of?
Pieter A. Hintjens
[I believe that flex is a lot faster than other versions of lex. Try that.]