From: "Joachim Durchholz" <joachim_d@gmx.de>
Newsgroups: comp.compilers
Date: 1 Nov 2000 18:49:54 -0500
Organization: Compilers Central
References: 00-10-061 00-10-067 00-10-093 00-10-109 00-10-130 00-10-193 00-10-209 00-10-221
Keywords: parse
Posted-Date: 01 Nov 2000 18:49:54 EST
Randall Hyde <rhyde@cs.ucr.edu> wrote:
>
> (1) I would prefer to write my lexer in assembly language (gasp!).
> HLA spends the vast majority of its time in lexical analysis and
> due to the interpreter I've integrated into the compiler (producing
> what I call the "compile-time language") there is a very incestuous
> relationship between the parser and lexer (the parser calls the
> lexer, which recursively calls the parser, which recursively calls
> the lexer, etc.).
Er... are you sure that this mutual recursion is bounded? Many systems
have surprisingly small stacks.
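To make the concern concrete, here is a minimal sketch (my own, not HLA's actual code) of a mutually recursive parser/lexer pair with an explicit depth guard, so a runaway recursion fails with a clear error instead of overflowing the stack. The names and the limit are assumptions for illustration:

```java
// Hypothetical sketch: a parser and a lexer that call each other
// recursively (as described for HLA's compile-time language), with a
// depth counter so the mutual recursion is explicitly bounded.
public class RecursionGuard {
    static final int MAX_DEPTH = 1000; // assumed limit; tune to your stack size
    static int depth = 0;

    static void enter() {
        if (++depth > MAX_DEPTH)
            throw new IllegalStateException("parser/lexer recursion too deep");
    }

    static void leave() { depth--; }

    // Toy "lexer": on '(' it recursively invokes the parser.
    static int lex(String src, int pos) {
        enter();
        try {
            if (pos < src.length() && src.charAt(pos) == '(')
                return parse(src, pos + 1);   // lexer calls parser
            return pos + 1;                   // consume one ordinary character
        } finally {
            leave();
        }
    }

    // Toy "parser": consumes tokens until the matching ')'.
    static int parse(String src, int pos) {
        enter();
        try {
            while (pos < src.length() && src.charAt(pos) != ')')
                pos = lex(src, pos);          // parser calls lexer
            return pos + 1;                   // skip the ')'
        } finally {
            leave();
        }
    }

    public static void main(String[] args) {
        // Parses the nested groups and returns the position past the input.
        System.out.println(parse("(()())", 1));
    }
}
```

The same guard pattern works whatever the real recursion looks like; the point is only that the bound is checked somewhere, since many systems default to quite small stacks.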
> (3) Being written in Java concerns me. What is the performance like?
Java in general is relatively slow. Whether it's a problem for your
project I cannot say; I think most current-day compilers spend most of
their time outside of scanning and parsing anyway, and your compiler
seems to be in that category, so it's probably not a big issue.
If performance is still a problem, you might want to look for a
Java-to-C (or JVM-to-C) compiler. There are also some commercial tools
that precompile JVM to native code. (I don't have any references handy,
sorry.)
> Parsing consumes a tiny fraction of the time compared with scanning
> (at least, in HLA).
This is a general observation. Converting character sequences to tokens reduces the volume of data to be handled by roughly an order of magnitude, and unless the parser backtracks or does anything else that requires more than linear time, it is actually difficult to build a parser that is slower than the lexer.
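A quick back-of-the-envelope illustration of that reduction (my own toy example, using naive whitespace splitting rather than a real scanner): a typical source line of a few dozen characters yields under ten tokens, so the parser iterates over far fewer items than the lexer does.

```java
// Rough illustration of the data reduction from lexing: a source line
// of N characters becomes a much shorter sequence of tokens, so the
// parser's per-item work runs over roughly an order of magnitude
// less data than the lexer's per-character work.
public class TokenRatio {
    public static void main(String[] args) {
        // Assumed sample line; tokens here are already whitespace-separated
        // so a simple split stands in for a real scanner.
        String line = "result := base_address + index * element_size ;";
        String[] tokens = line.trim().split("\\s+");
        System.out.println(line.length() + " chars -> " + tokens.length + " tokens");
    }
}
```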
HTH
Regards,
Joachim