Re: What should be check in Lexical Analyzer along with generating tokens?

"VBDis" <vbdis@aol.com>
19 Sep 2002 01:11:44 -0400

          From comp.compilers


From: "VBDis" <vbdis@aol.com>
Newsgroups: comp.compilers
Date: 19 Sep 2002 01:11:44 -0400
Organization: AOL Bertelsmann Online GmbH & Co. KG http://www.germany.aol.com
References: 02-09-087
Keywords: lex
Posted-Date: 19 Sep 2002 01:11:44 EDT

<vikramasanjeeva@hotmail.com> writes:


>The primary job of a Lexical Analyzer is to generate tokens. But what
>other functionality is added to a Lexical Analyzer in order to make it
>efficient?


You can add whatever is meaningful in a specific project (compiler...).
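
For illustration, here is a minimal Python sketch (all names are made up)
of extras a lexer often carries besides emitting tokens: line/column
tracking for error messages, skipping whitespace and comments, and
distinguishing keywords from identifiers by table lookup.

import re
from collections import namedtuple

Token = namedtuple("Token", "kind text line col")

KEYWORDS = {"if", "else", "while"}      # keyword lookup instead of extra regexes
TOKEN_RE = re.compile(r"""
      (?P<ws>\s+)                       # whitespace: skipped, but newlines counted
    | (?P<comment>\#[^\n]*)             # comments: skipped here
    | (?P<num>\d+)
    | (?P<id>[A-Za-z_]\w*)
    | (?P<op>[+\-*/=();])
""", re.VERBOSE)

def tokenize(src):
    line, col, pos = 1, 1, 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character {src[pos]!r} at {line}:{col}")
        text, kind = m.group(), m.lastgroup
        if kind not in ("ws", "comment"):
            if kind == "id" and text in KEYWORDS:
                kind = "kw"
            yield Token(kind, text, line, col)
        # position bookkeeping is a typical "extra" duty of the lexer
        newlines = text.count("\n")
        if newlines:
            line += newlines
            col = len(text) - text.rfind("\n")
        else:
            col += len(text)
        pos = m.end()

print(list(tokenize("if x1 = 42  # answer\nelse y")))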


I found an interesting approach, a "scannerless parser", in the
Stratego project
<http://www.program-transformation.org/twiki/bin/view/Transform>. There
the scanner is a parser for specific "input tokens", i.e. for
characters. The rules for composing higher-level tokens (symbols,
literals...) are part of the overall grammar, so that no explicit
separation between the tasks of the scanner and the parser is
required.
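
A toy Python sketch of that idea (not Stratego/SDF itself): the parser
consumes single characters, and the rules for identifiers and layout are
ordinary grammar rules sitting next to the rules for expressions.

class Scannerless:
    def __init__(self, src):
        self.src, self.pos = src, 0

    def peek(self):
        return self.src[self.pos] if self.pos < len(self.src) else ""

    def eat(self, ch):
        if self.peek() != ch:
            raise SyntaxError(f"expected {ch!r} at {self.pos}")
        self.pos += 1

    # layout is just another grammar rule, referenced wherever layout may occur
    def layout(self):
        while self.peek().isspace():
            self.pos += 1

    # lexical syntax: this would be a scanner rule in a traditional compiler
    def identifier(self):
        start = self.pos
        while self.peek().isalnum() or self.peek() == "_":
            self.pos += 1
        if start == self.pos:
            raise SyntaxError(f"identifier expected at {start}")
        return ("id", self.src[start:self.pos])

    # context-free syntax: expr ::= identifier ("+" identifier)*
    def expr(self):
        self.layout()
        node = self.identifier()
        self.layout()
        while self.peek() == "+":
            self.eat("+")
            self.layout()
            node = ("+", node, self.identifier())
            self.layout()
        return node

print(Scannerless("foo + bar_2 + baz").expr())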


Of course this approach is not as runtime-efficient as the separately
optimized scanners and parsers of traditional compilers, but it is very
useful for other tools. Do I hear somebody shout "UNCOL"? Yes, the
Stratego project is concerned with handling and transforming source
code of various languages, where the usually ignored whitespace,
comments, and other formatting information are of special concern.
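
One simple way (a sketch, not how Stratego does it) for such a tool to
keep the formatting a compiler's lexer would throw away is to emit
whitespace and comments as ordinary tokens, so the original text can be
reproduced exactly by concatenating the token texts:

import re

LAYOUT_RE = re.compile(r"\s+|//[^\n]*")
WORD_RE = re.compile(r"\w+|.")

def tokenize_preserving(src):
    pos, out = 0, []
    while pos < len(src):
        m = LAYOUT_RE.match(src, pos)
        if m:
            out.append(("layout", m.group()))   # kept, not discarded
        else:
            m = WORD_RE.match(src, pos)
            out.append(("token", m.group()))
        pos = m.end()
    return out

toks = tokenize_preserving("x = 1;  // keep me\ny = 2;")
assert "".join(text for _, text in toks) == "x = 1;  // keep me\ny = 2;"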


DoDi

