Initialise tokens in bison/flex




From: Torsten Rupp <rupp@fzi.de>
Newsgroups: comp.compilers
Date: 9 Mar 2002 03:16:01 -0500
Organization: University of Karlsruhe, Germany
Keywords: parse, yacc, question
Posted-Date: 09 Mar 2002 03:16:00 EST

Hello,


Does anybody know how I can handle resource allocation and deallocation
for token values in bison/flex? For example, allocating a dynamic array
when scanning a token and freeing that memory after its last use. Is
there an interface for this in bison or flex? How should one handle
tokens of potentially unlimited size (e.g. comments, which flex should
return as well)? Allocation can be done manually in the scanner action
and freeing in the grammar rules, but is there some other method? The
parser knows which tokens are currently in use, so it could call some
function token_init() before a token is created and some function
token_done() after the token is released. Can this be implemented, and
how? Any help is welcome.
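[For readers of the archive: Bison versions released after this post do
provide an interface close to what is asked for here, the %destructor
directive. A minimal sketch follows; the token names STRING and COMMENT
are placeholders, not from the original post:

```yacc
%union {
    char *str;
}
%token <str> STRING COMMENT

/* Bison runs this action whenever a STRING or COMMENT value is
   discarded without being used, e.g. during error recovery, so a
   strdup() in the scanner is matched by exactly one free() here. */
%destructor { free($$); } STRING COMMENT
```

This covers the error-recovery case that manual free() calls in grammar
rules miss, since discarded tokens never reach any rule's action.]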


Torsten
--
Computer Science Research Center (FZI) phone : +49-721-9654-330
Mobility Management and Robotics fax : +49-721-9654-309
Haid-und-Neu-Str. 10-14 e-mail: rupp@fzi.de
D-76131 Karlsruhe (Germany) www : www.fzi.de/robot
[lex, yacc, and their variants don't handle this very well, particularly
if you try to do error recovery. My usual approach is to chain all the
allocated storage together, then zip through after each parse and give
it all back. -John]


