Parser validation/test suites ?



From: Kenn Heinrich <kennheinrich@sympatico.ca>
Newsgroups: comp.compilers
Date: 9 Aug 2006 00:08:09 -0400
Organization: Compilers Central
Keywords: testing, question
Posted-Date: 09 Aug 2006 00:08:09 EDT

A question perhaps more software engineering than compiler theory, but
here goes:


How do people construct a good validation and regression test system
for a compiler, in particular a parser module? I'm thinking along the
lines of a set of test source programs and expected results. Some
kind of script automagically runs all the sources and produces a
simple log file (Web page, ...?) that could be checked into version
control, so the test results are tracked right along with the
compiler source.
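
For concreteness, here's a minimal sketch of the kind of driver I'm
imagining; the compiler name "mycc", the tests/pass layout, and the
log format are all placeholders, not anything that exists:

    #!/usr/bin/env python3
    # Minimal regression driver: run every "must pass" source through
    # the compiler and record PASS/FAIL in a plain-text log.
    import glob, subprocess, sys

    failures = 0
    with open("results.log", "w") as log:
        for src in sorted(glob.glob("tests/pass/*.vhd")):
            r = subprocess.run(["mycc", src], capture_output=True)
            ok = (r.returncode == 0)
            log.write("%-50s %s\n" % (src, "PASS" if ok else "FAIL"))
            failures += not ok
    sys.exit(1 if failures else 0)

Keeping the output deterministic (sorted order, fixed columns) means
a diff against a checked-in baseline log shows exactly which tests
changed from run to run.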


How might this set be organized? By directories, with a suite that
must pass, a suite that must fail, a suite that tests optional
extensions, etc.?
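
For instance, one layout I've been considering (all the directory and
file names here are invented):

    tests/
      pass/         -- every file must compile cleanly
      fail/
        E042/       -- every file must fail with error E042
      ext/          -- optional language extensions; may be skipped
      run-tests.py  -- a driver script like the sketch above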


And how would you best indicate pass or fail? I've seen systems that
use special embedded comments meant to match the compiler's output,
parsed out and checked by a test-suite dispatcher script, as well as
systems that organize "good" sources in one directory and "should
fail with error XXX" sources in a directory named "XXX". What are
some of the other schemes the masters recommend?
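
To make the embedded-comment scheme concrete, here's a rough sketch
of what the dispatcher side might look like; the "-- ERROR:" marker
and the compiler's file:line diagnostic format are assumptions of
mine, not anything standard:

    # Pull "-- ERROR: message" annotations out of a VHDL source file
    # (VHDL comments start with --) and check that the compiler's
    # stderr mentions each one at the right line.  "mycc" is a
    # placeholder for the real compiler.
    import re, subprocess

    def check_expected_errors(src):
        expected = []                    # (line number, message) pairs
        with open(src) as f:
            for n, line in enumerate(f, 1):
                m = re.search(r"--\s*ERROR:\s*(.+)", line)
                if m:
                    expected.append((n, m.group(1).strip()))
        diag = subprocess.run(["mycc", src],
                              capture_output=True, text=True).stderr
        return all("%s:%d" % (src, n) in diag and msg in diag
                   for n, msg in expected)

The appeal of this scheme is that the expectation sits right next to
the construct it tests, so the test file documents itself.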


And what types of source ought to go into a good test suite? A set of
small programs, each exercising a small section of the grammar or
semantic checker? Or one big program filled with nuance and subtlety?


How about grammar testing? I know there's a trick of introducing a
special keyword or token into the token stream to allow parsing from
arbitrary productions (any production = goal). Has anyone tried
building a "grammar production tester" that lets you run a bunch of
tests of one production in isolation? For example, a set of files
containing only <expr> or <stmt> text, to avoid repeatedly writing
the boilerplate of a complete, legal top-level program just to check
a simple statement.
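
To make that concrete, here's a toy sketch of the trick. In a yacc
grammar the real thing would be extra goal alternatives
(goal : MAGIC_EXPR expr | MAGIC_STMT stmt | design_file ;); the
little arithmetic parser below is just a stand-in for a real one:

    # Toy illustration of the "magic start token" trick: the harness
    # prepends a synthetic token that selects which production to parse.

    def parse_expr(toks):
        # expr : NUM ('+' NUM)*   -- stand-in for a real <expr> rule
        total = int(toks.pop(0))
        while toks and toks[0] == "+":
            toks.pop(0)
            total += int(toks.pop(0))
        return ("expr", total)

    def parse(toks):
        if toks and toks[0] == "MAGIC_EXPR":
            return parse_expr(toks[1:])
        raise SyntaxError("only the fragment entry point is stubbed here")

    def parse_fragment(kind, text):
        magic = {"expr": "MAGIC_EXPR"}[kind]
        return parse([magic] + text.split())

    assert parse_fragment("expr", "1 + 2 + 3") == ("expr", 6)

Since the lexer can never produce MAGIC_EXPR from real input, the
extra goal alternatives can't change what ordinary programs parse to.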


The background for this is a VHDL parser written in ML, which is
basically a set of workarounds for a set of workarounds for a set of
almost-yaccable grammar productions. Every fix to the parsing and
analysis phase re-breaks the last bug fix, so automated regression
testing is a necessity.


Cheers,


  - Kenn

