Syntax Directed Test Generation

Dave Lloyd <Dave@occl-cam.demon.co.uk>
4 May 1997 22:44:56 -0400


From: Dave Lloyd <Dave@occl-cam.demon.co.uk>
Newsgroups: comp.compilers
Date: 4 May 1997 22:44:56 -0400
Organization: Compilers Central
References: 97-04-159
Keywords: syntax, testing, parse

Chuck Swart <cswart@glacier.analogy.com> wrote:
> I am interested in the following problem: Given a grammar in BNF (or
> perhaps EBNF) automatically generate a set of test cases which will
> cause all productions in the grammar to be used when these test cases
> are parsed.


A simple approach is to use the grammar as a template to generate
random tests. I don't know of any tools to automate this, but the
principle is straightforward: traverse the grammar, randomly choosing
between alternatives. You can bias the random choices to ensure that
every alternative is tried within a reasonable number of tests. BNF
grammars are rather weak for real languages, so you may want to record
some context with each scope (so that, for example, every identifier
applied has been declared). Random selection gives a better
distribution of test cases than a simple combinatorial expansion,
which excludes rules once they have been applied. You can also use a
random generator to 'soak test' a parser.


You can probably knock together a basic random program generator in an
afternoon (Python, with its good string and list handling, is very
convenient for this sort of thing).
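As a rough illustration of the afternoon's work, here is a minimal sketch in Python: a random sentence generator driven by a toy expression grammar. The grammar, the `max_depth` cutoff, and the shortest-rule bias are all my own assumptions, not anything from the post; the bias simply forces the expansion to terminate.

```python
import random

# Toy BNF-style grammar (hypothetical example): each nonterminal maps
# to a list of alternatives; each alternative is a list of symbols.
GRAMMAR = {
    "expr":   [["term"], ["term", "+", "expr"]],
    "term":   [["factor"], ["factor", "*", "term"]],
    "factor": [["NUM"], ["(", "expr", ")"]],
}

def generate(symbol, depth=0, max_depth=8):
    """Traverse the grammar, randomly choosing between alternatives.
    Past max_depth, bias the choice toward the shortest alternative
    so that the expansion is guaranteed to terminate."""
    if symbol not in GRAMMAR:                # terminal symbol
        return str(random.randint(0, 9)) if symbol == "NUM" else symbol
    alts = GRAMMAR[symbol]
    if depth >= max_depth:
        alt = min(alts, key=len)             # bias: pick the shortest rule
    else:
        alt = random.choice(alts)
    return " ".join(generate(s, depth + 1, max_depth) for s in alt)

print(generate("expr"))
```

Weighting `random.choice` (e.g. with `random.choices` and per-alternative weights) is where you would add the bias toward rarely-exercised productions that the post describes.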


Harder to generate, I think, are good erroneous examples to test error
recovery. A random generator that breaks rules every so often is not
bad, and can find some surprising holes, but it is no substitute for
real broken source from users.
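One cheap way to "break rules every so often" is to mutate the token stream of a generated test before feeding it to the parser. The sketch below is a hypothetical helper (the mutation kinds and `error_rate` parameter are my assumptions): it randomly drops, duplicates, or injects junk tokens.

```python
import random

def break_tokens(tokens, error_rate=0.05):
    """Randomly corrupt a token stream to exercise a parser's error
    recovery.  With probability error_rate a token is dropped; with
    the same probability it is duplicated or followed by junk."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < error_rate:                 # drop this token entirely
            continue
        out.append(tok)
        if r < 2 * error_rate:             # duplicate it
            out.append(tok)
        elif r < 3 * error_rate:           # insert a junk token after it
            out.append(random.choice([")", "+", "@"]))
    return out
```

Feeding both the clean and the corrupted stream through the parser and checking that it reports an error (rather than crashing or looping) makes a serviceable soak test for error recovery.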


Cheers,
----------------------------------------------------------------------
Dave Lloyd mailto:Dave@occl-cam.demon.co.uk
Oxford and Cambridge Compilers Ltd http://www.occl-cam.demon.co.uk/
Cambridge, England http://www.chaos.org.uk/~dave/