From: Barry Kelly <email@example.com>
Date: Tue, 22 Jun 2010 22:22:45 +0100
Keywords: Pascal, design, testing
Posted-Date: 23 Jun 2010 09:57:57 EDT
> Say I have written a hand-crafted lexer/parser/code gen... to make a
> complete compiler. The question is how to test it? Users can write
> their programs in millions of possible ways (with many different
> kinds of syntax errors), so it is difficult to test all the possible
> cases. Is there any good way to test a compiler? How did the
> big guys (MS/Borland...) test their compilers? Thanks.
I can speak a little to how Borland tested their compiler, as I now help
maintain Delphi (and worked on it at Borland).
1) A large corpus of code which is expected to compile, developed with
continuous integration. If you break the compiler, the whole build tree
(including the IDE etc.) will likely fail, or one of its tests will.
This only checks the good case, of course.
2) Testing tools which feed code to the compiler, expecting either a
compile failure or success, and then run the resulting program (on
success) and check for expected output.
3) A large corpus of tests for such tools built up over the years, from
three sources: compiler developers, QA, and bug reports.
4) Automating all this and running it continuously, to discover
regressions as soon as possible.
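The feed-and-check tooling in point 2 can be sketched as a small harness
parameterized over the compiler under test. This is illustrative only,
not Borland's actual tooling; here Python's own `compile`/`exec` stand in
for the compiler and the compiled program:

```python
import io
import contextlib

def run_case(compile_fn, run_fn, source, expect_ok, expected_output=None):
    """One test case: feed `source` to the compiler, check whether
    compilation succeeds or fails as expected, and on success run the
    program and compare its captured output."""
    try:
        program = compile_fn(source)
    except Exception:
        return not expect_ok      # compile error: pass iff one was expected
    if not expect_ok:
        return False              # compiled, but a failure was expected
    if expected_output is None:
        return True
    return run_fn(program) == expected_output

# Stand-in "compiler" and "runner" using Python itself.
def py_compile(src):
    return compile(src, "<case>", "exec")

def py_run(code):
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

run_case(py_compile, py_run, "print('hi')", True, "hi\n")  # True
run_case(py_compile, py_run, "print(", True)               # False: syntax error
run_case(py_compile, py_run, "print(", False)              # True: error expected
```

A real harness would shell out to the compiler and diff the program's
stdout, but the pass/fail logic is the same.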
Sometimes it comes down to combinatorial testing. To elicit overload
resolution abnormalities, for example, I wrote a tool which generated
overloaded declarations and expressions with different kinds of types
(e.g. object types like Animal, Feline, Lion, Bird, Stool, interfaces
like IFourLegged) and then exhaustively constructed a sorted order to
determine which overloads were preferred by what kinds of arguments, and
which ones led to an ambiguity error.
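A minimal sketch of such a combinatorial generator, with hypothetical
names (the real tool emitted Delphi declarations and drove the compiler
over every combination):

```python
from itertools import combinations

# Type names mirroring the post's example hierarchy and interface.
TYPES = ["Animal", "Feline", "Lion", "Stool", "IFourLegged"]

def overload_cases():
    """Yield every (pair of overload parameter types, call-site argument
    type): one generated program per combination."""
    for params in combinations(TYPES, 2):   # two competing overloads
        for arg in TYPES:
            yield params, arg

def emit_case(params, arg, n):
    """Emit a Delphi-flavoured snippet for one case (illustrative text,
    not checked against a real Delphi front end)."""
    decls = "\n".join(f"procedure P(x: {t}); overload;" for t in params)
    return f"// case {n}\n{decls}\nbegin P({arg}Value); end."

cases = list(overload_cases())   # 10 overload pairs x 5 arguments = 50 cases
```

Compiling each emitted case and recording which overload was chosen (or
whether an ambiguity error was raised) gives the exhaustive preference
order described above.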
Eric Lippert (a C# compiler dev for MS) wrote a recent series of blog
posts about enumerating all possible sentences for a particular grammar.
Obviously, that's an infinite set, but it's possible to pull random
cases out of it. Add in some fuzz testing (i.e. deliberate corruption of
that input), and you can also look for expected errors that aren't
reported.
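The sentence-drawing-plus-corruption idea can be sketched with a toy
grammar (not the grammar from Lippert's posts; names and productions
here are invented for illustration):

```python
import random

# A tiny expression grammar; symbols absent from the table are terminals.
GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["(", "expr", ")"], ["num"]],
    "num":  [["1"], ["2"], ["3"]],
}

def gen(symbol="expr", rng=random, depth=0):
    """Draw one random sentence. Past a depth limit, always take the
    last (non-recursive) production so generation terminates."""
    if symbol not in GRAMMAR:
        return symbol
    prods = GRAMMAR[symbol]
    prod = prods[-1] if depth > 6 else rng.choice(prods)
    return "".join(gen(s, rng, depth + 1) for s in prod)

def fuzz(sentence, rng=random):
    """Corrupt one character: the compiler under test should report a
    diagnostic, not crash or accept the input silently."""
    i = rng.randrange(len(sentence))
    return sentence[:i] + rng.choice("+()#") + sentence[i + 1:]
```

Every uncorrupted sentence is a valid input (the compiler must accept
it); each fuzzed variant probes the error paths.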