Related articles:
Unit testing a compiler  dido@imperium.ph (Rafael R. Sevilla) (2013-03-13)
Re: Unit testing a compiler  cr88192@hotmail.com (BGB) (2013-03-13)
Re: Unit testing a compiler  bobduff@shell01.TheWorld.com (Robert A Duff) (2013-03-13)
Re: Unit testing a compiler  walter@bytecraft.com (Walter Banks) (2013-03-19)
From: BGB <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Wed, 13 Mar 2013 02:59:59 -0500
Organization: albasani.net
References: 13-03-009
Keywords: testing, debug
Posted-Date: 13 Mar 2013 09:24:34 EDT
On 3/12/2013 10:50 PM, Rafael R. Sevilla wrote:
> I am currently writing a simple byte compiler for a simple dialect of
> Lisp, and am wondering what is the best practice for testing such a
> compiler. It is certainly possible to run the compiler on some test
> code and then compare the code it generates with some reference code,
> but as the code samples become more complicated that rapidly becomes
> unwieldy. Would it instead be better to test the compiler by actually
> running the code it generates, and then comparing the results with what
> the code snippet is supposed to evaluate to? Any other ideas on how to
> go about testing the code generator? What about when optimizations are
> being performed?
>
(Mostly writing from personal experience.)

Generally, I think it is better to test that the output code runs, exhibits the expected behavior, and produces the expected values, rather than to test that the output matches some specific pattern.

For example, if the generated code throws an exception, runs into an infinite loop, or produces bad values, then the test can be considered to have failed.
In this sense, the generated code is to some extent a "black box" as far as the tests are concerned; the tests only care that it behaves correctly. A rough sketch of such a harness follows.
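For concreteness, here is a minimal sketch of such a harness in Python; compile_and_run is a hypothetical hook standing in for whatever entry point the compiler and VM actually expose:

import multiprocessing

def compile_and_run(source):
    # placeholder hook: compile 'source' with the byte compiler and
    # evaluate it on the VM; swap in the real entry point here
    raise NotImplementedError

def _worker(q, source):
    try:
        q.put(("ok", compile_and_run(source)))
    except Exception as e:
        q.put(("exception", repr(e)))

def run_case(source, expected, timeout=5.0):
    # run the generated code in a child process, so an exception, an
    # infinite loop, or a hard crash cannot take down the harness itself
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(q, source))
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()              # still running: treat as a hang
        return ("hang", None)
    try:
        status, value = q.get(timeout=1.0)
    except Exception:
        return ("crash", None)     # child died without reporting back
    if status == "exception":
        return ("exception", value)
    return ("pass", value) if value == expected else ("fail", value)

# e.g. run_case("(+ 1 2)", 3) should come back as ("pass", 3) once
# compile_and_run is wired up to the real compiler and VM.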
The tests may also include benchmarks, and any generated code (such as bytecode and/or ASM) can additionally be dumped to a log so that it can be looked over if needed.
IME: determining whether the generated code "makes sense" is generally something that requires human involvement. It may also make sense to dump out the data from each stage of the process, to make it easier to figure out at which point things went wrong (is the bug in the parser, the bytecode compiler, the JIT, or somewhere else?). A sketch of this kind of per-stage dumping follows.
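A rough sketch of per-stage dumping, again in Python; parse, compile_to_bytecode, and jit_compile are stand-in names for whatever the pipeline stages are actually called:

import logging

logging.basicConfig(filename="compile_trace.log", level=logging.DEBUG)

def parse(source):             # placeholder: your parser
    raise NotImplementedError

def compile_to_bytecode(ast):  # placeholder: your bytecode compiler
    raise NotImplementedError

def jit_compile(bytecode):     # placeholder: your JIT (if any)
    raise NotImplementedError

def traced(stage_name, fn, arg):
    # run one pipeline stage and dump its output to the log; when a test
    # fails, the first stage whose dump looks wrong is where the bug lives
    result = fn(arg)
    logging.debug("=== %s ===\n%r", stage_name, result)
    return result

def compile_traced(source):
    ast = traced("parser", parse, source)
    bytecode = traced("bytecode-compiler", compile_to_bytecode, ast)
    return traced("jit", jit_compile, bytecode)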
Granted, there are a few possible drawbacks:
edge cases or "cracks" where bugs can hide, generally because a specific feature is not tested sufficiently, or possibly not at all;
being lazy and putting off running the tests for too long, and then bugs creep in.
Another drawback is that the more complex a language or compiler becomes, the more exhaustive the tests need to be in order to detect bugs. While this does not necessarily mandate minimalism, it is a reason to refrain from throwing too many extra or unnecessary features onto a language (a feature that seems like a good idea at one point may later turn out not to have been such a good idea).
Also, it makes sense to make sure things work before worrying too much about optimizing them. Trying to optimize buggy or broken code often leads to a mess that is difficult to debug or fix; it is generally better to have a naive implementation than a broken one. The naive path also gives you something to test the optimizer against later, as in the sketch below.
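A small sketch of that idea: run each test case through both the unoptimized and the optimized path and compare. This assumes a compile_and_run entry point taking an 'optimize' flag, which may or may not match how a given compiler is arranged:

def check_optimizer(cases, compile_and_run):
    # treat the unoptimized path as the reference implementation; any
    # divergence between the two points at a bug in some optimization
    failures = []
    for source in cases:
        ref = compile_and_run(source, optimize=False)
        opt = compile_and_run(source, optimize=True)
        if ref != opt:
            failures.append((source, ref, opt))
    return failures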
Granted, my language isn't really all that simple at this point, having a fair amount of syntactic and semantic similarity to "mainstream OO languages". IOW: C-like syntax, class/instance, more-or-less statically typed (though it still uses tagged references), along with a lot of script-language features as well.