Looking for an approach... Incremental Optimization email@example.com (Ray S. Dillinger) (1996-12-07)
Re: Looking for an approach... Incremental Optimization firstname.lastname@example.org (1996-12-09)
From: "Ray S. Dillinger" <email@example.com>
Date: 7 Dec 1996 23:12:52 -0500
Hi. I have a problem.
There is a tradeoff when building a compiler: you can make compilation
screamingly fast, or you can make the machine code screamingly fast.
Not both (though if you code badly, 'neither' is easy...).
This was never a problem for me, as I always chose languages that were
statically compiled -- screamingly fast machine code, with all the
stops pulled out, was the only way I released anything, and *hang*
compile-time considerations; they don't matter to your users at all.
But now I'm dealing with *dynamically* compiled languages (Scheme, to
be exact), where compilation of computed (or user-input) expressions is
part of the runtime. Some of these expressions are executed thousands
of times, and some (many) of them only once.
So, for something that gets run once, the "correct" approach would be
a screamingly fast compile to a relatively slow interpreted bytecode,
with no optimizations other than those you can get for free. But if
the code has already executed ten times, I'd figure it was going to
execute another few dozen times, and spend some effort optimizing it.
And if it has run a hundred times, it might be worth the effort to
optimize it pretty well and then translate it directly into machine
code.
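The counting strategy described above can be sketched as a wrapper
that recompiles an expression at higher tiers as its run count grows.
This is a minimal illustration, not a proposal from the post: the
names (`quick_compile`, `optimizing_compile`, `native_compile`) and
the thresholds are hypothetical, and Python's own `compile`/`eval`
stand in for the three compilation tiers.

```python
# Illustrative thresholds: when to move an expression up a tier.
INTERP_LIMIT = 10    # after this many runs, optimize the bytecode
NATIVE_LIMIT = 100   # after this many runs, compile to machine code

# Stand-ins for the three compilers; a real system would emit naive
# bytecode, optimized bytecode, and native machine code respectively.
def quick_compile(src):
    return lambda env: eval(src, env)          # cheapest compile

def optimizing_compile(src):
    code = compile(src, "<expr>", "eval")      # "optimized" tier
    return lambda env: eval(code, env)

def native_compile(src):
    code = compile(src, "<expr>", "eval")      # "native" tier
    return lambda env: eval(code, env)

class CompiledExpr:
    """Wraps one expression; recompiles itself as its run count grows."""

    def __init__(self, source):
        self.source = source
        self.runs = 0
        self.tier = "bytecode"                 # fast compile, slow code
        self.code = quick_compile(source)

    def execute(self, env):
        self.runs += 1
        # Promote to a higher tier once the counter crosses a threshold.
        if self.tier == "bytecode" and self.runs >= INTERP_LIMIT:
            self.code = optimizing_compile(self.source)
            self.tier = "optimized"
        elif self.tier == "optimized" and self.runs >= NATIVE_LIMIT:
            self.code = native_compile(self.source)
            self.tier = "native"
        return self.code(env)
```

The point of the counter is that recompilation cost is only paid for
expressions that have already proven they are run repeatedly; a
once-only expression never leaves the cheap bytecode tier.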
Are there any references available on implementing this sort of
adaptive behavior? This is looking really hairy, and I'd hate to
reinvent the wheel.
I should mention that for my application the size of the generated
executable is not a critical restriction.
[I'd look at the Smalltalk work. Also, the old HP3000 APL compiler did
something like this, recompiling as it discovered more about the arguments
passed to routines. -John]