Compilers For Typeless Languages Will@cup.portal.com (1989-09-15)
Re: Compilers For Typeless Languages email@example.com (Stephen Adams) (1989-09-23)
Re: Compilers For Typeless Languages plogan@pdx.MENTOR.COM (1989-09-28)
From: Stephen Adams <firstname.lastname@example.org>
Date: Sat, 23 Sep 89 22:34:22 BST
Organization: Southampton University Computer Science
In-Reply-To: Will@cup.portal.com's message of 21 Sep 89 05:12:32 GMT
Compilation is figuring out and doing some things at compile time
(once) so that those things don't have to be done (possibly many times)
at run time.
In his list of reasons for compiling a `typeless language', I think
that Will missed the main source of performance benefit from
compilation, which is that the interpreter's case analysis is done once
at compile time rather than many times at run time. Interpreting
a loop involves re-interpreting the loop body on every iteration. Not so
with compiled code.
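To make the point concrete, here is a minimal sketch (in Python rather than Scheme, and purely illustrative; the node names are mine) of a tree-walking interpreter. The case analysis on node kinds inside eval_expr is repeated on every evaluation, so an expression evaluated in a loop pays the dispatch cost once per iteration.

```python
# Illustrative tree-walking interpreter (not from the post).
# The dispatch on node kind happens every time eval_expr runs, so
# evaluating the same expression in a loop repeats the case analysis.

def eval_expr(node, env):
    kind = node[0]
    if kind == "const":
        return node[1]
    if kind == "var":
        return env[node[1]]
    if kind == "add":
        return eval_expr(node[1], env) + eval_expr(node[2], env)
    if kind == "mul":
        return eval_expr(node[1], env) * eval_expr(node[2], env)
    raise ValueError("unknown node kind: " + kind)

# A + B * C, re-dispatched from scratch on every call:
expr = ("add", ("var", "A"), ("mul", ("var", "B"), ("var", "C")))
total = 0
for i in range(1000):   # the loop body is re-interpreted each time around
    total += eval_expr(expr, {"A": 1, "B": 2, "C": i})
```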
I would suggest that compilation is worthwhile unless the primitive
operations are all *so* expensive that the interpreter is spending
`all' of the time doing them and `no' time interpreting the program.
I wouldn't write a compiler for a language of arithmetic expressions
like A+B*C over huge matrices, but I would for smaller values like
strings, scalars, or values that may be either.
An interesting technique for writing compilers based on the
interpreter was given on comp.lang.scheme some time ago:
> ... This method may be called (in
> a very loose way) the "threaded code" technique, or enclosing
> everything into a lambda closure. The technique is discussed in
> detail in the following paper:
> %A Marc Feeley
> %A Guy LaPalme
> %T Using Closures for Code Generation
> %J Journal of Computer Languages
> %V 12
> %N 1
> %P 47-66
> %I Pergamon Press
> %D 1987
Considering the simplicity of their approach, the authors report
impressive speedups (which I have verified).
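The closure technique the quote describes can be sketched roughly as follows (a loose illustration in Python rather than Scheme; the function and node names are mine, not the paper's). "Compiling" walks the tree once and returns a closure for each node, so the case analysis happens once, and running the resulting closure many times involves no dispatch on node kinds at all.

```python
# Closure-based "compilation" sketch, loosely after the Feeley/LaPalme
# idea (this Python rendering and its names are illustrative).
# compile_expr does the case analysis ONCE, at "compile time", and
# returns a closure; calling the closure later does no dispatch.

def compile_expr(node):
    kind = node[0]
    if kind == "const":
        k = node[1]
        return lambda env: k
    if kind == "var":
        name = node[1]
        return lambda env: env[name]
    if kind == "add":
        left, right = compile_expr(node[1]), compile_expr(node[2])
        return lambda env: left(env) + right(env)
    if kind == "mul":
        left, right = compile_expr(node[1]), compile_expr(node[2])
        return lambda env: left(env) * right(env)
    raise ValueError("unknown node kind: " + kind)

# Compile A + B * C once, then run the closure many times:
code = compile_expr(("add", ("var", "A"), ("mul", ("var", "B"), ("var", "C"))))
total = 0
for i in range(1000):   # only closure calls here, no case analysis
    total += code({"A": 1, "B": 2, "C": i})
```

Note how each closure captures the pre-compiled closures for its subexpressions, so the whole tree walk is paid for exactly once.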
[From Stephen Adams <email@example.com>]