From: email@example.com (David Keppel)
Organization: Computer Science & Engineering, U. of Washington, Seattle
Date: Fri, 14 Feb 92 06:15:49 GMT
In 92-02-067 the moderator (John Levine) writes:
>[It surprises me that IBM does more APL interpreters, no compilers.]
Compiling APL generally requires dynamic compilation, which is deprecated
by IBM. Note that [Johnston 89] compiles to p-code and then interprets
that, because the target architecture/OS requires an OS entry to do dynamic
loading of generated code.
;-D oN ( Compiling--and recompiling--APL ) Pardo
%A Thomas W. Christopher
%A Ralph W. Wallace
%T Compiling Optimized Array Operations at Run-Time
%J APL 86 Conference Proceedings
%C Manchester, England
%D July 1986
%X Compiling APL is hard because of lack of static type information.
Three mechanisms for execution were compared: naive interpreter, unrolled
threaded interpreter, dynamic compilation. The dynamic compiler takes a
template with ``holes'', fills in the holes, and executes. (Presumably
the template is reused each time, rather than alloc/copy/free). Sample:
subtraction of <scalar,scalar>, <scalar,5x5x5x5x5>, and
<5x5x5x5x5,5x5x5x5x5>. The dynamic compiler was slower in only one case,
and then by only 1%. In all other cases it was the same as or better than
the threaded interpreter: about 2.5X faster for operations involving
floating point (float sub float and int sub float), and about 4X faster
for integer-only operations.
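The template-with-holes scheme above can be sketched as follows (a minimal
Python illustration of the idea; the names and the use of Python source
strings are my assumptions, not the paper's mechanism, which generated
machine-level code for APL primitives):

```python
# Sketch of template-based dynamic compilation: a code template with
# "holes" (here, the operator) is filled in at run time, compiled once,
# and the compiled function is cached and reused rather than
# alloc/copy/free'd on each call.
TEMPLATE = """
def kernel(a, b):
    # elementwise {op} over flat arrays of equal length
    return [x {op} y for x, y in zip(a, b)]
"""

_cache = {}  # filled-in templates, reused across calls

def get_kernel(op):
    if op not in _cache:
        src = TEMPLATE.format(op=op)
        ns = {}
        exec(compile(src, "<template>", "exec"), ns)
        _cache[op] = ns["kernel"]
    return _cache[op]

sub = get_kernel("-")
print(sub([5, 7, 9], [1, 2, 3]))  # [4, 5, 6]
```

The cache is what makes this pay off: the compilation cost is amortized
over every later use of the same filled-in template.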
%A Leo J. Guibas
%A Douglas K. Wyatt
%T Compilation and Delayed Evaluation in APL
%J Fifth Annual ACM Symposium on Principles of Programming Languages
%D January 1978
%X Transformation techniques for analyzing and compiling APL. Result is
compiled to bytecodes. The generated code has a preamble to check for
failed assumptions. If the assumptions fail, the compiler is reinvoked
with new assumptions.
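The preamble-and-reinvoke pattern can be sketched like this (a hypothetical
Python illustration; Guibas and Wyatt compiled to bytecodes, not Python, and
the names here are mine):

```python
# Sketch of assumption-guarded compiled code: each compiled version
# carries a preamble that checks the assumptions it was compiled under;
# when a check fails, the "compiler" is reinvoked with new assumptions.
class AssumptionFailed(Exception):
    def __init__(self, new_len):
        self.new_len = new_len

def compile_add(assumed_len):
    # code specialized for one operand length (the assumption)
    def code(a, b):
        if len(a) != assumed_len or len(b) != assumed_len:  # preamble
            raise AssumptionFailed(len(a))
        return [a[i] + b[i] for i in range(assumed_len)]
    return code

compiled = compile_add(3)  # initial assumption: length-3 vectors

def add(a, b):
    global compiled
    try:
        return compiled(a, b)
    except AssumptionFailed as e:
        compiled = compile_add(e.new_len)  # recompile, new assumption
        return compiled(a, b)

print(add([1, 2, 3], [4, 5, 6]))  # fast path: [5, 7, 9]
print(add([1, 2], [3, 4]))        # preamble fails, recompiles: [4, 6]
```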
%A Ronald L. Johnston
%T The Dynamic Incremental Compiler of APL\3000
%I Association for Computing Machinery (ACM)
%J APL Quote Quad
%D June 1989
%X Compiles APL on a statement-by-statement basis. Compiles as needed,
caches and reuses code.
* Uses ``drag along'' and ``beating'' to optimize.
* APL dynamic compilation has been used before but only naively.
* Compiled code has prologue tests. If tests fail, compiles a
fragment of suboptimal/general code.
* Each compilation is source->tree->foliated tree->e-code.
* Target machine (HP 3000) has strict I/D separation. Using
loader/linker is ``too slow to be practical for the dynamic
compiler'', so code is compiled to a p-code and interpreted.
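Note how Johnston's fallback differs from full recompilation: when a
prologue test fails, control drops into a suboptimal/general fragment
rather than reinvoking the compiler. A minimal Python sketch of that
split (names and representation are my assumptions):

```python
# Sketch: a specialized fast path whose prologue test falls back to a
# slow, fully general fragment when its compile-time assumption fails.
def general_sub(a, b):
    # general path: handles any mix of scalars and (nested) lists
    if isinstance(a, list) and isinstance(b, list):
        return [general_sub(x, y) for x, y in zip(a, b)]
    if isinstance(a, list):
        return [general_sub(x, b) for x in a]
    if isinstance(b, list):
        return [general_sub(a, y) for y in b]
    return a - b

def specialized_sub(a, b):
    # fast path, compiled under the assumption: both flat lists
    if not (isinstance(a, list) and isinstance(b, list)):  # prologue test
        return general_sub(a, b)  # suboptimal/general fragment
    return [x - y for x, y in zip(a, b)]

print(specialized_sub([5, 5], [1, 2]))  # fast path: [4, 3]
print(specialized_sub(10, [1, 2]))      # prologue fails: [9, 8]
```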
%A H. J. Saal
%A Z. Weiss
%T A Software High Performance APL Interpreter
%J APL Quote Quad
%D June 1979
%W Have not read. [May 87] cites it as: a
high-performance APL interpreter that makes ``assumptions'' during
translation. If runtime checks (of those assumptions) fail, then the code
runs ``less efficiently''.
[A. J. Perlis and many of his grad students did a lot of APL compiling around
1980. For that matter, I wrote an APL to Basic compiler in my undergrad
compilers class in 1972 that did a fairly good job of avoiding array
temporaries. Beats me why IBM dislikes it; the potential performance win is
large, though I suppose it sells less RAM than an interpreter does. We beta
tested the HP 3000 APL, and it performed very well. -John]