Re: A question about Self (Urs Hoelzle)
Sat, 29 Jan 1994 23:05:43 GMT

          From comp.compilers


Newsgroups: comp.lang.misc,comp.compilers
From: (Urs Hoelzle)
Keywords: Self, optimize
Organization: Computer Science Department, Stanford University, Calif., USA
References: 94-01-113
Date: Sat, 29 Jan 1994 23:05:43 GMT

(Graham Matthews) writes:

>Last night I re-read the Self
>papers from the Stanford archive, and was a little puzzled by something,
>namely does Self actually do dynamic code generation, that is code
>generation while a program is running? I ask this question because it
>appears to me that:

>a) several of their techniques are/could be static compile time
>optimisations. The most notable is type prediction which could be done at
>compile time. Maybe it is?

>b) for several techniques it was not clear whether they were run time or
>compile time. For example iterative type analysis can be seen as a form of
>compile time type inference, or it could be done at run-time...

There's a confusion here about what "compile time" means. Traditionally,
"compile time" means "before program execution starts". For systems using
runtime (or dynamic) compilation (such as Self), "compile time" just means
"while the compiler runs", but since compilation is interspersed with
runtime, it has nothing to do with the traditional notion of "compile
time". Thus,

- In Self there is no upfront, batch-style compilation phase. There
    is no "compile" or "make" command, either -- the system acts like an
    interpreter and compiles code as needed (i.e., at runtime).

- Of course, type analysis is done at "compile time" since the compiler
    does it. But it is not done statically (based on the program text
    alone, before program execution begins).
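
The compile-as-needed behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Self's actual implementation: a method is stored as program text and compiled only when it is first invoked, so "compile time" happens in the middle of execution.

```python
# Hypothetical sketch of compile-on-demand (lazy compilation).
# Methods are kept as source text and compiled on first invocation,
# so compilation is interleaved with program execution.

class LazyMethod:
    def __init__(self, name, source):
        self.name = name
        self.source = source      # program text, kept uncompiled
        self.compiled = None      # filled in on first call

    def __call__(self, *args):
        if self.compiled is None:
            # "Compile time" happens here, during the program run.
            code = compile(self.source, f"<{self.name}>", "exec")
            env = {}
            exec(code, env)
            self.compiled = env[self.name]
        return self.compiled(*args)

square = LazyMethod("square", "def square(x):\n    return x * x")
# No compiled code for `square` exists until this call triggers it:
print(square(7))  # -> 49
```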

The newest Self system (Self-93) actually goes a step beyond what
Chambers' Self-91 system does. Rather than using type analysis, it
employs *type feedback*, i.e., extracts information from the running
program and recompiles parts of a program using that information. The
additional information results in both significantly faster code and much
faster compilation (Self-93 uses a fast non-optimizing compiler to
generate the initial code, and lazily optimizes the critical parts of
programs). [An upcoming PLDI paper will contain more details.]
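The type-feedback idea can be sketched as a call site that records the receiver types it actually sees, then "recompiles" itself with a fast path specialized for the dominant type. The class name, threshold, and structure below are illustrative assumptions, not details of the Self-93 system.

```python
# Hypothetical sketch of type feedback at a single call site:
# the slow path records receiver types; once one type is hot,
# the site is "recompiled" with a specialized fast path.

from collections import Counter

class CallSite:
    def __init__(self, selector):
        self.selector = selector
        self.seen = Counter()      # receiver type -> observed count
        self.fast_type = None      # type the fast path is bound to
        self.fast_method = None

    def send(self, receiver, *args):
        t = type(receiver)
        if t is self.fast_type:    # fast path: skip the dynamic lookup
            return self.fast_method(receiver, *args)
        self.seen[t] += 1          # feedback for the "recompiler"
        if self.seen[t] >= 10:     # hot enough: specialize for this type
            self.fast_type = t
            self.fast_method = getattr(t, self.selector)
        return getattr(receiver, self.selector)(*args)

site = CallSite("upper")
for _ in range(20):
    site.send("hello")             # after 10 sends, the fast path is used
```

In a real system the specialized method would also be inlined into the caller; here the point is only that the information driving the optimization comes from the running program, not from the program text.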

Much of what is achieved by runtime compilation could also be achieved by
traditional (batch-style) compilers if they used "training runs". During
the training run, an instrumented version of the program collects the
necessary information, which is then fed back into the compiler to
generate the final version of the program. (Some compilers can already do
this by using profile information to assist classical optimizations, or to
improve the program's cache behavior. Examples are the IMPACT compiler
(U. Illinois) and the MIPS compilers.) Of course, what you lose is the
dynamic nature of runtime compilation: if your training run isn't
representative of real runs, your performance may suffer. A system using
runtime compilation can react to changing circumstances and recompile code
accordingly.
Hope that helps,

Urs Hoelzle urs@cs.stanford.EDU
Computer Systems Laboratory, CIS 57, Stanford University, Stanford, CA 94305
