Compiler and interpreter origins


From: Lauri Alanko <la@iki.fi>
Newsgroups: comp.compilers
Date: 28 Jul 2004 12:17:04 -0400
Organization: University of Helsinki
Keywords: history, question
Posted-Date: 28 Jul 2004 12:17:04 EDT
Originator: lealanko@cc.helsinki.fi (Lauri Alanko)

I am trying to understand the mindset that prevailed during the advent
of high-level programming languages, especially regarding compilers and
run-time evaluation. I hope someone can shed some historical light on
this.


First, back when everything was done in pure machine code or
assembly, how common was the use of self-modifying code? Was it used
only for things like inlined loop counters, or was there anything like
run-time generation of non-trivial code? In short, was it seen as
useful that a general-purpose von Neumann machine really was a
universal Turing machine that could be reprogrammed on the fly?
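
For concreteness, here is the kind of trick I mean, sketched in
Python (purely illustrative, of course; real programs did this
directly in machine words). The loop counter lives inside the
operand of instruction 0, and instruction 3 rewrites it on every
pass:

    # A miniature von Neumann machine: instructions and data share
    # one memory, so the program can patch its own operands.
    def run(mem):
        pc, acc = 0, 0
        while True:
            op, arg = mem[pc]
            pc += 1
            if op == "LOAD_IMM":         # acc := literal operand
                acc = arg
            elif op == "ADD_IMM":        # acc := acc + literal operand
                acc += arg
            elif op == "PATCH_OPERAND":  # rewrite instruction arg's operand
                mem[arg] = (mem[arg][0], acc)
            elif op == "JUMP_IF_POS":    # branch while acc > 0
                if acc > 0:
                    pc = arg
            elif op == "PRINT":
                print("counter =", acc)
            elif op == "HALT":
                return

    run([
        ("LOAD_IMM", 3),        # 0: the counter is stored *in* the code
        ("PRINT", None),        # 1: prints 3, then 2, then 1
        ("ADD_IMM", -1),        # 2: decrement
        ("PATCH_OPERAND", 0),   # 3: write the new count back into instr 0
        ("JUMP_IF_POS", 0),     # 4: loop while the counter is positive
        ("HALT", None),         # 5
    ])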


When batch compilers became popular, such flexibility was obviously
lost: you couldn't generate new Fortran code on the fly from your
original Fortran program. Was this ever seen as a problem?


Interpreters, of course, easily support run-time code generation
simply by allowing the interpreter to be called from within the
program. Lisp certainly supported eval from day one. But did
McCarthy's team _invent_ the idea of reading in high-level code at
run time and then interpreting it (instead of compiling it to machine
code beforehand), or was interpretation an older idea? Of course UTMs
and universal functions had been known, but when was it realized that
the same idea could be applied in practice?
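
In Lisp terms this is just read followed by eval. A modern equivalent
(Python standing in for Lisp here; the snippet is mine, not anything
from the period) shows how little the programmer-visible side of it
involves:

    # Source text that "arrives" at run time -- from a file, a user,
    # or the program itself -- is handed straight to the interpreter.
    source = "sum(i * i for i in range(10))"
    print(eval(source))                    # -> 285

    # The program can also generate fresh definitions and run them,
    # exactly the flexibility a batch Fortran compiler gave up:
    generated = "def square(x):\n    return x * x\n"
    namespace = {}
    exec(generated, namespace)             # compile-and-run at run time
    print(namespace["square"](7))          # -> 49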


I'm sorry about the vagueness of these questions. Any remarks or
references relating to the subject are appreciated.


Lauri Alanko
la@iki.fi
[Back in the 1950s, people used every coding trick they could to
squeeze programs into tiny memories, including all sorts of
self-modifying code. For a famous, probably apocryphal, story that
captures the way that people programmed then, see
http://catb.org/~esr/jargon/html/story-of-mel.html


There were lots of interpreters in the 1950s, typically running a
machine-like code that was higher level than the real machine code,
e.g., with floating point and index registers that the underlying
machine didn't have. Those survived into the 1960s. The standard
floating-point package on the PDP-8 was an interpreter whose
instructions had the same address format as the real ones, so you
could code them with the regular assembler just by defining new
opcode names. I
think that the idea of having an external representation that mapped
to and from an internal representation that could be interpreted was
new to Lisp. For a long time it was seen as an interim hack until
Lisp 2 with a nicer syntax was ready. -John]
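
A rough sketch of that interpreter pattern, in Python with opcode
names invented for illustration (the real PDP-8 package was of course
written in PDP-8 assembly): the interpreted instructions keep the
same (opcode, address) shape as native ones, but add floating point
the "hardware" lacks.

    def interpret(program, mem):
        facc = 0.0                     # software floating-point accumulator
        for opcode, addr in program:   # same address format as "real" code
            if opcode == "FLOAD":      # facc := mem[addr]
                facc = mem[addr]
            elif opcode == "FADD":     # facc := facc + mem[addr]
                facc += mem[addr]
            elif opcode == "FMUL":     # facc := facc * mem[addr]
                facc *= mem[addr]
            elif opcode == "FSTORE":   # mem[addr] := facc
                mem[addr] = facc
        return mem

    # mem[2] := (mem[0] + mem[1]) * mem[1], done in interpreted "float code".
    memory = {0: 1.5, 1: 2.0, 2: 0.0}
    interpret([("FLOAD", 0), ("FADD", 1), ("FMUL", 1), ("FSTORE", 2)], memory)
    print(memory[2])                   # -> 7.0

Because the interpreted format mirrors the native one, programmers
could mix the two styles freely, which is the renamed-opcodes
convenience described above.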


