Re: Compiler and interpreter origins

Dick Weaver <rweaver@ix.netcom.com>
5 Aug 2004 14:20:17 -0400

From comp.compilers

Newsgroups: comp.compilers
Organization: EarthLink Inc. -- http://www.EarthLink.net
References: 04-07-077
Keywords: history

Lauri Alanko wrote:
>
> I am trying to understand the mindset prevalent during the advent of
> high-level programming languages, especially regarding compilers and
> run-time evaluation. I hope someone can shed some historical insight.


Some mindsets that come to mind (reading over this list, I can't really
distinguish between mindset and environment):


1. Compilers were thought of as "Automatic Programming". An early
report on Fortran, 1957, was "The Fortran Automatic Coding System"


2. There was only one criterion for good vs. bad compilers: efficiency of
the generated code.


3. A mindset of limited resources; useful, economic, results were
produced on machines with memories so small that multiplier acronyms
("K" for example) were not needed in size specifications. And schedules
were in months, not years.


4. A computer could be a human being (who operated a Frieden or other
calculator). You can find books of mathematical tables where the
computers - the names of people - are listed on the title page.


5. Users with problems requiring computer solutions took their problem
to a programmer. Only after high-level languages "advented" (in
particular, FORTRAN) did it become COMMON for users to write their own
programs.


6. Cascading compilers. For example, FORTRANSIT (IBM 650) input was
FORTRAN, and the compiler output was an IT program. The IT compiler
produced SOAP (the 650's assembly language) output. The SOAP processor
produced your object deck.
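The cascade is, in effect, function composition over program text: each stage's output deck becomes the next stage's input deck. A toy Python sketch of just the plumbing (the stage names follow the FORTRANSIT/IT/SOAP pipeline above, but the "translations" here are invented placeholders, not real compilers):

```python
# Toy sketch of a cascaded compiler pipeline in the spirit of
# FORTRANSIT -> IT -> SOAP -> object deck. The "translations" are
# placeholder line-taggers; only the plumbing (each stage's output
# feeds the next stage's input) reflects how the cascade worked.

def fortransit_stage(fortran_src: str) -> str:
    """Pretend to translate FORTRAN into an IT program."""
    return "\n".join("IT  " + line for line in fortran_src.splitlines())

def it_stage(it_src: str) -> str:
    """Pretend to translate IT into SOAP assembly."""
    return "\n".join("SOAP " + line for line in it_src.splitlines())

def soap_stage(soap_src: str) -> str:
    """Pretend to assemble SOAP into an object deck."""
    return "\n".join("OBJ  " + line for line in soap_src.splitlines())

def cascade(src: str) -> str:
    # Run the stages in order, feeding each one's output to the next.
    for stage in (fortransit_stage, it_stage, soap_stage):
        src = stage(src)
    return src

print(cascade("X = A + B"))  # OBJ  SOAP IT  X = A + B
```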


7. Languages were sometimes designed/implemented by people trying to
solve their own problems. IPL, LISP being examples.




> Firstly, back when everything was done in pure machine code or
> assembly, how common was the use of self-modifying code? ...


See "The Preparation of Programs for an Electronic Digital Computer",
Wilkes, Wheeler, & Gill, 1951, page 8, "Modification of orders by the
program".
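The classic case of "modification of orders" was looping over an array before index registers existed: the loop added one to the address field of its own load instruction each time around. A hedged illustration in Python, using an invented toy accumulator machine (the opcodes and machine are mine, not EDSAC's; only the trick itself is from the period):

```python
# Toy illustration of "modification of orders": the loop walks an
# array by bumping the address field of its own LOAD instruction.
# The machine and opcodes below are invented for illustration.

def run(program, memory):
    acc, total, pc = 0, 0, 0
    while True:
        op, addr = program[pc]
        if op == "LOAD":          # acc <- memory[addr]
            acc = memory[addr]
        elif op == "ADDTOT":      # total <- total + acc
            total += acc
        elif op == "INCR":        # self-modification: add 1 to the
            o, a = program[addr]  #   address field of the order at addr
            program[addr] = (o, a + 1)
        elif op == "JUMPLT":      # loop while order 0's address < addr
            if program[0][1] < addr:
                pc = 0
                continue
        elif op == "HALT":
            return total
        pc += 1

data = [3, 1, 4, 1, 5]
prog = [("LOAD", 0),       # order 0: its address field gets modified
        ("ADDTOT", 0),
        ("INCR", 0),       # add 1 to the address field of order 0
        ("JUMPLT", len(data)),
        ("HALT", 0)]
print(run(prog, data))     # 14
```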


> Was it only used for things like inlined loop counters, or was there
> anything like run-time generation of non-trivial code? In a word,
> was it seen as a useful thing that a von Neumann architecture
> general-purpose microprocessor really was a universal Turing machine
> that could be reprogrammed on the fly?


> When batch compilers became popular, such flexibility was obviously
> lost: you couldn't generate new Fortran code on the fly from your
> original Fortran program. Was this ever seen as a problem?


Well, it wasn't seen as a problem because you could do it if you wanted
to (I assume that few people wanted to! - another mindset thing). Not
in complete generality, but you are asking about the "advent" - when
things got started. Look up FORMAC (Sammet). I think you could do
things like this (where "set" assigns the expression itself, not its
value; "set" is not the correct FORMAC keyword, but it will do for this
example):


                                      set v1 = a + b
                                      set v2 = c + d
                                              v3 = v1 * v2


v3 would then have the value ac + ad + bc + bd.
There must have been an evaluate function.
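The idea behind that example can be sketched in a few lines of modern Python (this is not FORMAC, just the same notion: a variable binds to an expression, represented here as a list of terms, and multiplication distributes term by term instead of computing a number):

```python
# Rough sketch of the FORMAC-style idea: variables hold symbolic
# expressions (here, lists of term strings), and "multiplication"
# distributes every term of one expression over every term of the
# other, so (a+b)*(c+d) expands rather than evaluating numerically.

def mul(e1, e2):
    """Distribute: every term of e1 times every term of e2."""
    return [t1 + "*" + t2 for t1 in e1 for t2 in e2]

v1 = ["a", "b"]           # set v1 = a + b
v2 = ["c", "d"]           # set v2 = c + d
v3 = mul(v1, v2)          # v3 = v1 * v2

print(" + ".join(v3))     # a*c + a*d + b*c + b*d
```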


> Interpreters, of course, easily support run-time code generation
> simply by allowing the interpreter to be called from within the
> program. Lisp certainly supported eval from day one. But did
> McCarthy's team _invent_ the idea of reading in high-level code at
> run-time and then interpreting it (instead of compiling it to machine
> code beforehand), or was interpretation an older idea? Of course UTMs
> and universal functions had been known, but when was it realized that
> the same idea could be applied in practice?


Interpreters have "always" been here. Well, if not from day 1,
certainly from day 2.
Look up "Speedcoding" for the IBM 701, about 1953. Or "Short Code" for
the UNIVAC. Or Laning/Zierler.


Fortran IV could "read in high-level code at run-time and then interpret
it". Only FORMAT specifications, but exactly the function you asked
about.
I don't know if this functionality was in any Fortran II implementations
or not, nor do I have dates. Comparable to LISP? Of course not, but you
asked about beginnings.
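That FORMAT trick, a format specification read as data and interpreted while the program runs, is easy to sketch. The descriptor syntax below (I for integer, A for alphanumeric, each with a field width) is a simplified stand-in for real Fortran FORMAT, not the actual grammar; the point is only that the spec arrives at run time:

```python
# Minimal sketch of interpreting a FORMAT-like specification at run
# time. "I5,A4,I3" is a simplified stand-in for Fortran FORMAT edit
# descriptors: I<w> reads a w-column integer, A<w> a w-column string.
# The spec is ordinary data, so it could come in on a data card.

def read_by_format(spec: str, record: str):
    fields, pos = [], 0
    for desc in spec.split(","):
        kind, width = desc[0], int(desc[1:])
        raw = record[pos:pos + width]
        fields.append(int(raw) if kind == "I" else raw.strip())
        pos += width
    return fields

spec = "I5,A4,I3"                         # read at run time, not compiled in
print(read_by_format(spec, "  123ABC  42"))  # [123, 'ABC', 42]
```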


There was a language TRAC (Deutsch?) for the PDP-1 where a program
could modify itself (I think).

