Non-sequential compilation.

KODIS@delphi.com
Sat, 18 Sep 1993 00:10:41 GMT

          From comp.compilers

Related articles
Non-sequential compilation. KODIS@delphi.com (1993-09-18)
Re: Non-sequential compilation. macrakis@osf.org (1993-09-21)
Re: Non-sequential compilation. cliffc@rice.edu (1993-09-21)
Re: Non-sequential compilation. pcg@aber.ac.uk (1993-09-21)
Re: Non-sequential compilation. conway@mundil.cs.mu.OZ.AU (1993-09-22)
Re: Non-sequential compilation. gafter@mri.com (1993-09-27)
Re: Non-sequential compilation. rbe@yrloc.ipsa.reuter.COM (1993-10-04)

Newsgroups: comp.compilers
From: KODIS@delphi.com
Keywords: question, comment
Organization: Compilers Central
Date: Sat, 18 Sep 1993 00:10:41 GMT

It's been awfully quiet in this conference for the past few weeks, so let
me toss out a question in the hopes of instigating some activity here.


A great deal of compiler theory has a very stream-oriented flavor --
left-to-right scans, single vs. multi-pass translators, and so on. It
seems that at least one of the reasons for this is a historical desire to
be able to translate an arbitrarily large source file by reading a chunk,
emitting code, and repeating. However, today even very modest computers
have memory capacities far in excess of the size of any source file likely
to be translated. Given this situation, is the sequential, stream-oriented
processing of current compiler theory something that is being clung to
beyond its usefulness?


My question is, has any research been conducted into an "all-at-once"
translation scheme, where a source file is completely read or mapped into
memory before translation begins? Is there anything to be gained by such
an approach? If all it buys is the ability to scan through the source
quickly, that's not much of a gain. But if there is some alternate
processing scheme which works by performing several quick passes over a
source file instead of one or two slower ones, that's a bit more
interesting.
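
To make the question concrete, here is a minimal sketch (my own, not taken
from any existing compiler) of what "read or map the whole file, then make
several quick passes" might look like on a POSIX system with mmap(). The
two passes below, counting lines and identifier-like tokens, are only
stand-ins for real compiler phases such as building a line table and
lexical analysis:

/* All-at-once front end sketch: map the entire source file into memory,
 * then run several fast passes over the same in-core buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s source-file\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole source file; no incremental reading or buffering.
     * (A zero-length file would need special handling; ignored here.) */
    const char *src = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (src == MAP_FAILED) { perror("mmap"); return 1; }

    /* Pass 1: count lines (stand-in for building a line table). */
    long lines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (src[i] == '\n')
            lines++;

    /* Pass 2: count identifier-like tokens (stand-in for lexing). */
    long idents = 0;
    for (off_t i = 0; i < st.st_size; ) {
        if (isalpha((unsigned char)src[i]) || src[i] == '_') {
            idents++;
            while (i < st.st_size &&
                   (isalnum((unsigned char)src[i]) || src[i] == '_'))
                i++;
        } else {
            i++;
        }
    }

    printf("%ld lines, %ld identifier-like tokens\n", lines, idents);

    munmap((void *)src, st.st_size);
    close(fd);
    return 0;
}

Each pass is a tight loop over one contiguous buffer, so an extra pass
mostly costs touching the file's pages again rather than re-reading
anything from disk.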


Any thoughts on this from anyone?


-- John.
[Back in olde days, it was fairly common in limited memory environments
(i.e., everywhere) to load the source file into core and then load lots
of compiler phases that did transformations on the in-core data. -John]