Re: Compiler Output

ll-xn!ames!oliveb!felix!preston@BBN.COM
Fri, 11 Dec 87 12:07:16 pst

          From comp.compilers

Related articles
Compiler Output gm@amdahl.amdahl.com (1987-12-08)
Re: Compiler Output rab@mimsy.UUCP (1987-12-09)
Re: Compiler Output ll-xn!ames!oliveb!felix!preston@BBN.COM (1987-12-11)
Re: Compiler Output haddock!uunet!uiucdcs!ccvaxa!aglew (1987-12-19)

Date: Fri, 11 Dec 87 12:07:16 pst
From: ll-xn!ames!oliveb!felix!preston@BBN.COM
Newsgroups: comp.compilers
In-Reply-To: <777@ima.ISC.COM>
Organization: FileNet Corp., Costa Mesa, CA

In article <777@ima.ISC.COM> you write:


>My personal bias ...favors the generation of assembler source code.
>...you have to offer the option of generating source anyway, and it's
>redundancy on an immense scale to incorporate assembler-like logic (and
>possibly even disassembler-like logic) in the compiler's back end.


The main objection to generating assembler source is the fact that a
large fraction of the assembler's time is spent in lexical analysis,
especially in the case of simple, fast assemblers. It seems rather
silly to convert the compiler's notion of the instructions that should
be generated into text, and then have the assembler convert the text
back into an internal representation.
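
To make the round trip concrete, here is a minimal sketch in C (the
opcode table and function names are invented for illustration): the
compiler carefully formats an instruction as text, and the assembler's
lexer immediately has to undo that work.

/* Minimal sketch of the text round trip; names are invented. */
#include <stdio.h>
#include <string.h>

static const char *opnames[] = { "movl", "addl3" };

/* Compiler side: format an internal opcode number as text. */
void emit_text(FILE *out, int opcode, const char *operands)
{
    fprintf(out, "\t%s\t%s\n", opnames[opcode], operands);
}

/* Assembler side: scan the name right back into an opcode number. */
int lex_opcode(const char *word)
{
    int i;
    for (i = 0; i < 2; i++)
        if (strcmp(word, opnames[i]) == 0)
            return i;
    return -1;
}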


At the same time, it seems a duplication of effort to have _both_ the
compiler and the assembler know the grittier details of generating
instructions and object files. (You've heard this before.)


It would seem that you could answer both objections by having the back
end of the compiler generate a simple token stream. The token stream
would be identical to what the assembler would have gotten from lexical
analysis of assembler source text.


For example, if you think of a conventional compiler/assembler
combination as:


compiler:
<front end of compiler>
<instruction to text conversion>
assembler:
<text to token stream (lexical analysis)>
<back end of assembler>


A compiler that generates a token stream could instead look like:


compiler:
<front end of compiler>
<instruction to token conversion>
(assembler:)
<back end of assembler>


The point is to make the token stream format as simple as possible,
essentially identical to what the assembler would have derived from the
lexical analysis of equivalent assembler source. This keeps the
interface between the compiler and assembler as "narrow" as possible.
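
As a sketch of how simple the format could be, one possible record
layout is shown below (the kinds, fields, and names here are my own
invention, not anything standard):

/* One hypothetical token record, written by the compiler's back end
   and read by the assembler's back end. */
enum tok_kind { T_OPCODE, T_REG, T_CONST, T_SYMBOL, T_EOL };

struct token {
    enum tok_kind kind;
    long          value;     /* opcode, register number, or constant */
    char          name[32];  /* symbol text, used when kind == T_SYMBOL */
};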


A simple token stream format would make it trivial to write a
token-stream-to-text filter.
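
For instance, a filter along these lines would do (again a sketch,
reusing the invented token record above plus an invented opcode table;
operand separators and addressing modes are omitted for brevity):

/* Hypothetical token-stream-to-text filter: read binary token
   records on stdin, print assembler source on stdout. */
#include <stdio.h>

enum tok_kind { T_OPCODE, T_REG, T_CONST, T_SYMBOL, T_EOL };

struct token {
    enum tok_kind kind;
    long          value;
    char          name[32];
};

static const char *opnames[] = { "movl", "addl3" };

int main(void)
{
    struct token t;

    while (fread(&t, sizeof t, 1, stdin) == 1) {
        switch (t.kind) {
        case T_OPCODE: printf("\t%s", opnames[t.value]); break;
        case T_REG:    printf(" r%ld", t.value);         break;
        case T_CONST:  printf(" $%ld", t.value);         break;
        case T_SYMBOL: printf(" %s", t.name);            break;
        case T_EOL:    putchar('\n');                    break;
        }
    }
    return 0;
}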


Other interesting combinations are possible. To generate assembler
source:


<front end of compiler>
<instruction to token conversion>
<token stream to text filter>


To debug the assembler front end:


<text to token stream (lexical analysis)>
<token stream to text filter>


I'm not claiming that any of this is original. In fact, I'll bet that
some of the compilers out there are implemented in this way.
--
Preston L. Bannister
USENET : ucbvax!trwrb!felix!preston
BIX : plb
CompuServe : 71350,3505
GEnie : p.bannister
--

