High Level Assemblers vs. High Level Language Compilers

whopkins@csd.uwm.edu (Alfred Einstead)
19 Mar 2002 16:19:14 -0500

          From comp.compilers

Related articles
High Level Assemblers vs. High Level Language Compilers whopkins@csd.uwm.edu (2002-03-19)
Re: High Level Assemblers vs. High Level Language Compilers rhyde@cs.ucr.edu (Randall Hyde) (2002-03-21)
Re: High Level Assemblers vs. High Level Language Compilers idbaxter@semdesigns.com (Ira D. Baxter) (2002-03-22)
Re: High Level Assemblers vs. High Level Language Compilers fjh@cs.mu.OZ.AU (2002-03-22)
Re: High Level Assemblers vs. High Level Language Compilers rhyde@cs.ucr.edu (Randall Hyde) (2002-03-24)
Re: High Level Assemblers vs. High Level Language Compilers rhyde@cs.ucr.edu (Randall Hyde) (2002-03-24)
Re: High Level Assemblers vs. High Level Language Compilers kgw-news@stiscan.com (2002-03-24)
[5 later articles]
From: whopkins@csd.uwm.edu (Alfred Einstead)
Newsgroups: comp.compilers
Date: 19 Mar 2002 16:19:14 -0500
Organization: http://groups.google.com/
Keywords: assembler
Posted-Date: 19 Mar 2002 16:19:13 EST

Aaron Spink <spink@kraftwerk.pa.dec.com> writes:
> since C is really just a very very high level macro assembler.


From: Ketil Z Malde (ketil@ii.uib.no)
>...yet people complain about C being hard to optimize for modern CPUs,
>which are superscalar and do OOO scheduling and whatnot.
>
>Which makes me curious, what would a "very high level macro assembler"
>for modern processors look like? Could you design a language that
>would make a difference - i.e. be faster and more/as portable than/as
>C?


I try to answer that below. The CAS-8051 assembler (now rereleased
at www.csd.uwm.edu/~whopkins/8051/index.html) was intended to be
the first phase in a much more encompassing high-level assembler
suitable for all processors. Nonetheless, it's functioned beautifully
even in that regard ... at least for the projects I've carried out.


Partly on account of it, I've been able to maintain a high-level
programming language style even in assembly, and the code looks
so much like C that for 2 products I developed it was a matter of
a week apiece of very straightforward line-by-line recoding to get
them into C, porting them from a standalone environment to a
PC-based environment.


There were two major unresolved problems which blocked that line
of development: (1) a comprehensive scheme for properly handling
the weirdness associated with the way assemblers allow references
to yet-to-be-defined addresses yet let them be used in assembler
directives and expressions (a VERY nasty recursion issue lurks
beneath this), (2) a sensible macro language that works by
pattern-matching (the problem being: "matching what?") and is so
powerful that you can define the target processor's assembly
language, itself, ENTIRELY through it (if you were so inclined).


C is not a high level macro assembler. The compiler is performing
language translation which (among other things) means having its
own abstract run-time model ingrained into the code (the run-time
system) and a code-to-code mapping of control flow structures.


A high-level assembler, in contrast, would be more like a macro
language whose control flow constructs direct the generation of
binary, rather than translate INTO the binary.


if (X < 16) {
      setb B; nop; nop;
} else {
      jc Addr;
}


controls which statement is assembled, whereas a high-level
sequence


if (X < 16) Z++; else U = U->Next;


IS the statement!


The analogue to the "high-level" part of the assembler in C would be
its macro preprocessor, if it had also included #while, #for,
#switch directives in addition to #if, #else and #endif.
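

To make the contrast concrete, here is plain standard C (not CAS): the
#if is resolved before translation, so only one arm ever becomes code,
whereas the ordinary "if" IS translated into code for both arms. (The
real preprocessor stops at #if/#else/#endif; the #while, #for and
#switch above are hypothetical.)


#define BUFFER_SIZE 32

/* Decided at preprocessing time: only one of the two declarations
   ever reaches the compiler -- the analogue of conditional assembly. */
#if BUFFER_SIZE < 16
static char buf[16];
#else
static char buf[BUFFER_SIZE];
#endif

/* Decided at run time: both branches are compiled into the program. */
void touch(int x)
{
      if (x < 16) buf[x] = 1;
      else buf[BUFFER_SIZE - 1] = 1;
}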


These are two VERY DIFFERENT things! The high-level assembler
generates the code; the high-level language IS the code.


Assemblers are code generators, whereas compilers are translators.


Now to the matter at hand: what's an ideal high-level assembler?
I'll answer that by answering what CAS was intended to look like.


CAS is supposed to be a macro language which directs the generation
of "db" statements: db generates individual byte-sized pieces of code
into the memory image. Everything is either a "db" statement or a
directive built out of macros or macros supplied with the particular
port of the assembler.
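

In implementation terms, "db" needs little more than an emitter into
the memory image. A minimal C sketch (the names image, loc, db, dw are
illustrative only, not CAS's actual internals):


#include <stdint.h>

static uint8_t  image[0x10000];   /* the target memory image        */
static uint32_t loc;              /* the current location counter   */

/* db: deposit one byte at the location counter and advance it. */
static void db(uint8_t b) { image[loc++] = b; }

/* dw: two db's, high byte first -- the order the 8051's LJMP expects
   for a 16-bit address.                                              */
static void dw(uint16_t w) { db((uint8_t)(w >> 8)); db((uint8_t)(w & 0xff)); }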


The built-in macros include, as a subset, the mnemonic set of the
target processor. So, for example, you'd have a built-in macro
like:


define setbit(bseg B) { db 0xd2, B; }


assuming the target processor's binary for the mnemonic "setbit"
was (D2 <B>).


This implies the need, in the language, to distinguish between
different "address" types; as illustrated above with the type
designator "bseg" (referring to the bit-address region of the
target processor).


You should be able to state a second definition like this:


define setbit(carry) { db 0xd3; }


which is called as "setbit C", assuming that the register name
"C" had been defined as the name of the "carry" register.


Mnemonics may be overloaded. So, macros need to be resolved
by pattern-matching. Another macro might look like this:


define move(accum + @ (reg R), #D) { ... }


which matches (for instance) "move A + @R3, #35".
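

One way to resolve such overloads -- not necessarily CAS's way -- is a
table of (mnemonic, operand-pattern) entries tried in order, with the
first matching entry's emitter being invoked. A rough C sketch using
the two "setbit" definitions above, with made-up names throughout:


#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative operand-kind tags; a real assembler needs many more. */
typedef enum { OP_BSEG, OP_CARRY } OpKind;

typedef struct {
      const char *mnemonic;
      int         arity;
      OpKind      pattern[2];
      void      (*emit)(const long *operands);
} MacroDef;

static void emit_setbit_bseg(const long *o)  { printf("db 0xd2, 0x%02lx\n", o[0]); }
static void emit_setbit_carry(const long *o) { (void)o; printf("db 0xd3\n"); }

static const MacroDef defs[] = {
      { "setbit", 1, { OP_BSEG  }, emit_setbit_bseg  },
      { "setbit", 1, { OP_CARRY }, emit_setbit_carry },
};

/* Pick the first definition whose name, arity and operand kinds match. */
static const MacroDef *resolve(const char *name, const OpKind *kinds, int n)
{
      for (size_t i = 0; i < sizeof defs / sizeof defs[0]; i++) {
            if (strcmp(defs[i].mnemonic, name) != 0 || defs[i].arity != n)
                  continue;
            int ok = 1;
            for (int j = 0; j < n; j++)
                  ok = ok && (defs[i].pattern[j] == kinds[j]);
            if (ok) return &defs[i];
      }
      return NULL;
}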


Both of these macro definitions imply the need to be able to (re)name
registers in the language and to even define the underlying
register-segment model used by the processor. This can get hairy.
For instance, two major register segments for the x86 could be
defined as:


_Xseg (including the registers named AX, CX, DX, BX)
_LHseg (including the registers named AL, CL, DL, BL, AH, CH, DH, BH)


which the x86 actually treats as 2 separate address spaces with
respect to its instruction binary coding, but whose mapping to
the internal logical (register) address space overlaps.


For the 8051, you have the "rseg" area which has 8 addresses that
have the built-in names R0 through R7.


Both of these examples illustrate that there is an extra layer of
distinction over and above the "logical address" layer. Example:
rseg's mapping to the logical address space is run-time defined and
so isn't even known at assemble-time.
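

A data model for register segments along these lines might look
roughly like the following C sketch (names and layout are made up for
illustration): each segment carries its own encoding numbers, plus a
mapping onto underlying logical storage that may overlap between
segments (x86) or not be known until run time (the 8051's rseg).


#include <stdint.h>

/* One named register within a segment: the number used for it in the
   instruction encoding, and where it lives in the logical register
   file -- if that is even known at assembly time.                    */
typedef struct {
      const char *name;
      uint8_t     encoding;      /* number used in the instruction bytes */
      int         storage_off;   /* -1 if only known at run time         */
      int         storage_len;
} Reg;

typedef struct { const char *segname; const Reg *regs; int count; } RegSegment;

/* x86-ish illustration: the 16-bit and 8-bit names are separate
   encoding spaces, but AL/AH alias the two halves of AX, and so on.  */
static const Reg xseg[]  = { {"AX",0,0,2}, {"CX",1,2,2}, {"DX",2,4,2}, {"BX",3,6,2} };
static const Reg lhseg[] = { {"AL",0,0,1}, {"CL",1,2,1}, {"DL",2,4,1}, {"BL",3,6,1},
                             {"AH",4,1,1}, {"CH",5,3,1}, {"DH",6,5,1}, {"BH",7,7,1} };

/* 8051 rseg: encodings 0..7, but the storage mapping depends on the
   run-time register-bank selection, hence -1.                        */
static const Reg rseg[]  = { {"R0",0,-1,1}, {"R1",1,-1,1}, {"R2",2,-1,1}, {"R3",3,-1,1},
                             {"R4",4,-1,1}, {"R5",5,-1,1}, {"R6",6,-1,1}, {"R7",7,-1,1} };

static const RegSegment segments[] = {
      { "_Xseg", xseg, 4 }, { "_LHseg", lhseg, 8 }, { "rseg", rseg, 8 },
};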


Or, consider the example:


define jump(cseg C) {
1:
      if (C - 1b >= -0x7f && C - 1b < 0x80) db 0x80, (C - 1b);
      else if ((C&0xf800) == (1b&0xf800)) db 0x01 | (C&0x700) >> 3, (C&0xff);
      else { db 0x02; dw C; }
}


where the anonymous label "1" is referred to by "1b".


This one is particularly nasty because it shows the entanglement
involved in calculating code addresses. The address following this
instruction depends on which branch is taken by the assembler. But the
branch taken by the assembler depends on what C and 1: are -- and C,
itself, may be an address that follows another application of the
"jump" macro, so that its value is only known once the size of that
"jump" instruction is known.


The high level assembler has to be able to find solutions to circular
systems such as this. It need not find the OPTIMAL solution, but
there has to be some kind of comprehensive system set up within it
that allows assembly-time conditionals to be deferred until AFTER the
linker resolves addresses.


Linking, however, is the stage that FOLLOWS assembly.


There may be symbolic algebraic computation involved that the
assembler has to do to resolve these kinds of statements. The
underlying problem is known in computer science as "constraint
satisfaction".


Most assemblers take the easy way out of these kinds of quandaries --
at the expense of the extra documentation telling you where and when
and how the statements are resolved in the ways they are; and when
they can't be resolved; and also at the expense of the inclusion of
extraneous concepts such as "first pass" and "second pass".

