Re: 8-bit processor specific techniques

BGB <cr88192@hotmail.com>
Sun, 27 Sep 2015 17:34:38 -0500

          From comp.compilers

Related articles
8-bit processor specific techniques laguest9000@googlemail.com (lucretia9@lycos.co.uk) (2015-09-27)
Re: 8-bit processor specific techniques cr88192@hotmail.com (BGB) (2015-09-27)
Re: 8-bit processor specific techniques walter@bytecraft.com (Walter Banks) (2015-09-28)
Re: 8-bit processor specific techniques laguest9000@googlemail.com (lucretia9@lycos.co.uk) (2015-09-29)
Re: 8-bit processor specific techniques cr88192@hotmail.com (BGB) (2015-09-29)
From: BGB <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Sun, 27 Sep 2015 17:34:38 -0500
Organization: albasani.net
References: 15-09-019
Keywords: code, history
Posted-Date: 27 Sep 2015 20:17:44 EDT

On 9/27/2015 9:30 AM, lucretia9@lycos.co.uk wrote:
> I've been looking around for anything related to compiler development for
> 8-bit processors.


...


> [There are plenty of compilers for 8 bit machines, particulary at
> retrocomputing sites, but I don't recall a lot of interesting code
> generation stuff. They tend to have so few registers and be so
> irregular that little of the optimization stuff intended for code
> generation applies. -John]


I am not sure what the best approach is, but yeah, I have doubts about
how well SSA would apply to this use case.


a lot probably depends on the specific ISA; for example, what makes
sense for the Z80 or 6502 may be rather different from what makes sense
for the MSP430 or AVR8.




if I were to make a guess, it might make more sense for a lot of targets
to represent the code initially as a sort of stack machine, potentially
with a lot of compound operations, and then run a variation of LZW over
this to build a dictionary of repeating patterns. the dictionary would
retain any repeated sequences longer than a certain minimum length, so
that extracting them into shared routines offsets the call/return
overhead.


the assumption would be to make the stack machine sufficiently
context-independent that the same sequence of instructions has the same
behavior regardless of caller. this means that if stack-relative
addressing is used, the offsets of variables would be resolved in the
stack IR (so that they are also constant in the output code).


likely, a fairly minimalist register allocator would be used (if one is
used at all), mostly for caching the top few stack items and maybe a few
variables.


likely, the main goal is to minimize code size, as IME this tends to be
a bigger factor than execution time on small (8/16-bit) targets. in a
way, this makes it closer to a data-compression problem than to a
traditional optimization problem.


typically there is also a need to handle some operations via internal
function calls, as things like hardware multiply and multi-bit shifts
tend to be absent. for example, one may see trickery like implementing a
multi-bit shift via a computed jump into a sequence of single-bit shifts
(the ISA in question could use PC as a GPR, and arithmetic on PC was
done rather often), ...


but, granted, I haven't personally done much in this area as of yet, and
haven't really looked much into how the existing compilers do it (at
least not much beyond their ability to somehow fit around 500 lines of C
code into a 2kB ROM).

