Re: What ideas are better for assigning registers to terminals?

Ben Franchuk <>
11 Oct 1999 02:34:25 -0400

          From comp.compilers

Related articles
What ideas are better for assigning registers to terminals? (Bill A.) (1999-09-16)
Re: What ideas are better for assigning registers to terminals? (Peter Bergner) (1999-10-06)
Re: What ideas are better for assigning registers to terminals? (1999-10-06)
Re: What ideas are better for assigning registers to terminals? (Ben Franchuk) (1999-10-11)
Re: What ideas are better for assigning registers to terminals? (Max Hailperin) (1999-10-11)

From: Ben Franchuk <>
Newsgroups: comp.compilers
Date: 11 Oct 1999 02:34:25 -0400
Organization: OA Internet
References: 99-09-060 99-10-032 99-10-037
Keywords: registers, optimize

Andreas Krall wrote:

> Conservative coalescing gives worse results than Chaitin's original
> aggressive coalescing. If a conflict graph becomes uncolorable due
> to aggressive coalescing, live range splitting can be done between
> coalesced registers. There is a very nice paper on this topic:

That seems like a lot of work if the difference is only a few percent.

How about designing CPUs with FEWER registers? Your CPU might be
perhaps 10% slower on some small functions, but you work 50% faster by
not having to save stuff in the registers to reach terminal nodes.

I do some C programming and some 86/386 asm programming. Neither the
RISC idea nor CISC machines are the way to go.

All computers:
1) Fetch the instruction and constant values.
2) Decode the instruction.
3) Calculate the effective addresses of the data.
4) Fetch the data.
5) Operate on the data.
6) Store the data.
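The six steps above can be sketched as the main loop of a toy interpreter. This is a minimal illustration, not any real ISA; the opcodes, memory sizes, and encoding are all invented for the example.

```c
#include <stdint.h>

/* A toy one-address machine: each instruction is an opcode byte
   followed by one effective-address byte. Entirely hypothetical. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

static int16_t mem[256];    /* data memory        */
static uint8_t prog[256];   /* instruction memory */

static void run(void)
{
    uint8_t pc = 0;
    int16_t acc = 0;
    for (;;) {
        uint8_t op  = prog[pc++];    /* 1) fetch instruction...          */
        uint8_t efa = prog[pc++];    /*    ...and its constant/efa byte  */
        switch (op) {                /* 2) decode the instruction        */
        case OP_LOAD:                /* 3) here the efa byte IS the address */
            acc = mem[efa];          /* 4) fetch the data                */
            break;
        case OP_ADD:
            acc += mem[efa];         /* 5) operate on the data           */
            break;
        case OP_STORE:
            mem[efa] = acc;          /* 6) store the data                */
            break;
        case OP_HALT:
            return;
        }
    }
}
```

For example, with mem[0]=2 and mem[1]=3, the program LOAD 0, ADD 1, STORE 2, HALT leaves 5 in mem[2], exercising each of the six steps along the way.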

RISC machines try to speed things up by making each of steps 1 through
6 a fast register-to-register operation, at the cost of more
instruction fetches: fast, dumb operations. CISC machines try to make
each step more powerful: slow but heavy-duty operations. As you add
more complexity and more registers to either design, the internal
controller for the processor becomes more and more a computer in its
own right, limited by the data flow through its internal and external
busses. Sooner or later the controller gets its own internal
controller, and the design cycle begins again.

Having fewer registers means only the very near terminals get
assigned, and only a short distance away. Effective addresses, integer
math, floating point math, and loop counters are good things to keep
in mind when assigning data flow, but are we already past the point of
no return on our register investment? Every context switch requires a
complete state save, or knowledge of what registers are used in an
external routine. That is more complexity in the compiler. The classic
C compiler, with its use of "register", still looks to me like the
cleanest way to assign registers. Registers eat up opcode instruction
space too.
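The "classic" approach mentioned above is standard C: the programmer nominates the hot variables with the `register` storage class and lets the compiler do the assignment. A small sketch (the function and data are made up for illustration):

```c
/* Classic C register hints: the loop counter and accumulator are the
   obvious candidates, and "register" says so explicitly. Modern
   compilers mostly ignore the hint, but the idea is the point here. */
static long dot(const int *a, const int *b, int n)
{
    register long sum = 0;   /* accumulator: keep in a register */
    register int  i;         /* loop counter: likewise          */
    for (i = 0; i < n; i++)
        sum += (long)a[i] * b[i];
    return sum;
}
```

With a = {1,2,3} and b = {4,5,6} this returns 1*4 + 2*5 + 3*6 = 32.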

The safe speed limit is 55 MPH or 100 km/h; any faster and you risk
not getting where you are going in your auto. Are we past the safe
speed limit of optimization for computers?

For some very rough ideas on what I consider a good CPU design, see
below.

1) Have only the registers needed for a clean design; about 8
registers seems right.

2) Have direct memory only about as large as the square root of the
address space.

3) All arrays and structures get assigned a memory block outside the
direct memory space and are accessed only via a pointer in the direct
memory segment. This means that int foo[100] always gets translated
to int * foo = { &_storage+n }.

Compilers would sort the data into two groups, simple data and
complex data, for both static and local stack variables.
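What rule 3 amounts to can be sketched in plain C. The names `_storage`, `foo`, and `bar` and the offsets are all illustrative, standing in for whatever the hypothetical compiler would emit:

```c
/* Sketch of rule 3: arrays never live in the small direct segment;
   only pointers to them do. The big block and the offsets below are
   stand-ins for what a compiler would lay out automatically. */
static int _storage[1000];          /* bulk block outside direct memory   */

static int *foo = &_storage[0];     /* "int foo[100]" becomes this pointer */
static int *bar = &_storage[100];   /* next array starts at offset 100     */
```

Every access to foo[i] then goes through one pointer in the direct segment, so the direct memory only has to hold the (small, fixed-size) simple data and pointers, never the bulk data itself.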

4) Keep the data flowing to and from memory with effective-address
helper instructions, like INC R,(efa) with the semantics T=(efa),
(efa)+=R, R=T: a general increment-memory instruction.
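Spelled out in C, the T=(efa), (efa)+=R, R=T sequence from rule 4 is a fetch-and-add on a single memory operand: the cell is bumped by R and the old value comes back in R. The helper below is my own sketch of those semantics, not an instruction from any existing machine:

```c
/* Semantics of the hypothetical INC R,(efa) instruction:
   T = (efa); (efa) += R; R = T.
   The memory cell gains R, and R receives the cell's OLD value. */
static int inc_mem(int *efa, int *r)
{
    int t = *efa;    /* T = (efa)   */
    *efa += *r;      /* (efa) += R  */
    *r = t;          /* R = T       */
    return t;        /* old value, for convenience */
}
```

So with a cell holding 10 and R holding 3, one INC leaves 13 in the cell and 10 in R, which is exactly what a post-increment through a pointer needs.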

