

Newsgroups: comp.compilers
From: Senthil Kumar <SRK106@PSUVM.PSU.EDU>
Keywords: code, optimize, question
Organization: Penn State University
Date: Fri, 20 Oct 1995 16:16:44 GMT

I understand from [Brun76] and [Aho77] that optimal code
generation for arbitrary expressions is NP-complete for
one-address and two-address instructions, even if the number
of available registers is infinite. Is this correct?

[Brun76] Bruno and Sethi, Code generation for a one-register machine.
[Aho77] Aho, Johnson and Ullman, Code generation for expressions with
common subexpressions.

In [Aho77] they mention that optimal code generation for
three-address instructions, those of the form

r[i] <-- r[j] OP r[k]

is linear for arbitrary DAGs (shared subexpressions etc.),
*if* the number of available registers is infinite. The
approach is simply to evaluate the DAG bottom up, level by
level, assigning a distinct register to each node.
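A minimal sketch of that bottom-up strategy (my own illustration, not code from [Aho77]): each node gets a fresh register, and a memoized `reg` field ensures every shared subexpression is emitted exactly once, so the pass is linear in the size of the DAG.

```python
import itertools

class Node:
    """DAG node: a leaf holds a variable name in op; an interior
    node holds a binary operator plus left/right children."""
    def __init__(self, op, left=None, right=None):
        self.op = op
        self.left = left
        self.right = right
        self.reg = None   # register holding this node's value, once emitted

def gen_code(root):
    """Emit three-address code r[i] <-- r[j] OP r[k] for a DAG,
    assuming infinitely many registers (one fresh register per node)."""
    code, fresh = [], itertools.count(1)

    def gen(node):
        if node.reg is not None:           # shared node: reuse its result
            return node.reg
        if node.left is None:              # leaf: load the variable
            node.reg = f"r{next(fresh)}"
            code.append(f"{node.reg} <-- {node.op}")
        else:                              # interior: children first (bottom up)
            a, b = gen(node.left), gen(node.right)
            node.reg = f"r{next(fresh)}"
            code.append(f"{node.reg} <-- {a} {node.op} {b}")
        return node.reg

    gen(root)
    return code

# (a + b) * (a + b): the common subexpression is computed only once
a, b = Node("a"), Node("b")
s = Node("+", a, b)
code = gen_code(Node("*", s, s))
for line in code:
    print(line)
```

With four nodes in the DAG, this emits four instructions and uses four registers, one per node, exactly the level-by-level scheme described above.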

I was wondering: is this the reason that machines (SPARC, for
example) have a three-address instruction schema? That is, a
three-address schema makes the problem a little easier, in
that we know we can produce optimal code given infinite
registers. So the problem is reduced to minimizing the
register needs for a given expression?
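For expression *trees* (no shared subexpressions), that register-minimization question has a classical linear-time answer: Sethi-Ullman numbering, which labels each node with the minimum number of registers needed to evaluate it without spills. A minimal sketch, assuming a tuple encoding of my own (a leaf is a variable name, an interior node is `(op, left, right)`); note this does not contradict the NP-completeness results above, which concern DAGs and one/two-address code.

```python
def need(t):
    """Sethi-Ullman number: minimum registers to evaluate tree t.
    t is either a variable name (leaf) or a tuple (op, left, right)."""
    if isinstance(t, str):
        return 1                       # a leaf needs one register to load
    _, left, right = t
    nl, nr = need(left), need(right)
    # If the children differ, evaluate the costlier side first and
    # hold its result in one register while doing the cheaper side.
    # If they tie, one extra register is unavoidable.
    return max(nl, nr) if nl != nr else nl + 1

print(need("a"))                                   # a single leaf
print(need(("+", "a", "b")))                       # a + b
print(need(("*", ("+", "a", "b"), ("+", "c", "d"))))   # (a+b)*(c+d)
```

Evaluating the heavier subtree first is what keeps the count minimal; the same labeling also drives the evaluation order in the classic tree code generators.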

Further, I assume this simplifies the problem of code
scheduling (pipelining etc.), which can be thought of as a
superset of the problem of producing optimal code under no
pipeline constraints.

I would appreciate any insights, pointers, further reading etc.
that some of you might have on the above.

thanks,

Senthil
