Newsgroups: comp.compilers
From: Steven Novack <snovack@justright.ICS.UCI.EDU>
Keywords: optimize
Organization: UC Irvine Department of ICS
References: 95-09-053
Date: Sat, 9 Sep 1995 22:01:58 GMT
citron@CS.HUJI.AC.IL (Daniel Citron) writes:
>I'm interested in papers that refer to the reuse of previously executed
>computations. I think the term is memoing or memoizing. Are there any
>practical implementations of this concept (in software or hardware)?
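Since the question is about memoizing, here is a minimal software sketch of the idea (in Python, purely for illustration): cache the results of a pure function so that repeated calls with the same arguments reuse the previously computed value instead of recomputing it.

```python
# Minimal memoization sketch: a cache keyed on the argument tuple.
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # compute once, reuse thereafter
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # linear number of calls rather than exponential
```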
I'm not sure if this is the type of thing you are looking for or not,
but the following reference presents a register allocation technique
that relies on the ability to re-compute a value based on previously
executed computations that are still in the register file (as opposed
to using a spill/reload sequence):
@inproceedings{Briggs92,
AUTHOR = {P. Briggs and K. Cooper and L. Torczon},
BOOKTITLE = {Proceedings of the ACM SIGPLAN 1992 Conference on Programming Language Design and Implementation (PLDI)},
TITLE = {Rematerialization},
YEAR = {1992}
}
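To make the rematerialization idea concrete, here is a hypothetical sketch (not the Briggs/Cooper/Torczon algorithm itself; the names and costs are made up for illustration) of the decision it enables: when a value must leave a register, recompute it later from operands that are still register-resident if that is cheaper than a spill/reload pair.

```python
SPILL_RELOAD_COST = 2  # assumed cost of a store plus a later load

def evict_plan(value_expr, operands, in_registers, recompute_cost=1):
    """Decide how to handle evicting value_expr from a register:
    rematerialize it if every operand is still in a register and
    recomputing is cheaper than spilling and reloading."""
    if all(op in in_registers for op in operands) and \
            recompute_cost < SPILL_RELOAD_COST:
        return "rematerialize"
    return "spill"

# 'addr = base + 8': base stays in a register, so recompute addr on demand.
print(evict_plan("base + 8", ["base"], {"base", "i"}))  # rematerialize
# 'tmp = load(p) * q' with p already evicted: must spill tmp.
print(evict_plan("load(p) * q", ["p", "q"], {"q"}))     # spill
```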
Alex Nicolau and I also published a paper last year on integrating
code selection, register allocation, and instruction scheduling into
a unified framework that makes even more aggressive use of
previously computed values. I've included a reference and the
abstract below, if you are interested. The full text of the paper
can be found in Postscript format at the URL:
http://www.ics.uci.edu/~snovack/papers/ms/ms.ps
Cheers,
Steve
@incollection{Novack94,
AUTHOR = {S. Novack and A. Nicolau},
BOOKTITLE = {Languages and Compilers for Parallel Computing},
SERIES = {Lecture Notes in Computer Science},
VOLUME = {892},
PUBLISHER = {Springer-Verlag},
TITLE = {Mutation Scheduling: A Unified Approach to Compiling
for Fine-Grain Parallelism},
ABSTRACT = {
Trade-offs between code selection, register allocation, and instruction
scheduling are inherent to compiling for fine-grain parallel
architectures, but the conventional approach to compiling for such
machines arbitrarily separates these phases so that decisions made
during any one phase place unnecessary constraints on the remaining
phases. Mutation Scheduling attempts to solve this problem by
combining code selection, register allocation, and instruction
scheduling into a unified framework in which trade-offs between the
functional, register, and memory bandwidth resources of the target
architecture are made ``on the fly'' in response to changing resource
constraints and availability.
},
YEAR = {1994}
}
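A toy sketch of the idea in the abstract (again, not the authors' actual algorithm; the table and unit names are invented): keep a set of semantically equivalent ways ("mutations") to compute each value, and pick, cycle by cycle, whichever one's functional unit happens to be free.

```python
# Each value maps to equivalent implementations, each tied to a
# functional unit it would occupy.
MUTATIONS = {
    "y = 2*x": [("multiplier", "y = x * 2"),
                ("shifter",    "y = x << 1"),
                ("adder",      "y = x + x")],
}

def schedule(value, free_units):
    """Pick the first mutation whose functional unit is free this cycle."""
    for unit, code in MUTATIONS[value]:
        if unit in free_units:
            return code
    return None  # stall: no implementation fits this cycle

print(schedule("y = 2*x", {"adder"}))    # only the adder is free
print(schedule("y = 2*x", {"shifter"}))  # only the shifter is free
```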