Subject: Re: prefetching (was: Re: Future of architecture)
From: eanders@ayer.CS.Berkeley.EDU (Eric Arnold Anderson)
Organization: University of California, Berkeley
References: <firstname.lastname@example.org> <email@example.com> 95-11-097
Date: Fri, 17 Nov 1995 05:45:13 GMT
Mark Smotherman <firstname.lastname@example.org> wrote:
> [ List traversal with prefetch pointers and circular buffers ]
I've also looked at this for a class project.
<URL = http://http.cs.berkeley.edu/~eanders/264>
The summary result is that machines like the Alpha, with a very fast
processor and a (relative to processor speed) slow memory hierarchy,
benefit the most from memory optimizations. In my implementation I tried
merging adjacent list elements (i.e. [a]->[b]->[c]->[d]->\0 became
[a,b]->[c,d]->\0) for many different unroll lengths.
The speedups ranged from 1 to 4 across many simple list operations
(add1, reverse, append, copy, etc.). For a more "real" application like
merge sort, I measured speedups of around 1.7 on the Alpha 21064,
1.1 on the SuperSPARC, and 1.0 (i.e. no real effect) on the HP PA-RISC
when sorting random lists; sorting reverse-ordered lists gave better
speedups (2.1, 1.4, and 1.2, respectively).
Presumably the effect will become more pronounced on machines with
deeper memory hierarchies.