how do a RISC compiler translate an array initialization? Sid-Ahmed-Ali.TOUATI@inria.fr (Sid Ahmed Ali TOUATI) (1999-08-02)
Re: measuring cached writes, was how do a RISC compiler ... firstname.lastname@example.org (Achim Gratz) (1999-08-08)

From: Achim Gratz <email@example.com>
Date: 8 Aug 1999 11:42:41 -0400
Organization: Institute of Computer Engineering, CS department, TU Dresden, Germany
Sid Ahmed Ali TOUATI <Sid-Ahmed-Ali.TOUATI@inria.fr> writes:
> I thought that this is equivalent to accessing every element of X in
> each iteration, which should yield about 50 cache misses on an
> UltraSPARC II.
Nope, stores are non-allocating on miss for UltraSPARC. Also, two of
them can be collapsed to one in the store buffer if they go to the
same cache sub-block.
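
For concreteness, the initialization loop under discussion can be sketched in C (the function name is my own):

```c
#include <stddef.h>

/* Store-only initialization loop, compiled to plain "st"
 * instructions.  On an UltraSPARC a store that misses does not
 * allocate a line in the data cache, and consecutive stores to the
 * same cache sub-block can merge in the store buffer, so the
 * performance counters report almost no data-cache misses here. */
void init_array(double *x, size_t n)
{
    for (size_t i = 0; i < n; i++)
        x[i] = 0.0;
}
```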
> So: 1. Is there a mechanism to store a constant value into a large
> data segment? The generated code seems to contain classic stores
> ("st" instructions).
> 2. Does the compiler bypass the data cache when it generates an
> array initialisation? This is possible on some processors like the
> UltraSPARC.
No. The hardware does. Using the BLD instruction you can also bypass
the cache on load, but no compiler does that yet.
> Remark: when I replace the instruction with x(i) = x(i) + 1, a
> "relatively" correct number of cache misses is reported (42 or 43).
> This proves that the compilation process is not the same for the
> first and the last version of the code, despite the fact that I
> turned off all optimization options.
No, it does not prove that. It proves that the compiler generates code
that first reads x(i) (generating the misses you see) and then stores
the incremented value.
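
The read-modify-write version can be sketched in C the same way (the function name is hypothetical):

```c
#include <stddef.h>

/* x(i) = x(i) + 1: each element is loaded ("ld") before it is
 * stored ("st").  Unlike stores, loads do allocate a cache line on
 * a miss, so roughly one miss per cache line touched shows up in
 * the counters -- the 42 or 43 misses reported above. */
void increment_array(double *x, size_t n)
{
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] + 1.0;
}
```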
-- +<[ It's the small pleasures that make life so miserable. ]>+ --