From: Matt Rosing <matt@peakfive.com>
Newsgroups: comp.compilers
Date: 16 May 2003 21:09:11 -0400
Organization: AT&T Broadband
References: 03-05-010
Keywords: parallel, optimize
Posted-Date: 16 May 2003 21:09:11 EDT
I don't think there is a standard definition of "regular". My
definition is based on how easy it is to balance the data structure
across a parallel machine. If the data is just an array, then it's
easy to figure out how to distribute it; the distribution can be done
at compile time, and the structure is therefore regular. If it's a
graph, then the distribution has to be done at run time and is
therefore irregular. This assumes that the amount of work done for
each element is roughly the same across the whole data structure.
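For example, distributing a dense array is just index arithmetic, so
it can be decided statically. A minimal sketch in Fortran (the names
n, nprocs, p, lo, and hi are hypothetical, not from the post; it
assumes nprocs divides n):

    ! Block distribution of a dense n-element array over nprocs
    ! processors: the owner of any element follows from its index
    ! alone, so the layout can be fixed at compile time.
    integer, parameter :: n = 1000, nprocs = 4
    integer :: p, lo, hi
    do p = 0, nprocs - 1
       lo = p * (n / nprocs) + 1      ! first element owned by rank p
       hi = (p + 1) * (n / nprocs)    ! last element owned by rank p
       print *, 'rank', p, 'owns elements', lo, 'through', hi
    end do

For a graph, by contrast, a balanced distribution depends on the
connectivity, which is only known at run time, so it typically has to
come from a run-time partitioner.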
A[B[i]] (A(B(i)) in Fortran syntax) is the normal way to describe a
graph in Fortran, and the way to loop over all elements of the graph
is to run i from 1 to the number of nodes. That loop can't be
parallelized unless the compiler knows that each B[i] is unique. This
can't be determined at compile time, but it is usually the case, so
directives are typically used to tell the compiler that there are no
loop dependencies.
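To make that concrete, here is a hedged sketch of such a directive
using OpenMP (the arrays A and B, the size n, and the update are my
illustration; other directive schemes, such as HPF's INDEPENDENT,
express the same assertion). The parallel do directive is the
programmer's promise that the iterations are independent, i.e. that
the B(i) are all distinct:

    program scatter
      implicit none
      integer, parameter :: n = 8
      real    :: A(n)
      integer :: B(n), i

      A = 0.0
      B = (/ 3, 1, 7, 5, 2, 8, 4, 6 /)  ! a permutation: every B(i) unique

      ! The directive asserts there is no cross-iteration dependence;
      ! the compiler cannot prove the B(i) are distinct, so the
      ! programmer vouches for it.
      !$omp parallel do
      do i = 1, n
         A(B(i)) = real(i)              ! safe only because B(i) are unique
      end do
      !$omp end parallel do

      print *, A
    end program scatter

If two entries of B were equal, two iterations would write the same
element of A and the directive would make the program incorrect.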
Hope this helps,
Matt
hzmonte wrote:
>I am a beginner. When I read literature in parallelizing compiler
>design, I often come across terms like "irregular data access",
>"irregular data structures", "irregular computations", etc. I tried
>to find the meaning in several textbooks to no avail. Do they refer
>to the same thing? What do they mean?
>I read that in cases involving A[B[i]] and A[B[j]], where A and B are
>arrays, it is impossible to determine at compile time whether they
>refer to the same memory location, and thus whether a data dependence
>exists. Does this have anything to do with the above "irregularity"?