Newsgroups: comp.compilers
From: rrogers@cs.washington.edu (Richard Rogers)
Keywords: parallel
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 94-11-114
Date: Fri, 18 Nov 1994 20:58:02 GMT
yschen@ee.ntu.edu.tw (Yeong-Sheng Chen) writes:
> Does anyone know which machine (or what kind of architecture)
>can efficiently execute the following code with barrier synchronization.
>(For example, machines with a VLIW architecture, or others?)
>
      DoAll 200 I=1,N
        DoSequential 100 J=1,M
          loop body;
  100   Continue
        Barrier;
  200 Continue
The MasPars (massively parallel SIMD machines: a 2D mesh of processors with
toroidal sub-buses, a hypercube router, and a global OR reduction network) are
pretty good at that sort of thing.
In MPL, your code would look something like this:
    /* plural */ int n;    /* singular as written: every PE sees the same trip count */
    plural int i, j, m;    /* per-PE copies of the loop counters and inner bound */

    for (i = 1; i <= n; i++) {
        for (j = 1; j <= m; j++) {
            loop body;
        }
    }
I'm not sure whether you mean for the outer loop to execute the same number
of iterations on every processor, or for each processor to use its own value
of n. In the first case, n should be declared as a singular variable; in the
second, it should be plural (uncomment the plural qualifier above).
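
For contrast, on an MIMD machine the same structure needs an explicit barrier
instead of getting it from SIMD lockstep. A rough sketch in C, assuming a
POSIX threads library with barrier support (NPROCS, worker, and body are
made-up names here, and N is taken equal to the number of threads so each
thread handles one outer iteration):

    #include <pthread.h>

    #define NPROCS 4            /* number of worker threads; N taken equal to this */
    #define M      8            /* inner (DoSequential) trip count */

    static pthread_barrier_t barrier;

    /* Stand-in for the "loop body" of the original code. */
    static void body(int i, int j)
    {
        /* ... real work ... */
    }

    static void *worker(void *arg)
    {
        int i = (int)(long)arg;             /* this thread's outer-loop index I */
        int j;

        for (j = 1; j <= M; j++)            /* DoSequential 100 J=1,M */
            body(i, j);

        pthread_barrier_wait(&barrier);     /* Barrier; */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NPROCS];
        long i;

        pthread_barrier_init(&barrier, NULL, NPROCS);

        for (i = 1; i <= NPROCS; i++)       /* DoAll 200 I=1,N */
            pthread_create(&t[i - 1], NULL, worker, (void *)i);

        for (i = 0; i < NPROCS; i++)
            pthread_join(&t[i], NULL);

        pthread_barrier_destroy(&barrier);
        return 0;
    }

On the MasPar you don't pay for any of this: the PEs advance in lockstep, so
the barrier after the inner loop costs essentially nothing.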
--