Re: Ada compilers for parallel platforms


Related articles
Ada compilers for parallel platforms sam818@seas.gwu.edu (1996-04-16)
Re: Ada compilers for parallel platforms rmeenaks@bfm.com (1996-04-19)
Re: Ada compilers for parallel platforms kim@jrs.com (Kim Whitelaw) (1996-04-29)

From: Kim Whitelaw <kim@jrs.com>
Newsgroups: comp.compilers,comp.lang.ada
Date: 29 Apr 1996 23:12:32 -0400
Organization: JRS Research Labs
References: 96-04-091
Keywords: Ada, parallel

Samir N. Muhammad wrote:
>
> I am interested in Ada compilers for parallel machines. Does anybody
> know whether Ada was able to enter the mainstream of parallel
> programming? Specifically speaking, has there been an implementation
> of Ada to run on a parallel platform and exploit parallelism at
> different levels (not only task levels)? Your help will be highly
> appreciated.


We retargeted an Ada-to-microcode compiler to a SIMD 8-by-8 Systolic
Cellular Array Processor. We handled the parallelism by defining
planar types that represent 8-by-8 arrays of scalar types, with one
element per processor. The arithmetic operators were overloaded to
work with the planar types; e.g., A*B represents 64 multiplications
(one per processor) if either A or B is a planar type.
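
As a rough illustration, a planar type and its overloaded operators
could be declared along the following lines; the identifiers are
illustrative, since the actual JRS package spec is not reproduced in
this post.

    -- Sketch of a planar-type package spec.  On the target, the operator
    -- bodies expand to inline microcode; in the host model they are
    -- ordinary loops over the 8 x 8 array (see the host emulation below).
    package Planar_Types is
       type Planar_Real    is array (0 .. 7, 0 .. 7) of Float;
       type Planar_Boolean is array (0 .. 7, 0 .. 7) of Boolean;

       -- element-wise operations, one per processor
       function "+" (Left, Right : Planar_Real) return Planar_Real;
       function "*" (Left, Right : Planar_Real) return Planar_Real;
       function ">" (Left, Right : Planar_Real) return Planar_Boolean;

       -- mixed planar/scalar form, e.g. B * 2.0
       function "*" (Left : Planar_Real; Right : Float) return Planar_Real;
    end Planar_Types;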


For example, given the declarations

    A : planar_real;
    B : planar_real;

the fragment

    where (A > B);
    B := B * 2.0;
    endwhere;

would double B[i] in those processors i for which A[i] is greater than
B[i]. Because planar operations were used, the test and the multiply
would be done simultaneously in all 64 processors.
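
In host-model terms, where a planar value is just an 8-by-8 array, the
fragment above is equivalent to this ordinary sequential loop
(illustrative only):

    -- Element-wise meaning of the where/endwhere fragment: the mask A > B
    -- selects which of the 64 emulated processors perform the update.
    for I in 0 .. 7 loop
       for J in 0 .. 7 loop
          if A (I, J) > B (I, J) then
             B (I, J) := B (I, J) * 2.0;
          end if;
       end loop;
    end loop;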


All of the planar operations (arithmetic, selection, and
systolic-movement) were implemented as builtin functions that translate
to inline microcode during compilation. The optimizer overlapped
data-independent operations, so simultaneous add, multiply, and data
movement could be achieved. For a carefully written 20-tap FIR filter,
we achieved an effective throughput of 2.6 gigaflops, close to the peak
3.2 gigaflops of the processor.
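
The filter source is not shown here, but a planar 20-tap FIR could
plausibly be structured as below, assuming the Planar_Types sketch
above plus a hypothetical Shift_West systolic-movement builtin; the
real operation names and kernel are not given in this post.

    -- Hypothetical sketch: Tap_Array and Shift_West are assumed
    -- declarations, not operations documented above.
    type Tap_Array is array (0 .. 19) of Float;

    function FIR_20 (Samples : Planar_Real; Taps : Tap_Array)
       return Planar_Real
    is
       X   : Planar_Real := Samples;            -- sample plane being shifted
       Acc : Planar_Real := Samples * Taps (0); -- tap 0 contribution
    begin
       for K in 1 .. 19 loop
          X   := Shift_West (X);      -- systolic data movement between cells
          Acc := Acc + X * Taps (K);  -- multiply-accumulate in all 64 cells
       end loop;
       return Acc;
    end FIR_20;

In such a loop, the overlapping of data-independent operations
described above would let the data movement and the multiply-accumulate
of adjacent iterations proceed simultaneously.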


Because Ada allows data abstraction through packages, overloaded
operators, and generic procedures, we were able to develop an Ada
package that allowed parallel algorithms to be developed and tested
with a conventional Ada compiler and then cross-compiled to the
Systolic Cellular Array Processor. This would have been impossible in
C (unless a preprocessor is used), since C does not support operator
overloading. It would have been difficult in C++, since operators can
only be overloaded for classes, for which it is hard to generate
zero-overhead code. Ada allows distinct scalar types to be created
with the "new" keyword, with full support for operator overloading.
For example, you can declare "type planar_integer is new integer;" and
then redefine "+", "-", "*", ... on the type planar_integer to map to
different microcode. In the host model designed for emulating the
parallel operations, you instead declare "type planar_integer is
array (0..7, 0..7) of integer;" and overload the operators on this
type.
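
A minimal host-model emulation along those lines might look like this
(again illustrative; the actual package also provides the selection and
systolic-movement operations):

    -- Host-model emulation: planar_integer is an 8 x 8 array and the
    -- overloaded operators are plain loops, so parallel algorithms can be
    -- tested with any conventional Ada compiler.  On the target, the same
    -- spec is retained but the operator bodies map to inline microcode.
    package Host_Planar is
       type Planar_Integer is array (0 .. 7, 0 .. 7) of Integer;
       function "+" (Left, Right : Planar_Integer) return Planar_Integer;
       function "*" (Left, Right : Planar_Integer) return Planar_Integer;
    end Host_Planar;

    package body Host_Planar is

       function "+" (Left, Right : Planar_Integer) return Planar_Integer is
          Result : Planar_Integer;
       begin
          for I in 0 .. 7 loop
             for J in 0 .. 7 loop
                Result (I, J) := Left (I, J) + Right (I, J);
             end loop;
          end loop;
          return Result;
       end "+";

       function "*" (Left, Right : Planar_Integer) return Planar_Integer is
          Result : Planar_Integer;
       begin
          for I in 0 .. 7 loop
             for J in 0 .. 7 loop
                Result (I, J) := Left (I, J) * Right (I, J);
             end loop;
          end loop;
          return Result;
       end "*";

    end Host_Planar;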


Note that the planar data types were supported using standard Ada 83;
no extensions to the language were needed.


Hope this helps,


Kim Whitelaw
JRS Research Labs