Summary: Vectorizing/Parallelizing C Compilers Today?

Newsgroups: comp.compilers
From: soohongk@usc.edu (Soohong Kim)
Keywords: parallel, summary, C
Organization: University of Southern California, Los Angeles, CA
Date: Mon, 8 Mar 1993 21:49:38 GMT

I posted a message a couple of weeks ago regarding existing commercial
products and research project status of Vectorizing/Parallelizing C
Compilers. Below is a summary of the responses.


Many people referred to CONVEX, which shipped the world's first vectorizing
C compiler, and to MasPar's mpl & ampl. The summary can be categorized as
follows.


  (1) Convex
  (2) mpl & ampl (MasPar)
  (3) References for VLIW/Superscalar compilers
  (4) Current research projects


Thanks very much to those who responded to my question.


Best regards,


-------
Soohong Peter Kim
Los Angeles, California


========RESPONSES TO Vectorizing/Parallelizing C Compilers Today==========


+++++++++++++++++++++++++++++++++++
+(1) Convex
+++++++++++++++++++++++++++++++++++


--------------------------------------------------------------------------
Date: Sat, 20 Feb 93 08:57:14 -0600
From: metzger@bach.convex.com (Robert Metzger)




CONVEX Computer Corp. had the world's first vectorizing C compiler product
(1986) and the world's first parallelizing C compiler product (1988).


See the June 1992 issue of The C Users Journal for an article I wrote
about vectorizing and parallelizing C.


You should also check my article in the proceedings of Supercomputing '91
(Loeliger, Metzger, ... joint authors). It talks about pointer tracking,
and references papers about other vectorizing compilers. My C Users Journal
article is in troff, using a local macroset. I can send you the nroff output
(ASCII).


/Bob


The CONVEX marketing organization can probably supply you with manuals:
CONVEX C Guide
CONVEX C Optimization Guide


--------------------------------------------------------------------------
Date: Mon, 22 Feb 93 08:36:57 -0600
From: weeks@mozart.convex.com (Dennis Weeks)


At Convex we have done these for many years (I believe our first
vectorizing C compiler, in 1987, was the first such product ever
commercially developed). If you are interested I could probably send you
a copy of the Convex C manual.


In addition to Convex, I have worked for MasPar, which has a product
called MPL (stands for "MasPar Programming Language") which is based on
ANSI C with an extra keyword 'plural' to identify objects which reside at
the same memory or register location on all processing elements in the
system. (In this language the compiler does not parallelize your code;
but when you write an arithmetic expression involving one or more 'plural'
objects it will automatically run on all active processing elements
concurrently.) The compiler is a modified version of GCC, and therefore
by the rules of the Free Software Foundation, the source code of the
compiler is available via anonymous ftp. For details you could contact
Ken Jacobsen, kpj@maspar.com.
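
For concreteness, here is a minimal sketch of what 'plural' code looks
like. The predefined variable 'iproc' (each PE's own id) and the
reduction routine 'reduceAdd32' are recalled from MPL documentation and
should be treated as assumptions, not verified syntax:

    plural int x;                  /* one copy of x on every PE */

    int sum_of_squares(void)
    {
        plural int y;
        x = iproc;                 /* each PE stores its own id (assumed
                                      predefined plural variable)       */
        y = x * x;                 /* evaluated on all active PEs
                                      concurrently                      */
        return reduceAdd32(y);     /* combine the plural value into one
                                      scalar result (assumed builtin)   */
    }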


Hope this helps...
DW




--------------------------------------------------------------------------
Date: Mon, 22 Feb 1993 16:07:43 -0600
From: Valparaiso Reject <greanias@uxa.cso.uiuc.edu>


          I'm not sure what resources you have access to, but Convex has a
decent parallelizing/vectorizing compiler. I've used it on their C240
machine (4 processors, 128-element vector registers). I'm sure they have
such compilers available for their other machines as well. Convex is one
of the best companies around at providing solid software for their
customers, so what you find from them may be close to the best available.
Hope this info helps you.




Rob Greanias greanias@uxa.cso.uiuc.edu




--------------------------------------------------------------------------
Date: Tue, 23 Feb 93 10:15:08 +0200
From: jahonen@cs.joensuu.fi (Jarmo Ahonen)


The C compiler sold for the Convex C3 series (and C2 series) does
automatic parallelization.


It even works, although in some cases it tries to parallelize code
that would run better if only vectorized (or run in purely scalar mode).
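
For instance (a plain C sketch of the kind of loop meant here, not
Convex-specific code): this loop vectorizes well, but its trip count is
so small that the overhead of spawning parallel threads can easily
outweigh the useful work.

    /* Short saxpy: a good vector candidate, a poor parallel candidate. */
    void saxpy16(float y[16], const float x[16], float a)
    {
        int i;
        for (i = 0; i < 16; i++)    /* only 16 iterations of work */
            y[i] = a * x[i] + y[i];
    }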
--
Jarmo J. Ahonen
Computing Centre, Lappeenranta University of Technology, P.O.Box 20,
SF-53851 Lappeenranta, Finland. email: Jarmo.Ahonen@lut.fi




--------------------------------------------------------------------------
Date: Tue, 23 Feb 93 11:32:44 +0100
From: Hai Xiang Lin <lin@pa.twi.tudelft.nl>


There are vectorizing/parallelizing C compilers for shared-memory
supercomputers such as the Crays and the Convexes, though they are less
effective than their Fortran counterparts. I believe one of the reasons
is that pointers in C code make the dependence analysis difficult (and
sometimes impossible). So an automatic vectorizer/parallelizer has to
work conservatively: when it is not sure that the operations are
independent, it will not vectorize/parallelize them. Compiler directives
may help, but still ...
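
A small example of the problem, in plain C: the compiler cannot prove
that the two pointers refer to disjoint memory, so it must assume a
dependence and leave the loop scalar. (A directive asserting that they
never overlap would let it vectorize safely.)

    void scale(float *a, const float *b, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] = 2.0f * b[i];  /* if a and b overlap, iterations are
                                    not independent: no vectorization  */
    }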


Hai Xiang Lin
lin@pa.twi.tudelft.nl


--------------------------------------------------------------------------
Date: Tue, 23 Feb 93 12:48:41 -0600
From: allison@neptune.convex.com (Brian Allison)


Convex's compilers support automatic vectorization for Fortran, C, and
Ada. In fact, Convex compilers were the *first* commercial compilers to
support automatic vectorization of C (1986) and Ada (1988) as well as
parallelization of C (1988) and Ada (1989).


Regards,
Brian






+++++++++++++++++++++++++++++++++++
+(2) mpl & ampl (MasPar)
+++++++++++++++++++++++++++++++++++


--------------------------------------------------------------------------
Date: Tue, 23 Feb 1993 17:18:01 +1100
From: peter@palin.cc.monash.edu.au (Peter Hawkins)




mpl & ampl, the intrinsic parallel languages for MasPar's machines, are
based on C (ampl is actually a gcc port these days). You can get the
source from maspar.com.


peter
--
Peter Hawkins,
Assistant Lecturer, Computer Centre
Monash University, Australia
peter@palin.cc.monash.edu.au


--------------------------------------------------------------------------
Date: Wed, 24 Feb 1993 12:19:12 +1100
From: peter@palin.cc.monash.edu.au (Peter Hawkins)




MasPar is a USA-based company which produces parallel computing arrays.
Most MasPar machines I know of are sold by Digital (DEC), as the two
companies are in collaboration. The machines sold by DEC using mpl & ampl
are called the DECmpp series (we have a DECmpp 12000 Sx).


Here is a mail message I got once from one of the people involved with
these languages:


:The MPL 3.0 and 3.1 sources are now available via ftp from
:maspar.maspar.com. The names of the compressed tar files
:are mpl-3.0-tar.Z and mpl-3.1-tar.Z.
:
:I have successfully built these sources on a SPARCstation
:and a MIPS-based DECstation. In theory, these sources should
:compile on a wide range of hosts with little modification.
:
:If you have any problems or questions, please let me know.
:
: Regards,
: Christopher
:
:Christopher Glaeser
:Team One Consulting
:34099 Webfoot Loop
:Fremont, CA 94555
:TEL: +1-510-790-2630
:FAX: +1-510-790-2841
:Email: cdg@team1.com




Hope this helps,
Peter


--------------------------------------------------------------------------
From: levine@MasPar.COM (Larry Levine)
Date: Wed, 24 Feb 93 12:16:43 PDT


ftp to maspar.maspar.com, login as 'anonymous', password 'soohong',
cd pub, and 'get' mpl-3.1-tar.Z, which is a compressed tar file.


It's that easy.


Larry
--
Larry Levine, MasPar Customer Support
749 North Mary Ave., Sunnyvale, California 94086, USA
North America: 1-800-526-0916 / Elsewhere: +1-408-736-3300
Support email: support@maspar.com








+++++++++++++++++++++++++++++++++++
+(3) References for VLIW/Superscalar compilers
+++++++++++++++++++++++++++++++++++


--------------------------------------------------------------------------
Date: Mon, 22 Feb 93 11:54:51 -0800
From: Steve <snovack@enterprise.ICS.UCI.EDU>




      It wasn't clear from your posting what sort of architecture you had in
mind. If you are interested in parallelizing C for VLIW or superscalar
architectures, then you may find some of the following references useful.
I would be interested in receiving a copy of whatever responses you get to
your query (or maybe you could just post it to comp.parallel).


Cheers!
Steve Novack
@INPROCEEDINGS{EbNa89,
AUTHOR = {K. Ebcioglu and T. Nakatani},
TITLE = "A New Compilation Technique for Parallelizing Loops with
                  Unpredictable Branches on a VLIW Architecture",
BOOKTITLE = {Proceedings of the 2nd Workshop on Programming Languages and
                    Compilers for Parallel Computing},
ADDRESS = "Urbana, IL",
YEAR = 1989
}


@INPROCEEDINGS{EbNi89,
AUTHOR = {K. Ebcioglu and A. Nicolau},
TITLE = {A {\it global} resource-constrained parallelization technique},
BOOKTITLE = {Proceedings of ACM SIGARCH ICS-89: International Conference
on Supercomputing},
ADDRESS = {Crete, Greece},
MONTH = {June},
YEAR = 1989
}


@ARTICLE{Fi81,
AUTHOR = {J.~A. Fisher},
TITLE = {Trace Scheduling: A technique for global microcode compaction},
JOURNAL = {IEEE Transactions on Computers},
YEAR = 1981,
NUMBER = 7,
PAGES = {478--490}
}


@TECHREPORT{MoEb92,
AUTHOR = {S. Moon and K. Ebcioglu},
TITLE = {An Efficient Resource Constrained Global Scheduling Technique
                  for Superscalar and VLIW processors},
INSTITUTION = {IBM},
YEAR = 1992
}


@INPROCEEDINGS{NaEb90,
AUTHOR = {T. Nakatani and K. Ebcioglu},
TITLE = {Using a lookahead window in a compaction-based parallelizing
compiler},
BOOKTITLE = {Proceedings of the 23rd Annual International Symposium on
Microarchitecture},
YEAR = 1990
}


@INPROCEEDINGS{Ni85a,
AUTHOR = {A.~Nicolau},
TITLE = {Uniform Parallelism Exploitation in Ordinary Programs},
BOOKTITLE = {Proceedings of the 1985 International Conference on Parallel
Processing},
YEAR = 1985
}


@INPROCEEDINGS{NiPo90,
AUTHOR = {A.~Nicolau and R.~Potasman},
TITLE = {Realistic Scheduling: Compaction for Pipelined Architectures},
BOOKTITLE = {Proceedings of the 23rd Annual Workshop on Microprogramming},
MONTH = "November",
ADDRESS = "Orlando, FA",
YEAR = 1990
}


@INPROCEEDINGS{NPW91,
AUTHOR = {A.~Nicolau and R.~Potasman and H.~Wang},
TITLE = {Register allocation, renaming and their impact on parallelism},
BOOKTITLE = {Proceedings of the 4th International Workshop on Languages
and Compilers for Parallel Processing},
YEAR = 1991
}


@INPROCEEDINGS{NoNi92,
AUTHOR = {S. Novack and A. Nicolau},
TITLE = {An Efficient Global Resource Constrained Technique for
                  Exploiting Instruction Level Parallelism},
BOOKTITLE = {Proceedings of the 1992 International Conference on Parallel
Processing},
MONTH = "August",
YEAR = 1992,
ADDRESS = "St. Charles, IL"
}






+++++++++++++++++++++++++++++++++++
+(4) Current research projects...
+++++++++++++++++++++++++++++++++++


--------------------------------------------------------------------------
Date: Mon, 22 Feb 1993 15:00:39 -0500
From: c1dje@watson.ibm.com (David Edelsohn)




Carl Kesselman at Caltech is developing one variant of parallel
C++ called CC++. Dennis Gannon at Indiana University is developing a
different variant called pC++. Both are in contact with one another and
with the parallel Fortran projects. There may be a standard along the
lines of HPF for parallel C++ in the near future.


David
--
David Edelsohn, T.J. Watson Research Center
P.O. Box 218, Yorktown Heights, NY 10598
c1dje@watson.ibm.com / +1 914 945 3011 (Tieline 862)








--------------------------------------------------------------------------
Date: Tue, 23 Feb 93 09:20:05 +0100
From: Jean Louis Pazat <Jean-Louis.Pazat@irisa.fr>


There are many projects on "parallelizing compilers" for distributed
memory machines. Many of them target Fortran, but the same techniques
can be applied to a subset of C.


There are two kinds of compilers: data distribution directed (DDD)
compilers, such as Fortran D, Pandore, the Vienna Fortran Compilation
System, and other HPF compilers; and code distribution directed (CDD)
compilers.


DDD compilers parallelize and distribute sequential code according to a
data distribution specified by the user; CDD compilers try to distribute
independent iterations among processors. Most CDD compilers assume a
global memory, but this memory can be a virtual (shared) memory (see
KOAN for example).
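
To make the DDD ("owner computes") idea concrete, here is a plain C
sketch, not Pandore syntax: under a block distribution of the array over
P processors, processor p executes only the iterations whose elements it
owns.

    #define N 1024

    /* Each of the P processors calls this with its own id p.
       Assumes P divides N evenly, to keep the sketch short. */
    void update(double a[N], int p, int P)
    {
        int lo = p * (N / P);         /* first index owned by p    */
        int hi = lo + (N / P);        /* one past the last index   */
        int i;
        for (i = lo; i < hi; i++)
            a[i] = 0.5 * a[i] + 1.0;  /* update only the owned block */
    }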


At IRISA, we are working on a DDD compiler, PANDORE II; the source
language is a subset of C with features to handle data distribution. See
[ICS90] for a short description of the first version of our compiler,
and [PANDOREII-92] for the current prototype.


@InProceedings{ICS90,
    author = "Fran{\c c}oise Andr\'{e} and Jean-Louis Pazat and
                                  Henry Thomas",
    title = "{Pandore: A System to Manage Data Distribution}",
    booktitle = "International Conference on Supercomputing",
    year = "1990",
    organization = "ACM",
    month = "June 11-15"
}
@TechReport{PANDOREII-92,
    author = "Fran{\c c}oise Andr\'{e} and Olivier Ch\'{e}ron and
                                  Jean-Louis Pazat",
    title = "Compiling Sequential Programs for Distributed Memory
                                  Parallel Computers with Pandore II",
    institution = "IRISA/INRIA",
    year = "1992",
    number = "651",
    month = "April"
}


Jean-Louis PAZAT.
---
Jean-Louis PAZAT, IRISA - INSA
Campus Universitaire de Beaulieu, 35042 RENNES CEDEX, FRANCE
Phone: 99 84 72 14 / Fax: 99 38 38 32 / Telex: UNIRISA 950 473F
E-mail: pazat@irisa.fr








--------------------------------------------------------------------------
Date: Tue, 23 Feb 1993 12:09:01 -0500
From: Rahul Bhargava <rahul@npac.syr.edu>




Here at NPAC we are currently finishing up work on the Fortran 90D
compiler. I think we might initiate work on a C compiler next. Currently
nothing has been done, but plans are afoot. If you get more info, please
pass it along.


Thanx RB
--
Rahul Bhargava
(315) 443-4636 office / (315) 475-3545 home
rahul@npac.syr.edu / rbhargav@rodan.acs.syr.edu
126 Remington Ave, # G, Syracuse, NY 13210


--------------------------------------------------------------------------
Date: Tue, 23 Feb 93 15:10:38 -0500
From: Rakesh Ghiya <ghiya@bluebeard.cs.mcgill.ca>




  We have an ongoing effort at McGill in this area.
  The design and implementation of our McCAT parallelizing
  C compiler is underway.




From: Rakesh GHIYA <ghiya@binkley.cs.mcgill.ca>
Date: Sat, 27 Feb 1993 22:59:16 -0500 (EST)




A good introduction to the McCAT effort at McGill is:


Designing the McCAT Compiler Based on a Family of Structured Intermediate
Representations. Proceedings of the 5th Workshop on Languages and
Compilers for Parallel Computing, August 1992 (appeared in the LNCS
series).

Authors: Laurie Hendren et al.


Our compiler is still in the analysis stage; we have not started
working on loop transformations so far.

For more details, you may contact my advisor, Prof. Laurie Hendren:


hendren@lucy.cs.mcgill.ca


  Regards,


  Rakesh Ghiya
  School of Computer Science
  McGill University
  Montreal, Canada.