Re: Parallelizing (WAS: Death by pointers.)

Stefan Monnier <stefan.monnier@epfl.ch>
Tue, 3 Oct 1995 09:33:44 GMT

          From comp.compilers

Related articles
Death by pointers. (Was: order of argument evaluation in C++, etc.) johnston@imec.be (1995-08-30)
Parallelizing (WAS: Death by pointers.) pardo@cs.washington.edu (1995-09-24)
Re: Parallelizing (WAS: Death by pointers.) ECE@dwaf-hri.pwv.gov.za (John Carter) (1995-09-29)
Re: Parallelizing (WAS: Death by pointers.) preston@tera.com (1995-09-29)
Re: Parallelizing (WAS: Death by pointers.) creedy@mitre.org (1995-10-02)
Re: Parallelizing (WAS: Death by pointers.) stefan.monnier@epfl.ch (Stefan Monnier) (1995-10-03)
Re: Parallelizing (WAS: Death by pointers.) imp@village.org (Warner Losh) (1995-10-11)
Re: Parallelizing (WAS: Death by pointers.) Martin.Jourdan@inria.fr (1995-10-18)
Re: Parallelizing (WAS: Death by pointers.) blume@zayin.cs.princeton.edu (1995-10-23)
Re: Parallelizing (WAS: Death by pointers.) wclodius@lanl.gov (1995-10-28)
Re: Parallelizing (WAS: Death by pointers.) cliffc@ami.sps.mot.com (1995-11-03)
Re: Parallelizing (WAS: Death by pointers.) chase@centerline.com (1995-11-06)
[7 later articles]

Newsgroups: comp.compilers
From: Stefan Monnier <stefan.monnier@epfl.ch>
Keywords: parallel
Organization: Ecole Polytechnique Federale de Lausanne
References: 95-09-030 95-09-145 95-10-024
Date: Tue, 3 Oct 1995 09:33:44 GMT

John Carter <ECE@dwaf-hri.pwv.gov.za> wrote:
] Stefan Monnier and David Keppel(Pardo) seem to have missed my point
] about simple parallelism. I initially agreed wholeheartedly with


Well, it looks like you missed my point too :-)
My point was that parallelism may be great, but that's no reason to give up
on getting the most out of single-processor performance. If you want to go
fast, you need many processors that each go fast.


In the example you were replying to, someone was pointing out a case
where C's wild aliasing was preventing the compiler from generating
efficient code: the C version ran three times slower than the Fortran
one. So even if you parallelise the C code for a threefold speedup,
the Fortran version, parallelised the same way, will still be three
times faster (back to "death by pointers").
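
Just to make the problem concrete, here is a minimal C sketch of the kind
of aliasing hazard that thread was about (the function and its names are
mine, not the code from the original posting):

    /* Because dst and src may point into the same array, the compiler
     * must assume that a store through dst can overwrite an element of
     * src that a later iteration will read.  It therefore cannot safely
     * reorder the memory operations, software-pipeline, or vectorise
     * this loop without a run-time overlap check.  A Fortran compiler
     * may assume its dummy array arguments don't overlap and can do
     * all of that. */
    void scale(double *dst, double *src, double k, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            dst[i] = k * src[i];
    }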


] haven't seen such a site. Whenever I say "ps -aux" I get a list as
] long as my arm. Whenever I do a "make", I usually run the compiler
] on 10 or more different files...


Most of those processes are idle, so they don't provide any useful
parallelism. Furthermore (we're still in comp.compilers, right?), it
has been pointed out here several times that most of the time in a
make run goes to recompiling a few files and then linking, and the
link step often takes half the total time: a parallel make might give
you a speedup of 2, but not more.


] etc. I heartily wish pipeline optimization would go away and let the
] compiler writers head for the real issues.


Of course, simplicity is desirable, but what drives the computer
industry is speed, and if a complex pipeline makes the processor
faster, you're very likely to keep seeing complex pipelines.
Multithreaded processors are closer to what you're looking for: the
pipeline troubles disappear, but then you have to find enough threads
to keep the processor busy, and automatic parallelisation is not
necessarily easy. Furthermore, dealing with a complex pipeline is not
that hard either.




Stefan
--

