Re: Parallelizing C/C++ code


Related articles
Parallelizing C/C++ code rfonteboa@gmail.com (Raphael Fonte Boa) (2010-04-29)
Re: Parallelizing C/C++ code kym@sdf.lonestar.org (russell kym horsell) (2010-05-01)
Re: Parallelizing C/C++ code rfonteboa@gmail.com (Raphael Fonte Boa) (2010-05-03)
Re: Parallelizing C/C++ code kym@sdf.lonestar.org (russell kym horsell) (2010-05-06)
Re: Parallelizing C/C++ code gneuner2@comcast.net (George Neuner) (2010-05-07)
Re: Parallelizing C/C++ code joe@burgershack.org (Randy Crawford) (2010-05-14)
Re: Parallelizing C/C++ code kamalpr@gmail.com (kamal) (2010-05-27)
From: kamal <kamalpr@gmail.com>
Newsgroups: comp.compilers
Date: Thu, 27 May 2010 23:48:36 -0700 (PDT)
Organization: Compilers Central
References: 10-04-071 10-05-002 10-05-022 10-05-083
Keywords: parallel
Posted-Date: 29 May 2010 00:16:29 EDT

On May 15, 7:06 am, Randy Crawford <j...@burgershack.org> wrote:
> On Mon, 03 May 2010 04:09:41 -0700, Raphael Fonte Boa wrote:
> > On May 1, 7:00 am, russell kym horsell <k...@sdf.lonestar.org> wrote:
> >> According to google there are 4.4 million hits for "parallelizing
> >> compiler". :) So you know it's a huge area. Perhaps start with
> >> wikipedia for an introduction.
>
> > Hi Russell,
> > Thanks for googling it for me :)
>
> > Nevertheless, I think the problem for me lies more in the analysis area.
> > Are the analyses for parallelism worth the effort in compiler
> > technology? Googling for a tool that accomplishes such parallelization
> > gives no result. I therefore imagine that it has no simple solution.


It's worth the effort if you can achieve a speedup without a penalty
and you have free time to invest, as is true of any and all forms of
static optimization. It gets tricky when dynamic optimization is
involved: that is when you need to evaluate whether the cost/overhead
of dynamic optimization exceeds the possible benefits.
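
To make the distinction concrete, here is a minimal C sketch (the
function names are mine, purely for illustration). The first loop's
independence is provable at compile time; the second can only take a
fast path behind a runtime overlap test, and that test is exactly the
kind of dynamic-optimization overhead that has to pay for itself.

    #include <string.h>

    /* Static case: each iteration touches only a[i], so the loop's
       independence is provable at compile time. */
    void negate(double *a, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = -a[i];
    }

    /* Dynamic case: whether dst and src overlap is known only at run
       time, so the fast path must be guarded by an overlap test.  The
       test itself is pure overhead when n is small. */
    void copy(double *dst, const double *src, int n)
    {
        if (dst + n <= src || src + n <= dst)
            memcpy(dst, src, n * sizeof *dst);   /* disjoint: fast path */
        else
            memmove(dst, src, n * sizeof *dst);  /* may overlap */
    }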


> > Since I'm no compilers researcher I thought of asking here, maybe I
> > could get an answer like:
> > It has not been adopted by most compilers because it is not worth it,
> > does not bring benefits, is too complex, or whatever.


Because mainstream multi-core architectures were not around until
recently. Even today, multi-core processors have a turbo-boost feature
that increases the clock speed of a single core when all the others
are idle. That aside, C code that was written years back ("dusty
deck" code) is notoriously difficult to parallelize because of its use
of pointers. Runtime detection of dependences eats into performance
and is not easy to achieve.
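
As a small illustration of the pointer problem (my own example, not
from the thread): in the dusty-deck style below, out and in may alias,
so the compiler must assume a dependence between iterations; the C99
restrict version hands it the independence guarantee it cannot derive
from the pointers alone.

    /* Dusty-deck style: pointer walking, possible aliasing between
       out and in, so the compiler must keep the loop sequential. */
    void smooth(float *out, const float *in, int n)
    {
        float *q = out;
        const float *p = in;
        while (n-- > 0) {
            *q++ = 0.5f * (p[0] + p[1]);
            p++;
        }
    }

    /* C99 restrict: the programmer asserts that out and in do not
       overlap, which is the independence proof auto-parallelization
       needs. */
    void smooth_r(float *restrict out, const float *restrict in, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = 0.5f * (in[i] + in[i + 1]);
    }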


> > I know that it is a very broad area, and I don't think that some google
> > searches will give me something I can use to answer my question.
>
> The best evidence for whether a compiler feature is worth implementing
> is whether it sells. Does the marketplace demand it? Does it make
> one compiler outsell another? I've seen compilers that implicitly


Yes, the market demands it, or rather it demands higher performance.
But given the constraints on doubling clock speed every two years, as
Moore's observation once led us to expect, processor manufacturers are
instead adding more cores at a lower frequency to achieve the same
thing. To exploit multiple cores, you need to find ways to deploy your
code on all the cores and not just one. Intel and HP rely on compiler
optimization to achieve this, whereas IBM (in its POWER series) relies
on the processor hardware to achieve the same.
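
As a minimal sketch of what deploying one loop on all cores looks like
when the compiler does the splitting (OpenMP is my example of a
mainstream mechanism here, not something claimed in the thread):

    /* Compile with an OpenMP-aware compiler, e.g. "cc -fopenmp".  The
       pragma tells the compiler to split the iteration space across
       the available cores and combine the partial sums. */
    double dot(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }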


> split loops across multiple cores and shared memory, but parallelism
> more ambitious than that has been left to the programmer.


It is best left to the programmer because static information is not
sufficient to predict a lot of things. HP's C compiler has a
profile-based optimization feature wherein a record of a program's
execution paths is used to predict which branches will be taken, and
so on, and thus improve performance.
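
For a flavor of the same idea in a compiler I can speak to (GCC's
profile-guided optimization stands in here for HP's feature; the flags
and builtin below are GCC's, not HP's):

    /* Workflow: build with -fprofile-generate, run on representative
       input, then rebuild with -fprofile-use; the compiler lays out
       code using the recorded branch outcomes.  __builtin_expect is
       the manual equivalent of one such recorded fact. */
    #include <stdio.h>

    int process(int value)
    {
        /* Assert that the error branch is rarely taken, so the hot
           path stays fall-through. */
        if (__builtin_expect(value < 0, 0)) {
            fprintf(stderr, "bad value\n");
            return -1;
        }
        return value * 2;
    }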


> Some of Intel's compilers attempt more parallelization than most because
> the Itanium is so much more dependent on the compiler for hiding


Itanium is dependent on the compiler to lay out data and instructions
exactly as it wants them executed and accessed.


> latency. But when HP finally pulls the plug on the IA64 instruction set,


So what will they move onto from Itanium? By the way, are we
discussing corporate plans or optimizations?


> I doubt they'll continue to invest as heavily in compiler-driven


They invested heavily in optimizations and numerous other
technologies, but the processor didn't sell well. Why do you think
their optimizations cannot be ported to Xeon?


> concurrency. There was a time when Cray too bought into autoparallelism,
> but with the company's recent spate of insolvencies and buyouts, the
> writing is on the wall for HPC in general -- it doesn't sell either.
>
HPC involves many other things, such as deploying applications across
a server farm, and is not restricted or tied to compiler optimizations
on a single system.
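
To show the contrast with single-system compiler work, here is a
minimal sketch of farm-style deployment using MPI (a standard HPC
message-passing API; the program is illustrative, not from the
thread):

    #include <mpi.h>
    #include <stdio.h>

    /* Each rank may land on a different node of the farm; the split
       of work is explicit in the source, not discovered by a
       compiler. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d: working on my share\n", rank, size);
        MPI_Finalize();
        return 0;
    }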


> With the advent of larger-than-4x core CPUs or GPUs, will compiler
> vendors implement novel forms of parallelism beyond loop splits or SSE
> instructions? I doubt it. I've seldom seen cases where more than 8 SMP
> cores can add further speedup to task auto-parallelism. Further speedup
> requires explicit programmer intervention. Given the ubiquity of data


NVidia has CUDA, which supposedly helps take advantage of the
many-core architecture of their GPUs.
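
For what that looks like in practice, a minimal CUDA C sketch (mine,
for illustration): the programmer states the parallelism explicitly,
and the launch parameters replace the loop, rather than a compiler
having to discover anything.

    /* Each GPU thread runs one instance of the kernel body. */
    __global__ void scale(float *a, const float *b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            a[i] = 2.0f * b[i];
    }

    /* Host-side launch: 256 threads per block, enough blocks to
       cover n.  d_a and d_b must already be device pointers. */
    void scale_on_gpu(float *d_a, const float *d_b, int n)
    {
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_a, d_b, n);
    }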


regards
-kamal

