GPU-aware compiling? firstname.lastname@example.org (Tomasz Chmielewski) (2005-05-20)
Re: GPU-aware compiling? email@example.com (Michael Tiomkin) (2005-05-22)
Re: GPU-aware compiling? firstname.lastname@example.org (Oleg V.Boguslavsky) (2005-05-22)
Re: GPU-aware compiling? email@example.com (firstname.lastname@example.org) (2005-05-24)
Re: GPU-aware compiling? email@example.com (Rob Dimond) (2005-05-24)
Re: GPU-aware compiling? firstname.lastname@example.org (2005-05-24)
Re: GPU-aware compiling? email@example.com (Ray Dillinger) (2005-06-26)
Re: GPU-aware compiling? firstname.lastname@example.org (Julian Stecklina) (2005-07-02)
Date: 24 May 2005 10:16:05 -0400
Posted-Date: 24 May 2005 10:16:05 EDT
I've been dabbling around in this area and I would emphatically say
"No". You're not going to see those options show up any time soon.
What is GPGPU computing? It's essentially hijacking the rendering
pipeline at the shading/texture-mapping phase and rendering back to a
texture. A texture is a 2D array, so you're running a program on one
2D array and outputting to another 2D array; a 3D array is just a
stack of 2D textures.
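As a rough CPU-side analogy (plain Python, not real shader code; the
names here are made up for illustration), the render-to-texture model
amounts to running one small function independently for every element
of an output 2D array:

```python
# CPU analogy of the GPGPU render-to-texture model (illustrative only):
# a "shader" runs once per output texel, reading the input texture and
# writing exactly one output texel -- no scatter, no shared state.

def shade(x, y, src):
    """Hypothetical 'fragment shader': brighten one texel."""
    return 2 * src[y][x]

def render_to_texture(shade, src):
    h, w = len(src), len(src[0])
    # A GPU would run all of these invocations in parallel;
    # the loop nest here just makes the model explicit.
    return [[shade(x, y, src) for x in range(w)] for y in range(h)]

src = [[1, 2], [3, 4]]
out = render_to_texture(shade, src)
print(out)  # [[2, 4], [6, 8]]
```

The key constraint the analogy captures is that each invocation writes
only its own output location, which is what makes the massive
parallelism safe.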
There are two reasons why you're not going to see these options show
up. First, the instruction set is __very__ specialized to support
common graphics operations, like dot products. Those aren't too hard
to recognize, but you'd probably want to take the intrinsic or builtin
approach to implementing certain operations. This is why nVidia built
the Cg language for implementing algorithms. Second, GPUs have a very
specialized, serial memory access pattern, which makes operations on
random array elements far more challenging. So, if the algorithm in
question doesn't follow the memory access pattern the GPU is
expecting, your compiler will have to coerce or map the code it's got
onto the GPU target. This is one reason why things like Runge-Kutta
are hard to do on GPUs, whereas implicit numerical methods are
preferred.
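To make the access-pattern point concrete, here is a hedged sketch
(plain Python with invented names, not GPU code): the first function
reads only fixed neighbor offsets, which matches the predictable
texture-fetch pattern the hardware streams efficiently; the second
indexes the array through data-dependent values, so each fetch
location is unpredictable and hard to map onto the GPU's memory
system.

```python
# Two memory access patterns, sketched on the CPU (illustrative only).

def stencil_avg(tex, x, y):
    # Regular pattern: neighbors at fixed offsets from (x, y).
    # This maps directly onto nearby texture fetches.
    return (tex[y][x - 1] + tex[y][x + 1]) / 2

def indirect_read(tex, idx, x, y):
    # Irregular pattern: the address comes from data (idx), so the
    # fetch location is unpredictable -- the kind of random access
    # the post says a compiler would struggle to map to the GPU.
    j, i = idx[y][x]
    return tex[j][i]

tex = [[0, 2, 4], [6, 8, 10]]
print(stencil_avg(tex, 1, 0))                # (0 + 4) / 2 = 2.0
print(indirect_read(tex, [[(1, 2)]], 0, 0))  # tex[1][2] = 10
```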
Hope this helps.