GPU-aware compiling? — firstname.lastname@example.org (Tomasz Chmielewski) (2005-05-20)
Re: GPU-aware compiling? — email@example.com (Michael Tiomkin) (2005-05-22)
Re: GPU-aware compiling? — firstname.lastname@example.org (Oleg V. Boguslavsky) (2005-05-22)
Re: GPU-aware compiling? — email@example.com (firstname.lastname@example.org) (2005-05-24)
Re: GPU-aware compiling? — email@example.com (Rob Dimond) (2005-05-24)
Re: GPU-aware compiling? — firstname.lastname@example.org (2005-05-24)
Re: GPU-aware compiling? — email@example.com (Ray Dillinger) (2005-06-26)
Re: GPU-aware compiling? — firstname.lastname@example.org (Julian Stecklina) (2005-07-02)
From: Ray Dillinger <email@example.com>
Date: 26 Jun 2005 11:19:27 -0400
Rob Dimond wrote:
> Modern GPU's are SIMD processors that execute the same program on
> multiple data-elements (e.g. vertices or pixels) in parallel.
I've been looking at GPUs with a hungry eye lately, because I use
artificial neural networks to solve some problems. The memory access
pattern is simple, predictable, and amenable to the way GPUs work; the
weight calculations look a lot like vertex shading; and of course
blocks of floats are the ideal form for my data to take.

Although I haven't yet been able to work out the kinks and get things
to run directly on them, I'm convinced that the kinks can be worked
out, and that the DSPs on the graphics card can speed up this kind of
work by an order of magnitude or more with adequate compiler support.
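To make the analogy concrete, here is a minimal sketch (my own illustration, not code from the original post) of why a neural-network layer's weight calculation fits the SIMD model: every output neuron runs the *same* program — a dot product followed by a nonlinearity — on different data, exactly the pattern a per-vertex or per-pixel shader executes in parallel. The function and variable names are hypothetical.

```python
import math

def layer_forward(weights, inputs):
    """Dense layer: one independent dot product per output neuron.

    On a GPU, each iteration of the outer loop could be one shader
    invocation: there are no dependencies between output elements,
    so all of them can execute the same instructions in lockstep.
    """
    outputs = []
    for row in weights:                       # one "shader invocation" per output
        acc = sum(w * x for w, x in zip(row, inputs))
        outputs.append(math.tanh(acc))        # per-element nonlinearity
    return outputs

# Toy example: 2 output neurons, 3 inputs.
W = [[0.5, -0.25, 0.1],
     [0.0,  1.0, -1.0]]
x = [1.0, 2.0, 3.0]
print(layer_forward(W, x))
```

The memory access pattern is equally regular: each output reads one contiguous row of weights and the shared input vector, which is the kind of predictable streaming access graphics hardware is built for.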