From: George Neuner <firstname.lastname@example.org>
Date: Fri, 16 Jul 2021 16:22:34 -0400
Organization: A noiseless patient Spider
References: 21-07-004 21-07-006 21-07-014
Posted-Date: 16 Jul 2021 16:41:09 EDT
On Thu, 15 Jul 2021 23:49:55 -0700 (PDT), gah4 <firstname.lastname@example.org> wrote:
>There is pretty much a continuous change: as processors get faster,
>less efficient processing makes more sense.
>Among others, less efficient, interpreted languages have become more popular.
>It is interesting, though. For much of the 1990's, faster and faster
>processors became available for compute-intensive applications like
>computational physics, but mostly driven by demand from other uses.
>Some of that was people who bought faster processors because they
>could, and some was gaming. For the most part, processors haven't been
>built for compute-intensive use since about the 1990's.
>In the 1980's, there were some coprocessors to speed up compute-intensive
>problems, such as those from FPS (Floating Point Systems). But as desktop
>computers, and especially x86 machines, got faster there was less need for them.
>And then came GPUs to speed up graphics, mostly for games, but then
>compute-intensive users found that they could use them, too. Except that
>most are only single precision.
But processors /aren't/ getting faster (much) anymore - they're near
the limits both of feature size reduction and of ability to dissipate
heat.
The wires and insulators now are just a few atoms thick, and since
there are insulators /inside/ transistors, the transistors themselves
can't get much smaller [they can change shape, which is how things are
still improving].
Modern CPUs live in a perpetual state of "rolling blackout" in which
functional units are turned on and off, cycle by cycle, as needed.
This is /NOT/ done for "green"-minded energy conservation [that's just
self-serving PR by the manufacturers] - it's /necessary/ to prevent
the chips from burning up.
And GPU cores are /very/ slow relative to CPU cores. The only reason
GPUs seem to perform well is that the problems they target are
embarrassingly parallel. Try solving a problem that requires lots of
array-reduction steps and you'll see your performance go straight into
the toilet.
[Yes, I know that there are tree methods for parallelizing reductions
... they are not always straightforward to implement, and they only
work for /some/ reduction problems.]
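For concreteness, here is a minimal sketch (my own illustration, not code
from anyone's post) of the pairwise "tree" reduction being referred to:
instead of one serial chain of adds, partial sums are combined in log2(n)
rounds, and within each round every pair can be combined simultaneously on
parallel hardware.

```c
/* Pairwise tree reduction, reducing in place.  Round k has stride 2^k;
   all the additions within one round are independent of each other, so
   on a parallel machine each round costs one step, giving log2(n) steps
   total instead of n-1. */
double tree_sum(double *a, int n)
{
    for (int stride = 1; stride < n; stride *= 2)
        for (int i = 0; i + stride < n; i += 2 * stride)
            a[i] += a[i + stride];   /* pairs in this round don't overlap */
    return a[0];
}
```

Note that this reassociates the additions, which is one reason it only
works for /some/ reductions: the combining operation has to be (close
enough to) associative.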
I have worked with Connection Machines (CM-2), DSPs, FPGAs, and I have
written a lot of SIMD code for image and array processing tasks. I am
well aware of what is possible using various styles of parallel
processing. There's a lot that can be done ... and a lot /more/ that
can't: the vast majority of all computing problems do not have any
known parallel solutions.
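The dividing line can be shown with a deliberately simplified toy pair
(my own illustration): a loop whose iterations are independent maps
directly onto SIMD lanes or GPU threads, while a loop-carried dependence
stays serial no matter how much hardware is available.

```c
/* Embarrassingly parallel: every iteration is independent, so this
   vectorizes or distributes across GPU threads trivially. */
void square_all(const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = x[i] * x[i];
}

/* Loop-carried dependence: step i cannot begin until step i-1 has
   produced s, so adding cores or lanes does not help. */
double serial_chain(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s = s * 0.5 + x[i];
    return s;
}
```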
It's true that there is a lot of instruction level (micro-thread)
parallelism available in most programs, but it is difficult to exploit
with current hardware. This is a topic frequently discussed in this
group.
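As a small illustration of that kind of parallelism (my own sketch):
rewriting one long dependency chain as several independent accumulators
gives a superscalar CPU additions it can issue in the same cycle;
compilers and out-of-order hardware try to find exactly this sort of
independence automatically.

```c
/* One long chain: each add must wait for the previous add's result. */
double sum_serial(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Four independent chains the CPU's execution units can overlap,
   exposing instruction-level parallelism within a single thread. */
double sum_ilp(const double *x, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];     s1 += x[i + 1];
        s2 += x[i + 2]; s3 += x[i + 3];
    }
    double s = (s0 + s1) + (s2 + s3);
    for (; i < n; i++)               /* leftover elements */
        s += x[i];
    return s;
}
```

(As with the tree reduction, this reassociates floating-point adds, so
it is not bit-identical to the serial sum in general.)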