Re: fast arithmetic hardware, was These days what percentage of a CPU's work involves doing arithmetic computations

gah4 <gah4@u.washington.edu>
Thu, 15 Jul 2021 23:49:55 -0700 (PDT)


From: gah4 <gah4@u.washington.edu>
Newsgroups: comp.compilers
Date: Thu, 15 Jul 2021 23:49:55 -0700 (PDT)
Organization: Compilers Central
References: 21-07-004 21-07-006
Keywords: architecture, history, parallel
Posted-Date: 16 Jul 2021 12:10:54 EDT
In-Reply-To: 21-07-006

On Thursday, July 15, 2021 at 10:22:58 AM UTC-7, gah4 wrote:


(snip, I wrote)


> Close to zero. Remember, the CPU is most of the time sitting there waiting
> for you to do something. Some systems have an actual "null job",
> accumulating the CPU time not used for anything else.


(snip, then our moderator wrote)


> [I generally agree except to note that modern PCs and particularly phones
> display a lot of high quality images and video, both of which require
> extensive arithmetic to get from the internal representation to the bitmap on
> the screen. General purpose CPUs have extended instruction sets like
> Intel's SSE and AVX, and often there are GPUs on the same chip as the
> CPU, as in the Apple M1. I get the impression that compilers don't
> deal very well with these things, so vendors provide large libraries
> of assembler code to use them. -John]


Yes, I wasn't so sure what counted as "olden days" and what as "new days".


The change has been pretty much continuous: as processors get faster,
less efficient processing makes more sense. Among other things, less
efficient interpreted languages have become more popular.


It is interesting, though. For much of the 1990s, faster and faster
processors became available for compute-intensive applications like
computational physics, but the development was mostly driven by demand
from other uses.


Some of that demand came from people who bought faster processors
simply because they could, and some came from gaming. For the most
part, processors haven't been built specifically for compute-intensive
use since about the 1990s.


In the 1980s, there were coprocessors to speed up compute-intensive
problems, such as those from FPS (Floating Point Systems). But as
desktop computers, and especially x86 machines, got faster, there was
less need for them.


And then came GPUs to speed up graphics, mostly for games; compute-intensive
users then found that they could use them, too. Except that most are
single precision only.


As for compilers: in Fortran 95, the FORALL statement was added, a
non-loop parallel assignment statement apparently designed for vector
processors, just as vector processors (like the ones made by Cray)
were going away.


FORALL requires that (at least in effect) the whole right-hand side be
evaluated before any of the left side is changed. So it isn't actually
well suited to vector processors with vector registers, like the Cray-1.


So now there is DO CONCURRENT. It is completely different from FORALL,
and hopefully better adapted to modern processors. But I don't know
how well it does with SSE and such.
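The key difference, as I understand it: DO CONCURRENT has the programmer
assert that the iterations are independent, so the compiler may run them
in any order (or vectorize them) with no temporary needed. A Python
sketch of that contract (again my own hypothetical function names):

```python
def scale_in_order(a, factor):
    # DO CONCURRENT-style body: each iteration touches only its own
    # element, so there is no cross-iteration dependence.
    for i in range(len(a)):
        a[i] = a[i] * factor
    return a

def scale_reversed(a, factor):
    # Because the iterations are independent, any execution order
    # (here, reversed) must give the same result.
    for i in reversed(range(len(a))):
        a[i] = a[i] * factor
    return a

print(scale_in_order([1, 2, 3], 10))  # [10, 20, 30]
print(scale_reversed([1, 2, 3], 10))  # [10, 20, 30]
```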


