200 way issue? firstname.lastname@example.org (1993-09-29)
Re: 200 way issue? email@example.com (1993-09-30)
Re: 200 way issue? firstname.lastname@example.org (1993-09-30)
Re: 200 way issue? grover@brahmand.Eng.Sun.COM (1993-09-30)
Re: 200 way issue? email@example.com (1993-09-30)
Re: 200 way issue? firstname.lastname@example.org (1993-10-01)
Re: 200 way issue? email@example.com (1993-10-01)
[1 later articles]
From: firstname.lastname@example.org (David Moore)
Date: Wed, 29 Sep 1993 18:18:16 GMT
There is a religious war going on in comp.sys.powerpc about which chip
(Alpha, MIPS, PowerPC) is the best.
Inevitably, the discussion descended to "my chip is about as fast as yours,
but next year it will do 300 MHz and 8-way issue" (I paraphrase).
There are all sorts of hardware issues in trying to build a system around
such a chip, with the result that such systems often run benchmarks
really fast but show a much smaller advantage on real problems.
HOWEVER, the question I want to raise is this: how wide an issue width can
one actually use on real code?
Suppose, for example, we took the SPEC benchmarks and optimized for an
infinite-issue machine. Now suppose we built a histogram of the actual
number of instructions issued per machine cycle. Has anyone published a
paper on what this histogram would look like?
Intuition suggests that the mode of the distribution would be quite small
- probably 2. The mean might be rather larger because you can probably
produce some really high issue rates by unrolling some loops and leaving
memory constraints out of the model.
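The kind of limit study described above can be sketched in a few lines. The toy model below is my own illustration, not from any published study: each instruction names the registers it reads and the one it writes, issue width is infinite, every instruction has unit latency, and only true data dependences constrain issue (memory and control dependences are ignored, as in the idealized model above).

```python
from collections import Counter

def issue_histogram(trace):
    """Count instructions issued per cycle under infinite issue width.

    trace is a list of (dest_register, [source_registers]) pairs.
    An instruction issues as soon as all its sources are available;
    only true (read-after-write) dependences are modeled.
    """
    ready = {}            # register -> cycle its value becomes available
    per_cycle = Counter() # cycle -> number of instructions issued then
    for dst, srcs in trace:
        cycle = max((ready.get(r, 0) for r in srcs), default=0)
        per_cycle[cycle] += 1
        ready[dst] = cycle + 1   # unit latency for every instruction
    return per_cycle

# A short dependent chain interleaved with independent work:
trace = [
    ("r1", []),            # load immediate
    ("r2", ["r1"]),        # depends on r1
    ("r3", []),            # independent
    ("r4", ["r2", "r3"]),  # joins both chains
    ("r5", []),            # independent
]
print(sorted(issue_histogram(trace).items()))  # -> [(0, 3), (1, 1), (2, 1)]
```

Even this tiny trace shows the shape the post asks about: a burst of independent work issues in the first cycle, while the dependent chain trickles out one instruction per cycle afterward.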
[Sounds a lot like a VLIW to me. Multiflow got useful work out of 28
parallel units, could probably have done more. -John]