Related articles
A theoretical question or two rickh@capaccess.org (2002-01-28)
Re: A theoretical question or two bear@sonic.net (Ray Dillinger) (2002-01-30)
Re: A theoretical question or two rickh@capaccess.org (2002-02-06)
Re: A theoretical question or two bear@sonic.net (Ray Dillinger) (2002-02-28)
From: rickh@capaccess.org (Rick Hohensee)
Newsgroups: comp.compilers
Date: 6 Feb 2002 23:34:45 -0500
Organization: http://groups.google.com/
References: 02-01-154 02-01-173
Keywords: theory
Posted-Date: 06 Feb 2002 23:34:45 EST
Ray Dillinger <bear@sonic.net> wrote in message news:02-01-173...
Rick Hohensee wrote:
>>
>> Is there a metric for computer performance in terms of the
>> state-change throughput?
>
>Not that I'm aware of. I don't even know if it would be
>meaningful.
>
Making it meaningful is the challenge, I guess. As in,
"What's the state-change-value difference between the
Accumulator and memory location 0x37854?".
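To make that concrete, here's one naive way such a metric could be
operationalized (a Python sketch of my own; the snapshot format, and
the open question of whether a flip in the Accumulator should weigh
the same as a flip at 0x37854, are assumptions, not any established
measure): snapshot the whole machine state each cycle and count the
bits that actually flip.

# Naive "state-change throughput": snapshot the machine state
# (registers + memory) each cycle and count flipped bits.
# All names here are illustrative.

def bits_flipped(before: bytes, after: bytes) -> int:
    """Hamming distance between two equal-length state snapshots."""
    return sum(bin(a ^ b).count("1") for a, b in zip(before, after))

def state_change_throughput(snapshots: list[bytes]) -> float:
    """Average bits flipped per cycle over a run of snapshots."""
    flips = [bits_flipped(s, t) for s, t in zip(snapshots, snapshots[1:])]
    return sum(flips) / len(flips) if flips else 0.0

# A 4-cycle run of a toy machine with 2 bytes of total state:
run = [b"\x00\x00", b"\x01\x00", b"\x01\xff", b"\x03\xff"]
print(state_change_throughput(run))  # (1 + 8 + 1) / 3 = 3.33... bits/cycle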
>> Does greater locality of reference imply a smaller state-machine?
>
>No. An array where each element refers only to its neighbors
>has excellent locality of reference, but can be as long as you
>make it. OTOH, a smaller state machine does imply a greater
>locality of reference, because the memory set is small enough
>that all of it can be local.
I don't think arrays are a valid counterexample without some
instructions.
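To make the objection concrete, here's the array with some
instructions attached (a Python sketch, purely illustrative): a
3-point stencil keeps a three-cell working set no matter how long the
array gets, but swap in a strided or hashed access pattern and the
same array loses its locality. The instructions decide, not the array.

# One pass of a 3-point stencil: out[i] depends only on its immediate
# neighbors, so the working set is 3 cells at any instant, independent
# of len(a).

def neighbor_sweep(a: list[float]) -> list[float]:
    out = a[:]
    for i in range(1, len(a) - 1):
        out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out

print(neighbor_sweep([0.0, 3.0, 0.0, 3.0, 0.0]))  # [0.0, 1.0, 2.0, 1.0, 0.0]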
>> Consider some unifying level of implementation at which all
>> digital computers can be implemented, for inter-architecture
>> comparison. Let's say you have a wide range of architectures
>> implemented in that one technology. Let's say you have a large room
>> with a couple of stack machines, a hypercube, a ccNUMA, a 386, a CRAY, a
>> PDP, a SPARC, an MC68000, a finite Turing machine, etc., all with the
>> same physical type of logic gates and memory. Is there an algorithm
>> for each machine that will run no better on any other machine? What if
>> each basic design is allowed the same overall number of gates and
>> memory bits? Which machines are most divergent as to what they do
>> well?
>
>The hypercube will excel at simulations and physical modeling,
>because those are implementable as cellular automata which can
>be handled in SIMD parallelism.
Yeah, size matters there. If everybody gets the same amount
of machinery, though, then I wonder.
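For reference, the cellular-automaton case Ray mentions, as a Python
sketch (rule 90, picked only for simplicity): every cell's update is
independent of every other's in the same generation, which is exactly
what lets a hypercube or any SIMD machine do the whole row in one
lockstep.

# 1D cellular automaton, rule 90: each cell becomes the XOR of its
# two neighbors (with wraparound). All cells update synchronously and
# independently, so the row parallelizes trivially.

def rule90_step(cells: list[int]) -> list[int]:
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = rule90_step(row)
    print(row)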
>I'd expect the SPARC to run generally faster than the MC68000
>or the 386, because the simpler CPU design, even in the same
>manufacturing technology, won't produce as much heat, so it
>can be pushed faster. For much the same reason, the MC68000
>will likely run a little bit faster (not much) than the 386.
>But I don't think there's a *task* per se that distinguishes
>these processors - they're all single-tasking stack-oriented
>CPUs with a general instruction set. The SPARC gets a bit of
>speed from being a RISC architecture, but they're basically
>alike.
Interesting, although I was kinda thinking more in terms of
one comparison-wide clock. But then there's unclocked
stuff :o) I think the 386 might beat a 68k at the same clock
due to code density, which seems pretty good to me on the 386
for a register machine.
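A back-of-the-envelope illustration of that code-density hunch, using
a few standard encodings (byte counts taken from the 386 and 68000
instruction formats as I remember them; worth double-checking):

# Sizes in bytes for a tiny, illustrative instruction sequence.
seq = [
    # operation,              i386,                  68000
    ("increment register",    ("inc eax", 1),        ("addq.l #1,d0", 2)),
    ("load 32-bit constant",  ("mov eax, imm32", 5), ("move.l #imm32,d0", 6)),
    ("add memory to reg",     ("add eax, [ebx]", 2), ("add.l (a0),d0", 2)),
]
i386_total = sum(op[1][1] for op in seq)
m68k_total = sum(op[2][1] for op in seq)
print(f"i386: {i386_total} bytes, 68000: {m68k_total} bytes")  # 8 vs 10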
>I'm not familiar enough with the other CPU's to say much about
>them, but VLIW machines will be able to deal with a lot of
>different models of computation without choking -- for example,
>if you use a non-stack program model, your performance on the
>SPARC/68000/386 would degrade noticeably, but it wouldn't
>make a difference to the VLIW machine.
>
Sorry to have omitted VLIW in my previous post :o)
Rick Hohensee