Re: A theoretical question or two



From: Ray Dillinger <bear@sonic.net>
Newsgroups: comp.compilers
Date: 30 Jan 2002 20:43:44 -0500
Organization: Compilers Central
References: 02-01-154
Keywords: theory
Posted-Date: 30 Jan 2002 20:43:44 EST

Rick Hohensee wrote:
>
> Is there a metric for computer performance in terms of the
> state-change throughput?


Not that I'm aware of. I don't even know if it would be
meaningful.


> Does greater locality of reference imply a smaller state-machine?


No. An array where each element refers only to its neighbors
has excellent locality of reference, but it can be as long as
you care to make it. OTOH, a smaller state machine does imply
greater locality of reference, because its whole memory set is
small enough to stay local.
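
To make that concrete, here is a rough C sketch of the kind of
neighbor-only array update I have in mind (illustration only; the
function name is made up):

    #include <stddef.h>

    /* Each step reads only a[i-1], a[i], and a[i+1], so the working
       set at any instant is tiny and the accesses are sequential
       (excellent locality), even though the array (the "state") can
       be made as long as you like. */
    void smooth(const double *a, double *out, size_t n)
    {
        for (size_t i = 1; i + 1 < n; i++)
            out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0;
    }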




> This interests me because Forth code is supposed to have "poor"
> locality of reference, and because I am wondering about how one might
> "enrich the fuel through the Von Neumann Bottleneck" by radical
> state-changes, and what the performance bounds might be, and if they
> might be knowable.
>
> To put it in a more subjective form, is a white-noise generator a more
> powerful computer than a pink-noise generator? When do you know a
> particular computer is putting out the least pink (least non-random)
> noise it can? At what point is computational (noise) whiteness not
> useful?
>
> Another question,
>
> Consider some unifying level of implementation that all digital
> computers can be implemented at for inter-architecture
> comparison. Let's say you have a wide range of architectures
> implemented in that one technology. Let's say you have a large room
> with a couple of stack machines, a hypercube, a ccNUMA, a 386, a CRAY,
> a PDP, a SPARC, an MC68000, a finite Turing machine, etc., all with
> the same physical type of logic gates and memory. Is there an
> algorithm for each machine that will run no better on any other
> machine? What if each basic design is allowed the same overall number
> of gates and memory bits? Which machines are most divergent as to
> what they do well?


The hypercube will excel at simulations and physical modeling,
because those are implementable as cellular automata, which can
be handled with SIMD-style parallelism.
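
As an illustration (my own sketch, with a made-up function name, not
anything from the original question): one synchronous step of a 1-D,
two-state cellular automaton in C. Every cell depends only on its
immediate neighbors, so all cells can be updated in lockstep, and a
hypercube can give each node a slice of cells and exchange only the
boundary values between neighboring nodes:

    #include <stddef.h>

    /* rule[] is an 8-entry lookup table indexed by the (left, center,
       right) neighborhood; cur[] and next[] hold 0/1 cell states, and
       the array wraps around at the ends. */
    void ca_step(const unsigned char *cur, unsigned char *next,
                 size_t n, const unsigned char rule[8])
    {
        for (size_t i = 0; i < n; i++) {
            unsigned left  = cur[(i + n - 1) % n];
            unsigned right = cur[(i + 1) % n];
            next[i] = rule[(left << 2) | (cur[i] << 1) | right];
        }
    }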


I'd expect the SPARC generally to run faster than the MC68000
or the 386, because the simpler CPU design, even in the same
manufacturing technology, won't produce as much heat, so it
can be clocked faster. For much the same reason, the MC68000
will likely run a little faster (not much) than the 386.
But I don't think there's a *task* per se that distinguishes
these processors - they're all single-tasking stack-oriented
CPUs with a general-purpose instruction set. The SPARC gets a
bit of speed from being a RISC architecture, but they're
basically alike.


I'm not familiar enough with the other CPUs to say much about
them, but VLIW machines will be able to deal with a lot of
different models of computation without choking -- for example,
if you use a non-stack program model, your performance on the
SPARC/68000/386 would degrade noticeably, but it wouldn't
make a difference to the VLIW machine.
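
To show roughly what I mean by program models (a toy example of mine,
not a measurement of any real machine): a stack model has to evaluate
(a*b) + (c*d) as one serial sequence of pushes and operations, while a
wide-issue machine can name the two products separately and, since
they are independent, schedule both multiplies in the same cycle:

    /* A stack model would do: push a; push b; mul; push c; push d;
       mul; add, strictly one operation after another.  On a VLIW
       machine the two multiplies below have no dependence on each
       other, so they can be issued together. */
    double f(double a, double b, double c, double d)
    {
        double t1 = a * b;   /* independent of t2 */
        double t2 = c * d;   /* independent of t1 */
        return t1 + t2;
    }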


Bear

