Subject: A theoretical question or two
From: email@example.com (Rick Hohensee)
Date: 28 Jan 2002 01:13:49 -0500
Posted-Date: 28 Jan 2002 01:13:49 EST
Is there a metric for computer performance in terms of the randomness of a
machine's state changes? Does greater locality of reference imply a smaller
state machine?
This interests me because Forth code is supposed to have "poor" locality of
reference, and because I am wondering how one might "enrich the fuel through
the Von Neumann Bottleneck" by radical state changes, what the performance
bounds might be, and whether they might be knowable.
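One concrete way to put a number on "locality" is the Shannon entropy of a
machine's access-stride distribution: a tight loop revisiting a small working
set yields a concentrated, low-entropy stride distribution, while scattered
accesses look closer to white noise. A minimal sketch (the traces here are
invented purely for illustration):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits/symbol) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def delta_entropy(trace):
    """Entropy of the stride (delta) distribution of an address trace."""
    deltas = [b - a for a, b in zip(trace, trace[1:])]
    return shannon_entropy(deltas)

# High locality: accesses cycle through a tiny working set,
# so only a few distinct strides occur.
local_trace = [1000, 1001, 1002, 1000, 1001, 1002, 1003, 1000, 1001]
# Scattered accesses: every stride is distinct, entropy is maximal.
scattered_trace = [1000, 5017, 220, 9304, 61, 7777, 3141, 42, 8888]

print(delta_entropy(local_trace))      # low: few distinct strides
print(delta_entropy(scattered_trace))  # high: all strides distinct
```

By this measure, "poor" locality is simply a flatter, higher-entropy stride
distribution; it says nothing yet about whether that makes the machine more
or less powerful, which is the open question.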
To put it in a more subjective form: is a white-noise generator a more
powerful computer than a pink-noise generator? How do you know when a
particular computer is putting out the least pink (least non-random) noise
it can? At what point is computational (noise) whiteness not attainable?
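Compressibility is one practical proxy for how "white" a stream is: a truly
random stream is incompressible, while pink or otherwise structured output
compresses well. A rough sketch using zlib as the whiteness meter (the inputs
and the ratio interpretation are illustrative, not a proposed formal metric):

```python
import os
import zlib

def whiteness(data: bytes) -> float:
    """Compressed-to-original size ratio: near 1.0 means effectively
    incompressible (white), well below 1.0 means structured (pink-ish)."""
    return len(zlib.compress(data, 9)) / len(data)

white = os.urandom(4096)   # maximally unpredictable stream
pink = bytes(4096)         # all zeros: fully predictable, extreme "pinkness"

print(whiteness(white))    # near 1.0
print(whiteness(pink))     # far below 1.0
```

This only bounds whiteness from below (a stream zlib cannot compress might
still be compressible by a smarter model), which mirrors the question above:
knowing you have reached the *least* pink output a machine can emit is the
hard part.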
Consider some unifying level of implementation at which all digital
computers can be built, for inter-architecture comparison. Say you have a
wide range of architectures implemented in that one technology: a large room
with a couple of stack machines, a hypercube, a ccNUMA, a 386, a CRAY, a
PDP, a SPARC, an MC68000, a finite Turing machine, etc., all with the same
physical type of logic gates and memory. Is there an algorithm for each
machine that will run no better on any other machine? What if each basic
design is allowed the same overall number of gates and memory bits? Which
machines are most divergent as to what they do well?