Related articles:
A theoretical question or two  rickh@capaccess.org (2002-01-28)
Re: A theoretical question or two  bear@sonic.net (Ray Dillinger) (2002-01-30)
Re: A theoretical question or two  rickh@capaccess.org (2002-02-06)
Re: A theoretical question or two  bear@sonic.net (Ray Dillinger) (2002-02-28)
From: rickh@capaccess.org (Rick Hohensee)
Newsgroups: comp.compilers
Date: 28 Jan 2002 01:13:49 -0500
Organization: http://groups.google.com/
Keywords: theory, question
Posted-Date: 28 Jan 2002 01:13:49 EST
Is there a metric for computer performance in terms of state-change
throughput?
Does greater locality of reference imply a smaller state machine?
This interests me because Forth code is supposed to have "poor"
locality of reference, and because I am wondering how one might
"enrich the fuel through the Von Neumann Bottleneck" with radical
state-changes, what the performance bounds might be, and whether they
are knowable.
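For concreteness, one crude candidate metric (my own sketch, not an
established benchmark) would count the state bits that actually flip
per step, i.e. the Hamming distance between successive machine states,
divided by elapsed time. In Python:

    def flipped_bits(prev_state, next_state):
        # Hamming distance between two states encoded as integers.
        return bin(prev_state ^ next_state).count("1")

    def state_change_throughput(trace, seconds):
        # Total bits flipped across a trace of states, per second.
        total = sum(flipped_bits(a, b) for a, b in zip(trace, trace[1:]))
        return total / seconds

    # Three steps of a made-up 8-bit machine at a notional 1 GHz clock.
    trace = [0b00000000, 0b11110000, 0b11111111, 0b00000000]
    print(state_change_throughput(trace, seconds=3e-9))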
To put it in a more subjective form, is a white-noise generator a more
powerful computer than a pink-noise generator? How do you know when a
particular computer is putting out the least pink (i.e., most nearly
random) noise it can? At what point does computational (noise)
whiteness stop being useful?
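There is at least one standard number for "whiteness": spectral
flatness (the Wiener entropy), the ratio of the geometric to the
arithmetic mean of the power spectrum. A minimal sketch, treating a
machine's output as a sampled signal (the signals below are synthetic
stand-ins, not real machine output):

    import numpy as np

    def spectral_flatness(x, nseg=16):
        # Geometric over arithmetic mean of an averaged power spectrum.
        # Near 1 for white noise, much lower for correlated signals.
        n = len(x) // nseg * nseg
        segs = x[:n].reshape(nseg, -1)
        psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)[1:]
        return np.exp(np.mean(np.log(psd))) / np.mean(psd)

    rng = np.random.default_rng(0)
    white = rng.standard_normal(1 << 14)
    brown = np.cumsum(white)  # integrated white noise: a 1/f^2 spectrum
    print(spectral_flatness(white), spectral_flatness(brown))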
Another question:
Consider some unifying level of implementation at which all digital
computers can be built, for inter-architecture comparison. Let's say
you have a wide range of architectures implemented in that one
technology. Let's say you have a large room with a couple of stack
machines, a hypercube, a ccNUMA, a 386, a CRAY, a PDP, a SPARC, an
MC68000, a finite Turing machine, etc., all with the same physical
type of logic gates and memory. Is there, for each machine, an
algorithm that will run no better on any other machine? What if each
basic design is allowed the same overall number of gates and memory
bits? Which machines are most divergent as to what they do well?
Rick Hohensee