Question: "Zuse's Device" firstname.lastname@example.org (Louis Glassy) (1999-04-01)
Re: Question: "Zuse's Device" email@example.com (1999-04-03)
Re: Question: "Zuse's Device" firstname.lastname@example.org (Daniel Guerrero Miralles) (1999-04-10)

From: Daniel Guerrero Miralles <email@example.com>
Date: 10 Apr 1999 18:14:07 -0400
Organization: Universitat Politecnica de Catalunya
Louis Glassy wrote:
> For concreteness, suppose you had a big-endian machine with two's
> complement integer arithmetic, with 8-bit registers. A high bit
> of '0' is a nonnegative value, and a high bit of '1' is a negative
> value. Assume all variables are 8-bit integers.
> # get all the bits of k.
> m = ((!k7) & 0x01) & (k6 | k5 | k4 | k3 | k2 | k1 | k0)
> # now calculate what 'a' must be...
> a = a + (m * 1)
> Does this hackery have a name? In high school (1981?), we used to
> do this with 6502 machine code. So this is not rocket science.
> Konrad Zuse apparently knew about it when he was programming
> his Z4 computer in the forties. So this idea is not new, either.
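[The quoted recipe can be written out directly in C. This is a minimal sketch, assuming 8-bit two's-complement values held in a uint8_t; the names positive_flag and add_one_if_positive are illustrative, not from the original post. -ed]

```c
#include <stdint.h>

/* 1 if k (8-bit two's complement) is strictly positive, else 0,
   computed with bitwise operations only -- no comparison, no branch. */
static uint8_t positive_flag(uint8_t k)
{
    uint8_t not_negative = ((k >> 7) & 1u) ^ 1u;  /* (!k7) & 0x01 */
    uint8_t t = k & 0x7Fu;                        /* bits k6..k0 */
    t |= t >> 4;                                  /* OR-fold the low bits */
    t |= t >> 2;
    t |= t >> 1;
    return not_negative & (t & 1u);               /* (!k7) & (k6|...|k0) */
}

/* a = a + (m * 1): add 1 to a exactly when k > 0, without a branch. */
static uint8_t add_one_if_positive(uint8_t a, uint8_t k)
{
    return (uint8_t)(a + positive_flag(k));
}
```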
No, it is not new. And you surely know (but may not remember) that
digital circuits are designed this way. This is the standard way of
building functions out of binary operations (AND, OR, NAND, etc.). You
can design any operation with this approach; the canonical method is
to use Karnaugh maps.
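[As a concrete illustration of the Karnaugh-map approach (my example, not from the post): the carry-out of a one-bit full adder is 1 exactly when at least two of the three inputs are 1, and a Karnaugh map over that truth table reduces to three product terms ORed together. -ed]

```c
/* Carry-out of a one-bit full adder. The Karnaugh map over the
   truth table "carry = 1 when at least two of a, b, cin are 1"
   reduces to this sum of three product terms. Inputs are 0 or 1. */
static unsigned carry_out(unsigned a, unsigned b, unsigned cin)
{
    return (a & b) | (a & cin) | (b & cin);
}
```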
Because digital circuits are essentially parallel, this is very
efficient when implemented in hardware, but software implementations
tend to be less efficient except in simple cases (software is
essentially sequential, and for a large number of bits you waste more
time in the general case). Also, this technique makes programs larger,
which hurts performance on today's cached machines.
[Depends on how deep your pipeline is. On a really deeply pipelined
machine, it can easily be faster to do six sequential instructions
than one branch. -John]