Re: PALM challenge

gah4 <>
Mon, 3 Oct 2022 04:37:01 -0700 (PDT)

          From comp.compilers

Related articles
PALM challenge (Steve Lewis) (2022-10-01)
Re: PALM challenge (gah4) (2022-10-01)
Re: PALM challenge (Thomas Koenig) (2022-10-02)
Re: PALM challenge (gah4) (2022-10-03)

From: gah4 <>
Newsgroups: comp.compilers
Date: Mon, 3 Oct 2022 04:37:01 -0700 (PDT)
Organization: Compilers Central
References: 22-10-001 22-10-011
Keywords: history, translator, comment
Posted-Date: 03 Oct 2022 15:17:47 EDT
In-Reply-To: 22-10-011

On Sunday, October 2, 2022 at 5:58:33 PM UTC-7, Thomas Koenig wrote:

> Interesting having 16-bit integers but only an 8-bit ALU, where
> a carry for addition is added to the upper byte, would need a few
> instructions for a 16-bit addition. It would be straightforward
> to write out such an instruction sequence each time a 16-bit
> addition was required, though.
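The instruction sequence the quote describes can be sketched in C. This is a hypothetical illustration, not PALM code: a 16-bit addition built from two 8-bit adds, with the carry out of the low byte fed into the high-byte add.

```c
#include <stdint.h>

/* Sketch (not actual PALM code): a 16-bit add done as two 8-bit adds,
   the way a machine with only an 8-bit ALU must sequence it. */
static uint16_t add16_via_8bit(uint16_t a, uint16_t b)
{
    uint8_t alo = (uint8_t)a, blo = (uint8_t)b;
    uint8_t lo  = (uint8_t)(alo + blo);
    uint8_t carry = lo < alo;        /* 8-bit sum wrapped -> carry out */
    uint8_t hi  = (uint8_t)((a >> 8) + (b >> 8) + carry);
    return (uint16_t)(hi << 8 | lo);
}
```

The same pattern, iterated byte by byte, is how the 360/30 and /40 mentioned below get 32-bit (and even 64-bit floating-point) results from an 8-bit ALU.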

The small ALU was usual at the time. The 360/30 and 360/40 both
have an 8-bit ALU, but implement the usual 32-bit S/360; it just
takes more microinstructions, and so more time. They even
implement the S/360 double-precision 64-bit hexadecimal
floating point with that 8-bit ALU.

The 360/20 has a four-bit ALU that can only add or subtract 1.
Binary or BCD 4-bit arithmetic is done in a microcode subroutine
loop, which is then called to implement larger operations.
It includes the full S/360 decimal instruction set, with operands
of up to 31 digits, including multiply and divide, but no 32-bit
binary instructions and no floating point.
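The increment-only scheme is worth seeing spelled out. Here is a hypothetical sketch of how a single decimal-digit add can be built from nothing but "add 1", with a base-10 carry out, the kind of loop the 360/20's microcode subroutine would run per digit:

```c
#include <stdint.h>

/* Hypothetical sketch: decimal digit addition using only an
   increment, as on an ALU that can only add or subtract 1.
   A microcode loop calls this once per digit position, feeding
   each carry into the next digit. */
static void add_bcd_digit(uint8_t *digit, uint8_t addend, uint8_t *carry)
{
    while (addend--) {
        (*digit)++;          /* the only arithmetic the ALU can do */
        if (*digit == 10) {  /* decimal wrap: generate a carry */
            *digit = 0;
            (*carry)++;
        }
    }
}
```

Chaining 31 of these digit positions together gives the multi-digit decimal adds the text describes; multiply and divide are further loops on top of that.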

As far as I know, the ALU carries parity all the way through: given
two 8-bit operands with parity, it generates the 8-bit sum, or other
result, with correct parity.
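For readers unfamiliar with parity-checked datapaths: each byte travels with one extra bit chosen so the total count of one-bits is odd (S/360 hardware used odd parity). The ALU generated the result's parity bit in hardware alongside the sum; this sketch just shows what that bit is for a given byte:

```c
#include <stdint.h>

/* Sketch: the odd-parity bit for a byte. The bit is chosen so that
   the nine bits together always contain an odd number of ones,
   letting any single-bit error be detected. */
static uint8_t odd_parity(uint8_t b)
{
    uint8_t p = 1;                /* start at 1: empty byte needs a 1 */
    for (int i = 0; i < 8; i++)
        p ^= (b >> i) & 1;        /* flip for every one-bit in the byte */
    return p;                     /* 1 if the byte has an even number of ones */
}
```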

> [IBM programmed the PALM to simulate most of S/360 to run APL\360 and
> the System/3 mini to run the BASIC interpreter, both I assume hand
> coded in assembler. It was a tour de force at the time. I agree that
> generating code for C doesn't look hard, just tedious. -John]

The description I thought I remembered was that, instead of
interpreting the user-level instructions as is usual for microcoded
machines, they are compiled into microcode. If you have a microcode
subroutine call instruction, you write the code for the larger
operations once, and then call the micro-subroutine.
[Pretty sure the 5100 didn't do that due to its modest speed and memory.
There are certainly modern JIT translators. Apple's Rosetta works that way.
Fun fact: 360/30 long floating divide took 1.6ms and I don't mean us. -John]
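Whatever the 5100 actually did, the contrast drawn above can be sketched in C. This is a toy illustration, not PALM microcode: a "compiled" program is a sequence of pointers to prewritten micro-subroutines, so execution just calls through the sequence with no per-execution decode step, where an interpreter would re-decode each instruction every time:

```c
/* Toy sketch of translate-to-microcode: decode once into an array of
   calls to prewritten micro-subroutines, then run the array directly. */
typedef void (*microop)(void);

static int acc;                          /* a one-register toy machine */
static void m_load5(void)  { acc = 5; }
static void m_double(void) { acc += acc; }

static int run(microop *program)
{
    acc = 0;
    for (microop *pc = program; *pc; pc++)
        (*pc)();                          /* no decode, just call */
    return acc;
}
```

An interpreter would instead keep the user-level opcodes in memory and switch on each one inside the execution loop; the translation scheme pays the decode cost once, up front.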
