From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.compilers
Date: Fri, 03 Feb 2023 11:44:03 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: <Adkz+TvWa4zLl8W9Qd6ovtClKZpZrA==> 23-01-078 23-02-001 23-02-007
Keywords: UNCOL, architecture, comment
Posted-Date: 03 Feb 2023 13:17:01 EST
gah4 <gah4@u.washington.edu> writes:
>I have wondered, though, about a standardized intermediate
>for a processor family. One could write compilers to generate it,
>and then updated processors come along with updated code
>generators. Or even distribute intermediate code, and the
>installer generates the right version for the processor.
>
>This would have been especially useful for Itanium, which
>(mostly) failed due to problems with code generation.
I dispute the latter claim. My take is that IA-64 failed because the
original assumption that in-order performance would exceed OoO
performance was wrong. OoO processors surpassed in-order CPUs: they
achieved higher clock rates (my guess is that this is because they have
smaller feedback loops), and they benefit from better branch
prediction, whose speculation now spans the 512-instruction reorder
buffers of recent Intel CPUs, far beyond what compilers can achieve on
IA-64. The death knell for IA-64's competitiveness was the introduction
of SIMD instruction-set extensions, which let OoO CPUs surpass IA-64
even on the vectorizable code where IA-64 had been competitive.
>Since the whole idea is that the processor depends on the
>code generator doing things in the right order. That is, out
>of order execution, but determined at compile time. Failure
>to do that meant failure for the whole idea.
But essentially all IA-64 CPUs that were sold were variations of the
McKinley microarchitecture as far as performance characteristics are
concerned, especially during the time when IA-64 was still perceived
as relevant. The next microarchitecture, Poulson, was only released in
2012, when IA-64 had already lost.
>[Someone comes up with an intermediate language that works for a few
>source languages and a few targets, and usually publishes a paper
>about his breakthrough. Then people try to add more front ends and
>back ends, the intermediate language gets impossibly complex and
>buggy, and the project is quietly forgotten. I'd think the back end
>problem is a lot easier now than it was in 1958 since everything is
>two's complement arithmetic in 8-bit bytes
Yes. Computer architecture converges. Twenty years ago we still had to
worry about alignment and byte ordering; nowadays alignment has been
settled in general-purpose CPUs (no alignment restrictions), and byte
ordering has mostly settled on little-endian (the exception being
s390/s390x).
>If you don't push it too far it's certainly possible to do a lot of
>work in a largely machine-independent way as LLVM does. -John]
LLVM mostly supports what the hardware supports, but there is at least
one exception: LLVM IR divides the code into functions, which you can
only call, with LLVM handling the calling convention. When someone
needs to deviate from the calling convention, as the Haskell/C--
people did, LLVM provides some options, but from what I have read,
they had a hard time pulling it off.
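For a concrete picture, here is a minimal LLVM IR sketch of the
compromise that eventually landed for GHC (the function and parameter
names are invented; the opaque-pointer syntax assumes a recent LLVM):
a dedicated ghccc calling convention that pins the STG virtual
registers (Sp, Hp, ...) to fixed machine registers, with guaranteed
tail calls standing in for the jumps that C-- wants:

    ; a continuation compiled elsewhere
    declare ghccc void @stg_continuation(ptr, ptr, ptr)

    define ghccc void @stg_block(ptr %base, ptr %sp, ptr %hp) {
    entry:
      ; control transfer is a guaranteed tail call, i.e. a jump
      tail call ghccc void @stg_continuation(ptr %base, ptr %sp, ptr %hp)
      ret void
    }

The convention itself is baked into LLVM; a front end that needs yet
another register assignment is back where the Haskell people started.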
- anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/
[I'm with you on IA-64 and VLIW. I knew the Multiflow people pretty
well, and it is evident in retrospect that while VLIW was a good fit
for what you could build in the 1980s, hardware soon reached the point
where it could do what VLIW compilers had done, only better. -John]