Related articles
How Smart Can We Afford to be? jjones@uiuc.edu (1992-02-10)
Re: How Smart Can We Afford to be? preston@dawn.cs.rice.edu (1992-02-12)
Re: How Smart Can We Afford to be? metzger@bach.convex.com (1992-02-12)
Re: How Smart Can We Afford to be? bill@hcx2.ssd.csd.harris.com (1992-02-13)
Re: How Smart Can We Afford to be? idacrd!desj@uunet.uu.net (1992-02-24)
Newsgroups: comp.arch,comp.compilers
From: preston@dawn.cs.rice.edu (Preston Briggs)
Keywords: architecture, design
Organization: Rice University, Houston
References: 92-02-046
Date: Wed, 12 Feb 1992 06:59:28 GMT
In article 92-02-046 jjones@uiuc.edu writes:
>6. Can sophisticated optimizing compilers be built correctly in a timely
> fashion?
Many people say "absolutely not", mainly because of the correctness
requirement. I guess there are a lot of examples that suggest it's hard.
Is it harder than other interesting software? I'm not sure.
The definition of "timely" is also important here. There are recent
interesting compilers from IBM and Convex. How "timely" were they?
For IBM's xlc and xlf compilers, I assume they were timely in the sense that
they didn't delay the introduction of the RS/6000. Were they cheap? Probably
not, if we consider how much money IBM has spent on compiler research.
I expect Convex has spent far less developing their compiler system (how
could they possibly outspend IBM?). Was it developed in a "timely"
fashion? I don't think we (outside of Convex) can really tell. It's a
new (or enhanced) compiler for an existing machine.
Generally, though, I think it _is_ possible to develop high-quality
optimizing compilers on a budget. The key is to avoid getting bogged down in
research, and you probably want a very strong leader who can see the project
as a whole, end to end.
>7. What are the tradeoffs in writing a compiler that takes advantage of
> lots of registers, versus a compiler that does a good job of instruction
> scheduling and taking advantage of a pipeline?
There are people who understand how to build good global register
allocators, at least for certain classes of machines.
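For a flavor of what a "good global register allocator" does, here's a
minimal sketch of graph-coloring allocation (Python, purely illustrative:
the interference graph, the spill heuristic, and the example register names
are all made up, and a real allocator would build the graph from liveness
analysis and actually insert spill code):

    # Sketch of graph-coloring register allocation (simplify/select).
    # Assumes the interference graph is already built and symmetric.
    def color(interference, k):
        """interference: virtual register -> set of interfering registers;
        k: number of physical registers available."""
        graph = {n: set(adj) for n, adj in interference.items()}
        stack = []

        # Simplify: repeatedly remove a node with fewer than k neighbors.
        # If none exists, optimistically remove a spill candidate anyway.
        while graph:
            node = next((n for n in graph if len(graph[n]) < k), None)
            if node is None:
                node = max(graph, key=lambda n: len(graph[n]))
            stack.append((node, graph.pop(node)))
            for others in graph.values():
                others.discard(node)

        # Select: pop nodes, giving each the lowest color its already-colored
        # neighbors don't use; nodes with no free color become spills.
        coloring, spills = {}, []
        for node, neighbors in reversed(stack):
            used = {coloring[n] for n in neighbors if n in coloring}
            free = [c for c in range(k) if c not in used]
            if free:
                coloring[node] = free[0]
            else:
                spills.append(node)   # would need spill code and another pass
        return coloring, spills

    # Example: three virtuals, two physical registers.
    regs, spilled = color({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}, k=2)
    # regs == {'c': 0, 'b': 1, 'a': 0}, spilled == []

The simplify/select split is what keeps this cheap: any node with fewer than
k neighbors can always be colored, so it can be set aside immediately.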
I think scheduling is a less well-understood problem. Certainly many people
can build a scheduler that works over basic blocks. There are also various
forms of software pipelining for loops, and trace scheduling or global
compaction for whole procedures. The fancier techniques are still very
researchy.
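The basic-block case really is straightforward. Here's a sketch of a greedy
list scheduler (again Python, again illustrative: the latencies, the
dependence edges, and the one-instruction-per-cycle machine are assumptions,
not any particular pipeline):

    # Sketch of list scheduling over one basic block.  Assumes the
    # dependences form a DAG built beforehand from the instructions.
    def list_schedule(latency, deps):
        """latency: instruction -> cycles; deps: instruction -> set of
        instructions whose results it must wait for."""
        ready_time = {}                  # cycle at which each result is ready
        schedule, remaining = [], set(latency)
        cycle = 0
        while remaining:
            # Instructions whose predecessors have all completed by now.
            ready = [i for i in remaining
                     if all(p in ready_time and ready_time[p] <= cycle
                            for p in deps.get(i, ()))]
            if ready:
                # Simple priority: issue the longest-latency ready instruction
                # (one instruction per cycle on this hypothetical machine).
                pick = max(ready, key=lambda i: latency[i])
                schedule.append((cycle, pick))
                ready_time[pick] = cycle + latency[pick]
                remaining.remove(pick)
            cycle += 1
        return schedule

    # Example: a load feeding an add, with an independent multiply
    # issued to hide some of the load latency.
    sched = list_schedule(latency={'load': 3, 'mul': 2, 'add': 1},
                          deps={'add': {'load'}})
    # sched == [(0, 'load'), (1, 'mul'), (3, 'add')]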
Tradeoffs? It's hard to do without either. Most traditional optimizations
(including scheduling) assume a good register allocator and an adequate
register set; it's a necessary separation of concerns. But lack of adequate
scheduling can cost integer factors in performance on some machines (say,
the i860), especially for scientific code.
Of course, I tend to think of register allocation as the keystone of any
optimizing compiler, so take this with an appropriate grain of salt.
Preston Briggs