From: mayan@watson.ibm.com (Mayan Moudgill)
Newsgroups: comp.compilers
Date: 4 Aug 1998 22:20:02 -0400
Organization: IBM_Research
References: 98-08-011 98-08-023
Keywords: C, C++, performance, comment
Bob Morgan <bob.morgan@digital.com> wrote:
> ...
>You can make the argument that a C++ compiler will be more complex,
>that it will take longer to get it to production quality, and it will
>take more resources. But the resources must be spent or the OO
>optimizations will not pay off.
[This is a follow-up to this and most of the other responses.] I think
a quick summary of the various responses I have seen is: people are
_not_ building C compilers any longer. Instead, they are building
compiler suites, or at least C++ compilers, which treat C as a special
case. In either case, C does not get anything over the other
compilers. And, of course, it is more complicated to get these
multiple compilers right, but people choose to spend their resources
there.
My comment: I recognize (and probably agree with) the trend; it means
that in practice, C compilers are being restricted to the level of the
other compilers. It would be interesting to see what would happen if
somebody said: let's write _just_ a C compiler, and throw the same
kinds of resources at it.
>A side comment: have you ever wondered when the technology was
>available to build an optimizing compiler as effective as the
>compilers available today? I claim that all the technology needed to
>build a parallelizing, vectorizing, compiler for RISC computers was
>available in 1972. (well maybe not trace scheduling). At that point
>instruction scheduling was known, parallelization was known (ILLIAC
>IV), optimization was known, register allocation was known, etc.
>
>Then, what have we been doing for the last 26 years? We have been
>building simpler models and algorithms to make the compilers smaller
>and more bug free.
Actually, we have been waiting for machines (memory size + CPU speed)
to get to the point where we don't have to take short-cuts to keep
compile times down, and where we can actually afford to keep and
manipulate large graphs in memory. (Think of what you can do with
64 MB; now try doing it in 64 KB.)
You couldn't afford to do IPA (interprocedural analysis) or even
whole-function analysis with small memories, even if you knew how to
do it in theory.
(Of course, to contradict myself, the original IBM FORTRAN 66 compiler
probably had some flavor of most of the intra-function optimizations
that are commonly in use today.)
>This point feeds directly into your question. Today an optimizing C++
>compiler will be bigger than a C compiler and harder to develop;
>however, the role of compiler research is to build simpler algorithms
>with the same effectiveness so that the C++ compiler will eventually
>be smaller and use simpler algorithms.
Smaller than current C++ compilers - maybe, if they don't keep
bloating the language; smaller than an equivalent C-only compiler - I
wonder.
Mayan
--
| Mayan Moudgill
| mayan@watson.ibm.com
[Fortran H did a lot of swell optimizations, but you only got the full
benefit on fairly small routines. -John]