Newsgroups: comp.compilers
From: andrew@rentec.com (Andrew Mullhaupt)
Keywords: registers, optimize
Organization: Renaissance Technologies Corp., Setauket, NY.
References: 92-05-123 92-06-006
Date: Wed, 3 Jun 1992 14:53:00 GMT
preston@dawn.cs.rice.edu (Preston Briggs) writes:
>only a little. In your example of IMSL routines, I would expect that call
>overhead (including register store and reloads) is completely swamped by
>the cost of floating-point computations (i.e., useful work).
Don't be so sure. I wouldn't be aware of the problem _as_ a problem unless
I had examples where this wasn't true. In particular, it happens often
when you are control bound in the caller and the decisions depend on a
large and highly structured data object in the caller. This arises
naturally when you are developing multidimensional adaptive algorithms. A
simple case is a multidimensional adaptive integration which calls a
FORTRAN-coded local integration routine. You call the local integrator,
and then it calls you back many (15 in the case of QK15) times. For each
of these callbacks you do one FLOP, excluding whatever the function itself
takes, yet often the integral of even a small function is complicated
enough to pay the freight. If you're integrating off of interpolated data,
the individual function calls will be quite small, but this will be (as
far as I can tell) the only economical way to proceed.
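(To make the call pattern concrete, here is a rough C sketch of the
QK15-style situation. The names qk15_like() and integrand() are made up,
and the real Gauss-Kronrod coefficients are omitted; this illustrates the
callback traffic, not the actual QUADPACK code.)

#include <stdio.h>

/* Sketch of a QK15-style local integrator driving a user callback.
   The real rule uses 15 fixed abscissae and weights; here we only
   show the call pattern. */
static double qk15_like(double (*f)(double, void *), void *ctx,
                        double a, double b)
{
    double sum = 0.0;
    double h = (b - a) / 15.0;
    for (int i = 0; i < 15; i++) {
        /* One call per abscissa: the call/return overhead (including
           any register saves and reloads) is paid 15 times per panel,
           even when the integrand body is a single flop. */
        sum += f(a + (i + 0.5) * h, ctx);
    }
    return sum * h;
}

/* Integrand over interpolated data: essentially one flop per call. */
static double integrand(double x, void *ctx)
{
    double *table = ctx;
    return table[(int)x];          /* trivially cheap body */
}

int main(void)
{
    double table[16] = {0};
    printf("%g\n", qk15_like(integrand, table, 0.0, 15.0));
    return 0;
}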
The other easy way to get interface bound occurs when you stick a new OS
feature onto old code. In order to be able to deal with large problems, we
recently replaced the traditional APL component file system in the
commercial interpreter we use with an 'auxiliary processor' which uses
memory mapping and is capable of using sparse files. Despite the fact that
arguments are copied by the APL interpreter (and we can't do anything
about it either), there is still a performance benefit, as well as the
ability to touch huge amounts of data without the APL interpreter growing
steadily and slowing down from paging. So this is already a big
performance win, but the thing that limits performance is the argument
passing. This limitation (unlike the previous case) applies to all the
code which uses this interface, and that will be quite a bit of what gets
done in APL from now on. In other words, we're going to be wasting a lot
of cycles on argument passing.
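(Roughly, the memory-mapping side of that auxiliary processor amounts to
something like the following C sketch. The file name and layout are
invented, and this is not the actual code; the point is only that pages of
a possibly sparse file are faulted in on demand instead of being copied
into the interpreter's workspace.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("components.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Pages of the (possibly sparse) file are faulted in only when
       touched, so huge files can be used without the process growing. */
    double *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first value: %g\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}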
>... people like
>IMSL provide an interface definition and compilers can use the definition
>to provide checking. More complete specifications would allow
>interprocedural optimization.
Not if you're linking IMSL into, say, the APL interpreter, which you
didn't compile either... I agree that when you can compile, this kind of
definition is probably a good thing. But the more I can 'plug and play',
the less I will be compiling. This is why I like Spackman's approach of
making the OS (and by implication the loader/linker) responsible.
Can an interface spec. _really_ improve compilation of units which are to
be dynamically loaded into an alien host program? I can't see this issue
clearly enough to answer the question.
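(For the 'plug and play' case, the situation looks roughly like this C
sketch using dlopen; the library and symbol names are invented. The host's
compiler never sees the callee, only an untyped address at run time, so
any interface checking has to happen at load time.)

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load a separately compiled unit into a host that was built
       long before the unit existed. */
    void *lib = dlopen("./libquad.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* All the host gets back is an address; the declared interface
       lives only in the host's own source, unchecked against the
       loaded code. */
    double (*integrate)(double, double) =
        (double (*)(double, double))dlsym(lib, "integrate");
    if (!integrate) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    printf("%g\n", integrate(0.0, 1.0));
    dlclose(lib);
    return 0;
}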
>The idea of the OS knowing about an interesting type system seems wrong.
>Levine says the OS could simply compare tags for equality without knowing
>about the types. This only works for the simplest of type systems (and
>where is the set of tags defined?). For more interesting and useful type
>schemes (a la Oberon, ML, ...) you want much more complex approaches.
Yes. Especially if you use ANSI Standard Extended Pascal* schemata, you
may have a lot of tagging to do.
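(Levine's tag-comparison idea, as I read it, amounts to something like the
following C sketch; the tag values and descriptor layout are invented.
Plain equality of tags is all the loader can do, which is exactly why
richer type systems are a problem.)

#include <stdint.h>
#include <stdio.h>

struct param_desc {
    const char *name;
    uint32_t    type_tag;   /* e.g. some hash of the type's definition */
};

/* Loader-side check: works only when type identity is exactly tag
   equality. Structural equivalence, ML-style polymorphism, or Extended
   Pascal schemata (a whole family of types per schema) cannot be
   decided this way. */
static int tags_compatible(const struct param_desc *caller,
                           const struct param_desc *callee)
{
    return caller->type_tag == callee->type_tag;
}

int main(void)
{
    struct param_desc caller = { "x", 0x1234u };
    struct param_desc callee = { "x", 0x1234u };
    printf("compatible: %d\n", tags_compatible(&caller, &callee));
    return 0;
}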
Later,
Andrew Mullhaupt
*The ANSI Standard for Pascal is from 1990. A schema is actually a family
of types, but Pascal functions can accept arguments from a schema, so if
the OS wants to support this, it might require a lot of Pascal compiler
built into the linker. This aside provided by the committee to prevent
total ignorance of Extended Pascal.