Newsgroups: comp.compilers
From: mcdonald@kestrel.edu (Jim McDonald)
Keywords: Lisp, design
Organization: Kestrel Institute, Palo Alto, CA
References: 93-10-032 93-10-147
Date: Wed, 3 Nov 1993 04:06:17 GMT
Robin Popplestone <pop@dcs.gla.ac.uk> writes:
|> on them in the interests of hygiene. LISP, for example, provides almost
|> nothing but pointers - typically only short integers and possibly short
|> floats will not be pointers.
mcdonald@kestrel.edu (Jim McDonald) writes:
|> Some objects have non-pointers as data:
ted@crl.nmsu.edu (Ted Dunning) writes:
|> Uhh... you just proved pop's point. All of the objects you mention do
|> exist in memory, but when you actually write lisp code, the values
|> manipulated are pointers to these objects.
It depends on what you mean by "values manipulated".
Consider this trivial session on a SPARCstation:
> (defun foo (x y) (declare (type double-float x y)) (* (+ x y) (+ x y)))
FOO
> (compile 'foo)
FOO
> (disassemble 'foo)
move.d [%in1 + 2], %f2
move.d [%in0 + 2], %f4
faddd %f4, %f2, %f6
faddd %f4, %f2, %f8
fmuld %f6, %f8, %f10
sethi %hi(#x30000), %u3
or %u3, 866, %u3
jmpl %nra, %sq + 1431 ; NEW-OTHER-FASTERCALL
move 16, %u2
move.d %f10, [%x0 + 2]
move %x0, %in0
jmpl %0, %ra + 8
restore %0, 4, %u0
NIL
The arguments are assumed to be pointers to floats. (For foo to accept
the floats directly would require restrictions on all the callers to foo
as well, and they might have already been compiled before this.) [The
move.d instructions extract the float values given by those typed
pointers.]
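The calling convention described above can be sketched in C terms. This is a
minimal sketch, not the compiler's actual object layout: the `boxed_double`
struct and the `box` helper are hypothetical stand-ins for a tagged heap cell
(the "+ 2" offsets in the disassembly are tag-stripping at such a cell).

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical boxed float: a heap cell holding the value, reached
   through a pointer.  Real Lisp implementations add type tags. */
typedef struct { double value; } boxed_double;

/* Allocate a fresh box -- the role of the sethi/or consing sequence
   in the disassembly above. */
static boxed_double *box(double d) {
    boxed_double *b = malloc(sizeof *b);
    b->value = d;
    return b;
}

/* Like the compiled foo: boxed arguments in, boxed result out, but
   the intermediate sum lives in a machine register (the faddd/fmuld
   part), never in a box. */
static boxed_double *foo(boxed_double *x, boxed_double *y) {
    double s = x->value + y->value;   /* the move.d loads plus faddd */
    return box(s * s);                /* fmuld, then one new box */
}
```

Only the boundary values are boxed; nothing inside the expression is.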
The result is a pointer at a newly created float, for the same reason the
arguments were pointers. [Everything from sethi on is involved in this.]
However, the internal expression manipulates the sums directly as
arguments to the product, since here the compiler does know the complete
context. [I.e. the fmuld uses the faddd results directly.]
So if by "values manipulated", you mean x and y and the result, you are
correct, but if you mean all the intermediate values as well, the session
above is a counter-example. (This example is meager--just one instruction
avoided pointers, but in complex code with loops, etc. the direct
operations would dominate.)
For an example with more complex data, when I type
(proclaim '(type (array double-float) aa))
(setq aa (make-array '(4) ;; i.e., one dimension, of size 4
:element-type 'double-float
:initial-contents '(1.1 2.2 3.3 4.4)))
I get *one* typed pointer, at a structure containing four numbers (as
opposed to a pointer at a vector of four pointers at numbers).
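The contrast between those two representations can be sketched in C (the
layouts and names here are hypothetical, chosen only to show one inline
structure versus a vector of boxed numbers):

```c
#include <assert.h>

/* Untyped representation: one pointer to a vector of four pointers,
   each at a boxed number -- five heap cells in all. */
typedef struct { double value; } boxed_double;
typedef struct { boxed_double *elems[4]; } boxed_vector;

/* Specialized representation, as produced under
   (proclaim '(type (array double-float) aa)): one pointer to a
   structure holding the four doubles inline. */
typedef struct { double elems[4]; } specialized_vector;

/* Elements of the specialized vector are loaded directly, with no
   per-element pointer chasing. */
static double sum_specialized(const specialized_vector *v) {
    double s = 0.0;
    for (int i = 0; i < 4; i++)
        s += v->elems[i];
    return s;
}
```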
As with the code for foo above, an inner product routine on vectors x and
y might create a new vector and fill it by placing float products created
directly from correlated floats in x and y. The only pointers would be
internal index pointers similar to C's p++ construct, to step through the
vectors.
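The routine just described might look like the following C sketch: the result
vector is filled with products of correlated elements, and the only pointers
involved are the stepping pointers themselves (the p++ idiom), never boxes
around intermediate floats.

```c
#include <assert.h>
#include <stddef.h>

/* Fill out[i] with x[i] * y[i], stepping through all three vectors by
   pointer increment; every load, multiply, and store is on unboxed
   machine floats. */
static void vec_mul(const double *x, const double *y,
                    double *out, size_t n) {
    const double *end = x + n;
    while (x < end)
        *out++ = *x++ * *y++;
}
```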
In general, a good lisp compiler should produce only those pointers
required for correctness. The more guarantees you can give it about the
environment, e.g. via type declarations, the more it should be able to
eliminate pointer manipulation, with something a bit more efficient than C
as the limiting case.
In practice, people tend to prototype lisp code with very few
declarations, because they quite rationally prefer to save their time
rather than the computer's, and then most code never needs to be optimized
beyond prototype quality because of the 95/5 rule.
I find it frustrating that such a rational approach, in which programmer
time is optimized on the bulk of the coding cost (the 95% of the code that
runs 5% of the time) and computer time is optimized on the bulk of the
running cost (the 5% of code that uses 95% of the time), leads people to
think the 95% left unoptimized is all there is to lisp. [I say 95/5
instead of 80/20 to include all the prototypes that are run a few times
and then tossed.]
I'll get off my hobby horse now.
--
James McDonald
Kestrel Institute mcdonald@kestrel.edu
3260 Hillview Ave. (415) 493-6871 ext. 339
Palo Alto, CA 94304 fax: (415) 424-1807
--