Re: Fat references

Jon Harrop <jon@ffconsultancy.com>
Sun, 03 Jan 2010 00:42:26 +0000

          From comp.compilers

From: Jon Harrop <jon@ffconsultancy.com>
Newsgroups: comp.compilers
Date: Sun, 03 Jan 2010 00:42:26 +0000
Organization: Flying Frog Consultancy Ltd.
References: 09-12-045 09-12-055 10-01-005 10-01-007
Keywords: storage, GC
Posted-Date: 03 Jan 2010 14:42:14 EST

BGB / cr88192 wrote:
> "Jon Harrop" <jon@ffconsultancy.com> wrote in message
>> Kaz Kylheku wrote:
>>> So now you have references that can't fit into a machine register.
>>
>> If you don't have 128- or 256-bit registers on your 32- or 64-bit
>> machine, respectively.
>
> one needs AVX to have 256 bit registers at present (on x86 / x86-64 and
> friends).
>
> given AVX is not (yet) available in general "run of the mill" CPUs,
> this seems to be a bit aggressive. similarly, 256-bit pointers are
> hardly cheap...


On the contrary, HLVM's benchmark results have proven that fat references
are cheap.
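
For concreteness, here is roughly what a quad-word reference looks like
when written out as a C struct. The field names and exact layout below
are illustrative assumptions, not HLVM's actual definition:

  #include <stdint.h>

  /* A quad-word "fat" reference: the pointer travels together with its
     metadata instead of that metadata living in an object header. On a
     64-bit machine this is 256 bits; on a 32-bit machine, 128 bits. */
  struct fat_ref {
      void      *data;    /* pointer to the unboxed payload            */
      void      *rtti;    /* pointer to run-time type information      */
      uintptr_t  length;  /* element count for arrays, 1 otherwise     */
      void      *meta;    /* spare slot, e.g. per-object GC state      */
  };

  /* Passed and returned by value, so it lives in registers or on the
     stack rather than behind a heap-allocated header. */
  static inline struct fat_ref make_ref(void *data, void *rtti,
                                        uintptr_t length) {
      struct fat_ref r = { data, rtti, length, 0 };
      return r;
  }

Whether such a struct actually ends up in wide registers is up to the
calling convention; the point is that it is copied by value rather than
chased through the heap.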


>> I see. That is not what I'm doing but, besides atomicity, what is the
>> motivation for trying to squeeze a reference into a pointer?
>
> the main thing I am thinking here is:
> 1, saving memory:
> given that, typically, pointers are one of the most common value types
> (especially in dynamic languages), large pointers translate to HUGE
> increases in memory use.


No. Moving the header data from the heap-allocated target to the local
source of the reference only increases memory consumption by 96 bits
for every *duplicate* reference stored in the heap, i.e. when two heap
objects A and B both reference C then the metadata is duplicated in A
and B. I don't believe duplicate references in the heap are common at
all.


Moreover, HLVM is specifically designed to reduce the number of
references in the heap, most notably by unboxing multiple values
(tuples) and all primitive types. In contrast, languages like OCaml
box multiple values (e.g. pairs) and box many basic types such as
floats.
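
The difference is easy to see in C terms. The following sketch
(illustrative only, not HLVM or OCaml code) contrasts an unboxed array
of pairs with a boxed representation:

  #include <stdlib.h>

  /* Unboxed pair: both fields live inline, so an array of n pairs is
     one contiguous allocation containing no references at all. */
  struct pair { double x, y; };

  struct pair *make_unboxed_array(size_t n) {
      return calloc(n, sizeof(struct pair));     /* 16 bytes per element */
  }

  /* Boxed pairs: each element is a pointer to a separately allocated
     cell, roughly how OCaml represents an array of (float * float). */
  struct pair **make_boxed_array(size_t n) {
      struct pair **a = calloc(n, sizeof *a);    /* n pointers...        */
      for (size_t i = 0; a != NULL && i < n; i++)
          a[i] = calloc(1, sizeof(struct pair)); /* ...plus n heap cells */
      return a;
  }

The unboxed form has no references for a fat-reference scheme to bloat
in the first place.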


> 2. performance:
> increasing memory use greatly reduces cache efficiency,


Yes.


> and as a result, code will run much slower on average.


No. HLVM has certainly proven that wrong. Indeed, disproving such
accepted wisdom is an important part of why I think these results are
of wider interest.


For example, I just benchmarked a simple hash table with arrays for
buckets. Note that the spine contains the *only* reference in the
heap to each bucket: there are no duplicate references so memory
consumption is no worse than with 1-word references. OCaml is probably
the world's fastest functional language implementation for this kind
of task but HLVM is 2.2x faster than OCaml on this benchmark (!).
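
The layout in question, sketched in C (the names are illustrative; the
real benchmark is written against HLVM):

  #include <stdlib.h>

  struct entry  { int key; int value; };

  /* One bucket is a flat array of entries rather than a linked list. */
  struct bucket { struct entry *items; size_t count, capacity; };

  /* The spine owns its buckets outright: bucket i's item array is
     referenced from exactly one place, spine[i].items, so making that
     reference fat duplicates no metadata anywhere in the heap. */
  struct table {
      struct bucket *spine;
      size_t         nbuckets;
  };

  struct table make_table(size_t nbuckets) {
      struct table t = { calloc(nbuckets, sizeof(struct bucket)),
                         nbuckets };
      return t;
  }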


> as I sometimes say, having lots of a resource is not a good reason
> to waste it. just because a modern computer has > 1GB of RAM (my
> desktop has 6GB and my laptop 4GB), does not mean it is a good idea
> to increase memory use by a huge factor without good reason.


I don't believe HLVM's design will bloat the heap by a huge
factor. Indeed, can you think of a practical solution to a problem
that entails large numbers of duplicate references stored in the heap?


> it is typically better to use more of a resource to have more stuff,
> than to squander it and have the same, or less, overall.


I don't believe so.


> much like how now, a person can easily have multiple TB of hard-drive
> space, yet data compression has by no means become meaningless.


Sure, but there are clear trade-offs, and compressing data
representations in RAM to fit in a smaller fraction of a cache line is
clearly a case of diminishing returns.


> to make little savings at the bottom can often translate to big savings
> overall...


That is certainly wrong. Language implementations like OCaml and
Haskell go to ridiculous lengths to shrink data representations
(e.g. placing a tag bit in each int) but HLVM and F# wipe the floor
with them in benchmarks.
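
To be concrete about the int case: OCaml stores an int n as 2n+1 so
that the low bit distinguishes immediates from pointers, and every
arithmetic operation pays for that encoding. A sketch of the cost in C
(the representation is OCaml's; the helper names are mine):

  #include <stdint.h>

  typedef intptr_t value;               /* OCaml-style machine word    */

  #define TAG_INT(n)   (((value)(n) << 1) | 1)   /* n      -> 2n + 1   */
  #define UNTAG_INT(v) ((v) >> 1)                /* 2n + 1 -> n        */

  /* Addition needs a correction for the two tag bits; a plain C int
     addition is a single instruction. */
  static inline value tagged_add(value a, value b) {
      return a + b - 1;
  }

  /* Multiplication must untag one operand first. */
  static inline value tagged_mul(value a, value b) {
      return (a - 1) * UNTAG_INT(b) + 1;
  }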


> hence my idea is maybe to consider a scheme where, most of the time,
> pointers remain at their natural size, and can occasionally route through
> another structure for added functionality (this structure need not be
> necessarily heap-based). alternatively, most info can be placed in the
> object (as is often done).
>
> in the structure based strategy, also possible could be to have an "object
> indirection table", which would basically be used for any item addressed
> via a fat pointer. in this case, the narrow pointer encodes an index or
> pointer into this table, and the table in turn refers to the object (with
> any added info).
>
> this would allow greatly reducing overall costs to multiple references to
> the same object.


But does that improve memory consumption and performance for the uncommon
case of duplicate references at the cost of performance for all other
references?
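
For reference, the indirection-table idea sketched in C (names and
sizes are illustrative): the duplicated references shrink to a 32-bit
handle, but every single dereference now pays an extra dependent load
through the table.

  #include <stdint.h>

  struct descriptor {
      void *data;                   /* the object itself              */
      void *rtti;                   /* its run-time type information  */
  };

  /* Global indirection table; a "narrow" reference is just an index. */
  static struct descriptor obj_table[1 << 20];

  typedef uint32_t handle;

  static inline void *deref(handle h) {
      return obj_table[h].data;     /* extra load on every access     */
  }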


> the main weak side of tagged references is that they are a hassle to work
> with from C, but at the same time, the same could be said of fat pointers
> (without explicit compiler support). actually, it may well be worse,
> because at least a tagged reference will usually fit nicely into an
> unsigned integer or similar...


You just need primitive operations to extract the underlying pointer
from a fat reference (the AddressOf operator in HLVM) and to construct
a fat reference from a pointer and a type (not yet implemented in
HLVM).
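
In C terms, and reusing the illustrative fat_ref layout from the
earlier sketch, those two primitives amount to nothing more than:

  #include <stdint.h>

  struct fat_ref {                  /* same illustrative layout as above */
      void      *data;
      void      *rtti;
      uintptr_t  length;
      void      *meta;
  };

  /* Analogue of HLVM's AddressOf: strip the metadata and hand the raw
     pointer to C. */
  static inline void *address_of(struct fat_ref r) {
      return r.data;
  }

  /* The reverse direction, rebuilding a reference from a pointer and a
     type descriptor (the operation described above as not yet
     implemented). */
  static inline struct fat_ref ref_of_ptr(void *p, void *rtti,
                                          uintptr_t length) {
      struct fat_ref r = { p, rtti, length, 0 };
      return r;
  }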


> actually, I don't think it is a coincidence that there are many high level
> VM's which use tagged references, but relatively few which use fat
> pointers.
>
> a simple example of a tagged reference scheme:
> low 3 bits:
> 0/4: fixnum (2-bit 0 tag)
> 1/5: flonum (2 bit 1 tag)
> 2: object reference (28 bit index into object heap/table)
> 3: cons reference (28 bit index into cons-cell heap)
> 6: large object reference (28 bit reference into large objects table)
> 7: expanded reference, 5 additional tag bits
> 0: reserved
> 1: special constants (NULL, UNDEF, EOL, TRUE, FALSE, ...)
> 2: character
> 3: remote object handle
> ...
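
Spelled out as a C decoder, that scheme dispatches on the low bits like
this (the tag assignments are taken from the scheme quoted above; the
exact field packing is my assumption):

  #include <stdint.h>

  typedef uint32_t tagval;

  enum kind { FIXNUM, FLONUM, OBJECT, CONS, BIGOBJ, EXPANDED };

  static enum kind classify(tagval v) {
      switch (v & 7) {                  /* low 3 bits: the major tag     */
      case 0: case 4: return FIXNUM;    /* low 2 bits == 00, 30-bit int  */
      case 1: case 5: return FLONUM;    /* low 2 bits == 01              */
      case 2:         return OBJECT;    /* index into object heap/table  */
      case 3:         return CONS;      /* index into cons-cell heap     */
      case 6:         return BIGOBJ;    /* index into large-object table */
      default:        return EXPANDED;  /* tag 7: 5 more tag bits follow */
      }
  }

  static int32_t fixnum_value(tagval v) { return (int32_t)v >> 2; }

  /* Shift chosen so 28 index bits remain, as in the scheme above. */
  static uint32_t heap_index(tagval v)  { return v >> 4; }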


That is exactly the kind of low-level bit twiddling that old-school
language implementations like OCaml, Haskell and even the JVM do. I
don't think it is such a great idea. For example, it makes OCaml's int
arithmetic 3x slower than C's, and boxing makes Java's hash tables 32x
slower than .NET's.


> even on a 64-bit system, a person may still use 32-bit tagged values,
> meaning LESS space use than if they used pointers (although, there is a
> finite limit on practical heap size, for example, 4GB worth of cons cells,
> ...).


You mean 2^28 cons cells (the scheme above uses a 28-bit index). That
will become a serious limitation quite quickly. OCaml suffers from a
similar legacy problem due to similar bit twiddling: strings and arrays
are limited to only 16MB.


> the reason for giving so much space to fixnums and flonums is that these
> are usually very common types, and it is desirable to minimize the need
> for heap-allocated integers or floats.


If you're interested in performance and memory consumption then it is
absurd to ever heap allocate ints and floats.


--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u


