Re: Inline caching (was Re: Branch prediction) email@example.com (2000-06-01)
Date: 1 Jun 2000 18:01:22 -0400
Organization: Deja.com - Before you buy.
Inline caching works like this: after fetching an address from the hash
table that maps method selectors (this is Smalltalk or Self) to
JIT-compiled code, you patch the jump so that, instead of going to the
hash-table lookup function, it jumps directly to the JIT-compiled code.
The prologue of that code checks whether the message receiver's class
is the same as the class for which the code was compiled; if it is,
you have saved an expensive lookup; if it is not, you simply jump to
the lookup routine, which patches the jump again.
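A minimal C sketch of the mechanism described above. The object model (a class pointer per object, one implicit selector per class) and all the names are hypothetical; a struct field stands in for the patchable jump instruction that a real JIT would rewrite in the emitted machine code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical object model: every object carries a class pointer,
   and each class maps the single, implicit selector to one method. */
typedef struct Class Class;
typedef struct Object { const Class *klass; } Object;
typedef int (*Method)(Object *self);
struct Class { Method method; };

/* The call site's patchable jump: the cached target plus the class it
   was compiled for.  In a real JIT these live inside the machine code;
   this struct stands in for the patchable instructions. */
typedef struct CallSite {
    Method target;        /* where the call currently "jumps" */
    const Class *cached;  /* class the cached target expects */
} CallSite;

/* Slow path: stands in for the selector hash-table lookup. */
static Method lookup_method(const Class *klass) { return klass->method; }

/* One message send through an inline-caching call site. */
static int send(CallSite *site, Object *receiver) {
    /* The prologue check: receiver's class vs. the cached class. */
    if (receiver->klass != site->cached) {
        /* Miss: do the expensive lookup, then re-patch the site. */
        site->target = lookup_method(receiver->klass);
        site->cached  = receiver->klass;
    }
    return site->target(receiver);  /* hit: straight to the code */
}

/* Two toy classes answering the same selector differently. */
static int point_area(Object *self)  { (void)self; return 12; }
static int circle_area(Object *self) { (void)self; return 28; }
static const Class PointClass  = { point_area };
static const Class CircleClass = { circle_area };
```

After the first send the site is patched, so repeated sends to receivers of the same class skip the lookup entirely; a receiver of a different class falls back to the lookup routine and re-patches the site.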
This technique typically has a very high hit rate, around 90%. In
addition, a hit is extremely advantageous (+300% performance is no
exaggeration), while a miss costs little, usually only 3 or 4 clock
cycles.
Usually this is done by patching the jump. In GNU Smalltalk, however,
I cannot do that, because GNU lightning (the library I wrote for
portable JIT compilation -- much like vcode) does not support patching
already-emitted jumps -- of course it supports backpatching forward
references... So I use an indirect jump, which the first time goes to
the method lookup routine (mispredicted); the second time it is also
mispredicted (it goes to the method's native code but is predicted to
go to the lookup routine); from the third time on it is predicted
correctly.
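The pointer-slot variant can be sketched as below. This is an assumption about the layout (GNU Smalltalk's actual code generator differs in detail, and real native code would repeat the class check in its prologue): the call site jumps indirectly through a word in memory, and the lookup routine stores the method's code address into that word instead of rewriting any emitted instruction:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical object model, as before. */
typedef struct Class Class;
typedef struct Object { const Class *klass; } Object;
typedef struct Site Site;
typedef int (*Code)(Site *site, Object *receiver);
struct Site  { Code slot; };    /* the word the indirect jump reads */
struct Class { Code native; };  /* JIT-compiled method body */

/* The slot initially points here, so the first send is "mispredicted"
   into the lookup routine. */
static int lookup_routine(Site *site, Object *receiver) {
    Code native = receiver->klass->native;  /* the expensive lookup */
    site->slot = native;    /* store into the slot, no code patching */
    return native(site, receiver);
}

/* A message send: just an indirect jump through the slot. */
static int send(Site *site, Object *receiver) {
    return site->slot(site, receiver);
}

/* One toy class with one "native" method. */
static int answer(Site *site, Object *r) { (void)site; (void)r; return 42; }
static const Class TheClass = { answer };
```

The first send goes through `lookup_routine`, which rewrites only the data word; every later send through the same site jumps directly to the native code, matching the prediction behavior described above.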
> Modulo conflict and capacity misses.
Of course, but capacity and conflict misses are a problem when
patching the jump, too.
[It's a swell trick. Unix dynamically linked libraries do more or less
the same thing, patching in the jump address when it's looked up on
the first reference. -John]