|AMPC 1.4.3 released (C to Java Class Files compiler suite) firstname.lastname@example.org (napi) (2006-05-11)|
|Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) email@example.com (Eric) (2006-05-15)|
|Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) firstname.lastname@example.org (glen herrmannsfeldt) (2006-05-16)|
|Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) email@example.com (Eric) (2006-05-18)|
|Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) DrDiettrich@compuserve.de (Hans-Peter Diettrich) (2006-05-22)|
|Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) firstname.lastname@example.org (glen herrmannsfeldt) (2006-05-22)|
|From:||Hans-Peter Diettrich <DrDiettrich@compuserve.de>|
|Date:||22 May 2006 02:10:04 -0400|
|References:||06-05-038 06-05-053 06-05-057 06-05-065|
> The problem comes in the form of terrible efficiency. As I understand
> it, array indexing in the JVM wouldn't be able to take advantage of the
> fact that you're accessing a sequential range of bytes. Something like
> this is very efficient in native code, but would slow down by maybe
> 10-100 times if you have to use array indexing and you have to re-index
> into the array each time thru the loop.
Sorry, I get the impression that you don't understand address
calculations at all :-(
How else could indexing be applied to an array, other than by adding an
offset to the base address of the array? If the array were not a
contiguous memory area, it would be equally impossible to traverse it
using pointer arithmetic.
> This C code is very efficient:
... except that it fails to process the first element :-(
> Maybe a Java translation might be something like this:
> for (int i=0; i < p.length(); i++)
> These look similar, but the C-produced native code can use an index
> register to scan memory efficiently.
This code does not prevent a (JIT...) compiler from performing
optimizations that can outperform the pointer code in the first example,
which cannot be optimized any further. The only runtime-critical item is
p.length(), as long as it is not guaranteed that this value will not
change during execution of the loop.
IMO it's not justified to assume that nothing can be faster than pointer
operations. This assumption may hold for a single pointer, but not
necessarily for multiple pointers and/or other variables, which the
compiler cannot optimize as a whole - in contrast to e.g. indexing
multiple arrays in the same loop. Other languages have true counted
loops, where the bounds are evaluated only once, so that the number of
iterations is known before entering the loop. I'm not sure about the
Java conventions here...
> However, if the JVM understood the concept of dereferencing a pointer
> to a memory location this could be much faster. This is done in the
> .NET CLR and they were able to do it while still keeping type safety
> and verifiable code.
You understand the difference between pointer dereferencing and pointer
arithmetic?
> It's actually simple to define this type of
> operation, but it looks to me that Sun went out of their way to avoid
> doing this. Their fear of pointers was taken to an extreme, even though
> it could have been done in a safe manner.
It's actually much simpler to optimize pointerless code, *because* it
leaves more room for optimizations. IMO nowadays pointers are good only
for people who *think* that they can optimize code better and more
safely than a compiler can. The mere chance of errors, as in your first
example, should discourage everybody from using pointers at all.