Related articles
AMPC 1.4.3 released (C to Java Class Files compiler suite) napi@axiomsol.com (napi) (2006-05-11)
Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) englere.geo@yahoo.com (Eric) (2006-05-15)
Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) gah@ugcs.caltech.edu (glen herrmannsfeldt) (2006-05-16)
Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) englere.geo@yahoo.com (Eric) (2006-05-18)
Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) DrDiettrich@compuserve.de (Hans-Peter Diettrich) (2006-05-22)
Re: AMPC 1.4.3 released (C to Java Class Files compiler suite) gah@ugcs.caltech.edu (glen herrmannsfeldt) (2006-05-22)
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: 22 May 2006 02:11:17 -0400
Organization: Compilers Central
References: 06-05-038 06-05-053 06-05-057 06-05-065
Keywords: C, Java
Posted-Date: 22 May 2006 02:11:16 EDT
Eric wrote:
> The problem comes in the form of terrible efficiency. As I understand
> it, array indexing in the JVM wouldn't be able to take advantage of the
> fact that you're accessing a sequential range of bytes. Something like
> this is very efficient in native code, but would slow down by maybe
> 10-100 times if you have to use array indexing and you have to re-index
> into the array each time through the loop.
> This C code is very efficient:
> while (*p++)
>     do_something(p);
I would say that the difference could be very system-dependent. Also,
the C version assumes no bounds checking on the array; the JVM and Java
always do bounds checking.
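To make that concrete, here is a rough sketch (mine, just for
illustration) of what the JVM conceptually does on every element
access. The check is built into the load bytecode rather than written
by the programmer, and it is exactly what the raw C pointer walk omits:

    // Rough sketch only: the bounds check the JVM performs implicitly
    // for each p[i], written out by hand for a hypothetical byte[] p.
    static byte load(byte[] p, int i) {
        if (i < 0 || i >= p.length)
            throw new ArrayIndexOutOfBoundsException(i);
        return p[i];   // in real bytecode the check is inside baload
    }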
> Maybe a Java translation might be something like this:
> for (int i = 0; i < p.length; i++)
>     do_something(p[i]);
> These look similar, but the C-produced native code can use an index
> register to scan memory efficiently. The Java-produced byte code would
> likely re-index into the array for each iteration through the loop.
Well, more important is what the JIT can do. There has been discussion
that a JIT can recognize that the loop bound is the length of the
array, or, if it is some other constant, do the compare outside the
loop. Many processors now should be able to overlap the loop compare
with the array access. Also, branch prediction may do better on the
loop.
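The loop shape matters here. A minimal sketch (the byte[] p and
doSomething names are placeholders of mine) of the form a JIT can most
easily recognize, with the bound written directly as p.length so the
per-access check can be hoisted or dropped:

    // Sketch only: with the bound written as p.length, a JIT can prove
    // that i is always in range and eliminate the per-iteration check.
    static void scan(byte[] p) {
        for (int i = 0; i < p.length; i++) {
            doSomething(p[i]);
        }
    }

    static void doSomething(byte b) {
        // placeholder body
    }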
I might also believe it more for a simple assignment, but if the body
includes a call to do_something() in either language, there is probably
enough call overhead to hide the difference.
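If anyone wants to measure it, here is a rough timing harness (the
class name, array size, and doSomething body are made up, and nanoTime
loops like this are only a crude indicator):

    // Crude timing sketch: an indexed scan with plain accumulation in
    // the body versus the same scan with a method call in the body.
    public class LoopTiming {
        static long sink;   // accumulate so the JIT cannot drop the work

        static void doSomething(byte b) { sink += b; }

        public static void main(String[] args) {
            byte[] p = new byte[1 << 24];
            long t0 = System.nanoTime();
            for (int i = 0; i < p.length; i++)
                sink += p[i];            // plain indexed access
            long t1 = System.nanoTime();
            for (int i = 0; i < p.length; i++)
                doSomething(p[i]);       // indexed access plus a call
            long t2 = System.nanoTime();
            System.out.println("plain: " + (t1 - t0) / 1e6 + " ms, "
                    + "with call: " + (t2 - t1) / 1e6 + " ms, sink=" + sink);
        }
    }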
> Array indexing is pretty slow because you don't get the advantage of
> knowing that you're accessing a sequential range of bytes. Maybe a JIT
> compiler could do something to make this more efficient, but it seems
> like this would perform poorly in an interpreted JVM.
For fairly small arrays, the difference is probably lost in the
overhead, especially if allocation is involved. My feeling has always
been that OO programming tends to be slow because of excessive array
allocation and deallocation, independent of the language.
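As a concrete (made-up) example of that allocation point, the
difference is between getting a fresh array on every call and reusing
one:

    // Sketch: per-call allocation versus reuse of a scratch buffer.
    // The first form pays for allocation and later garbage collection
    // on every call; the second does no allocation in the steady state.
    static final byte[] scratch = new byte[4096];

    static byte[] freshBuffer() {
        return new byte[4096];   // new object every call
    }

    static byte[] reusedBuffer() {
        return scratch;          // same object every call
    }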
> However, if the JVM understood the concept of dereferencing a pointer
> to a memory location this could be much faster. This is done in the
> .NET CLR and they were able to do it while still keeping type safety
> and verifiable code. It's actually simple to define this type of
> operation, but it looks to me that Sun went out of their way to avoid
> doing this. Their fear of pointers was taken to an extreme, even though
> it could have been done in a safe manner.
I have used large arrays with Java and usually wasn't too worried about
how slow they were. That was with a JIT, though.
-- glen