EQNTOTT Vectors of 16 bit Numbers [Was: Re: Yikes!!! New 200Mhz Intel firstname.lastname@example.org (1995-11-09)
Re: EQNTOTT Vectors of 16 bit Numbers [Was: Re: Yikes!!! New 200Mhz In email@example.com (1995-11-14)
Re: EQNTOTT Vectors of 16 bit Numbers firstname.lastname@example.org (1995-11-17)
Re: EQNTOTT Vectors of 16 bit Numbers email@example.com (1995-11-19)
Re: EQNTOTT Vectors of 16 bit Numbers firstname.lastname@example.org (1995-11-20)
Re: EQNTOTT Vectors of 16 bit Numbers email@example.com (1995-11-20)
Re: EQNTOTT Vectors of 16 bit Numbers firstname.lastname@example.org (1995-11-21)
From: email@example.com (Henry Baker)
Keywords: benchmarks, optimize, APL
References: <firstname.lastname@example.org> <email@example.com> 95-11-079 95-11-132
Date: Sun, 19 Nov 1995 19:23:55 GMT
firstname.lastname@example.org (David Keppel) wrote:
> >["Intel's special SPEC optimization."]
> This reminds me of APL benchmarking from twenty years ago. The
> interpreters recognized particular idioms and implemented them as
> special cases. The APL vendors (notably, IBM) were accused of
> "benchmark optimizations", though I think the original motivation
> was speeding up real use.
Actually, IBM's APL implementation was extremely well done, because
it optimized the code that got executed most often. The thing
that embarrassed the Fortran'ers of the world was the fact that a
hand-optimized loop that dominates a computation can beat the pants
off an optimizing compiler every day of the week. For
not-terribly-large arrays, the APL interpreter itself accounts for very
little of the overall time. The only time APL bogs down is when you
don't take advantage of the built-in array operations.
There was a problem with memory usage on non-compiled APL implementations,
but that is a different story entirely. Also, the APL approach wouldn't
work so well on modern machines, because modern machines are much more
sensitive to memory usage & locality.