Re: Compiler bugs



From: David Chase <>
Newsgroups: comp.compilers
Date: 14 Jan 2002 02:00:14 -0500
Organization: The World : : Since 1989
References: 02-01-015 02-01-029
Keywords: errors, arithmetic
Posted-Date: 14 Jan 2002 02:00:14 EST

Christian Bau wrote:

> This changed [Java] specification has two consequences:
> 1. You can produce slightly faster code for example on a PowerPC. The
> implementation of sine/cosine is now allowed for example to use fused
> multiply/add instructions and get results that are not bit for bit
> identical, but as good as the specification.
> 2. On a Pentium, you can implement sine/cosine by doing a quick check
> for hard values and using FSIN/FCOS in cases where you can guarantee
> they give results within 1 ulp and do it the hard way in other cases.
> For example, FSIN has an error < 1ulp for small values unless they are
> close to pi, 2pi, 3pi etc.

I got a bee in my bonnet to play with Hans Boehm's constructive reals
package again, and fdlibm seems to be generally quite good,
bit-for-bit equal to the (correctly rounded) truth 96.5% of the time,
and almost within 1/2 LSB for most of the remaining 3.5%. This is on
a sample of 100,000 inputs between zero and two PI (so I am not really
beating hard on range reduction here, just the basic calculation). I
may have found some bugs in the CR -> double conversion, so I computed
this information in a less-than-obvious but correct-up-to-my-testing way.
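The comparison can be sketched like this (my reconstruction, not the
original harness): sample inputs in [0, 2*pi) and compare Java's
Math.sin bit-for-bit against StrictMath.sin, which is Java's binding to
fdlibm. Note the real experiment compared against a correctly rounded
constructive-reals reference, not against fdlibm itself:

```java
import java.util.Random;

// Sketch of a bit-for-bit comparison harness (my reconstruction).
// StrictMath.sin is Java's fdlibm binding; Math.sin is free to use
// platform instructions, so the two can legally differ in the last bit.
public class SinCompare {
    static int countMismatches(int n, long seed) {
        Random r = new Random(seed); // fixed seed for repeatability
        int mismatches = 0;
        for (int i = 0; i < n; i++) {
            double x = r.nextDouble() * 2.0 * Math.PI;
            // Compare the raw IEEE-754 bit patterns, not the values,
            // so a 1-ULP disagreement counts as a mismatch.
            if (Double.doubleToLongBits(Math.sin(x))
                    != Double.doubleToLongBits(StrictMath.sin(x))) {
                mismatches++;
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        int n = 100000;
        System.out.println(countMismatches(n, 42) + " of " + n
                + " differ bit-for-bit");
    }
}
```

On a VM whose Math.sin simply delegates to fdlibm, the count is zero;
on one using hardware instructions it generally is not.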

Here are the 37 worst results. Bit 52 is the least significant bit; the
best possible in double machine arithmetic is to differ in bit "53",
or higher. 60% of the less-than-best 3.5% differ in bit 52.9 or
better (not quite as good as <= 1/2 LSB error, but very close).

I=32637 X=3.9560696773363744 JS=-0.7273668062151653 DIFF=1 DIFF_BIT=52.29763579599085
I=79237 X=0.8148480919992233 JS=0.7276214013069926 DIFF=1 DIFF_BIT=52.30849802419767
I=41201 X=3.938965928529295 JS=-0.7155235612667885 DIFF=1 DIFF_BIT=52.31722259367971
I=86171 X=0.8237984449744515 JS=0.7337319286646813 DIFF=1 DIFF_BIT=52.33222461089536
I=25522 X=0.8016700451815177 JS=0.7185186216704451 DIFF=1 DIFF_BIT=52.346651976862574
I=40032 X=0.790774837103612 JS=0.7108984245905208 DIFF=1 DIFF_BIT=52.354904112307835
I=88047 X=2.318247443609258 JS=0.7334239077478022 DIFF=1 DIFF_BIT=52.35761529740677
I=48461 X=3.9838462845100873 JS=-0.7461454425946215 DIFF=1 DIFF_BIT=52.36189945797103
I=80216 X=3.9404332895148366 JS=-0.7165478722427796 DIFF=1 DIFF_BIT=52.37790754050627
I=21340 X=5.420612970523958 JS=-0.7595183425704337 DIFF=1 DIFF_BIT=52.38357992879765
I=51429 X=2.3146280635599377 JS=0.7358794501034085 DIFF=1 DIFF_BIT=52.387635477244615
I=28527 X=3.9381042375310753 JS=-0.7149213281902749 DIFF=1 DIFF_BIT=52.39299595998062
I=99339 X=0.8543763415163073 JS=0.7541615131752235 DIFF=1 DIFF_BIT=52.40938088927967
I=83598 X=0.8409599756689713 JS=0.7452835248310656 DIFF=1 DIFF_BIT=52.42299997290726
I=9785 X=3.930691839054137 JS=-0.7097189501981258 DIFF=1 DIFF_BIT=52.42603455983201
I=60120 X=0.7916658770902261 JS=0.7115248059947332 DIFF=1 DIFF_BIT=52.42653997531993
I=80495 X=0.815625780608817 JS=0.7281546592231973 DIFF=1 DIFF_BIT=52.42969192155701
I=43908 X=2.320224520564146 JS=0.732078514525325 DIFF=1 DIFF_BIT=52.430604340892884
I=97513 X=5.468327535698082 JS=-0.7276280411938323 DIFF=1 DIFF_BIT=52.43774426744243
I=93736 X=3.9548824790724066 JS=-0.7265515803344016 DIFF=1 DIFF_BIT=52.44629033285493
I=54096 X=2.301084536322577 JS=0.7449821732161468 DIFF=1 DIFF_BIT=52.447297317063565
I=1848 X=0.7913461257277792 JS=0.7113000928199256 DIFF=1 DIFF_BIT=52.44907333491967
I=42548 X=0.8522270551788566 JS=0.75274835971827 DIFF=1 DIFF_BIT=52.45100844990661
I=22794 X=5.468943780806544 JS=-0.7272051762376308 DIFF=1 DIFF_BIT=52.451184971236664
I=19000 X=0.8264347894946129 JS=0.735520612240735 DIFF=1 DIFF_BIT=52.458295907806296
I=19255 X=3.9794986002917336 JS=-0.7432437856090846 DIFF=1 DIFF_BIT=52.45837813367417
I=89970 X=0.79663855768769 JS=0.7150101030928067 DIFF=1 DIFF_BIT=52.46692161493054
I=70829 X=0.8089059572588516 JS=0.7235324003047141 DIFF=1 DIFF_BIT=52.470433150554314
I=55802 X=5.479606418745674 JS=-0.7198449270584104 DIFF=1 DIFF_BIT=52.4735675637244
I=42936 X=0.8364386363534035 JS=0.7422613248871652 DIFF=1 DIFF_BIT=52.47838940852189
I=62358 X=5.490352823681565 JS=-0.712344050426527 DIFF=1 DIFF_BIT=52.48386269629281
I=32953 X=3.930496394761469 JS=-0.7095812490857488 DIFF=1 DIFF_BIT=52.48982844195676
I=56144 X=0.8127165859054826 JS=0.7261575812313941 DIFF=1 DIFF_BIT=52.490692373293825
I=91382 X=2.315178755388615 JS=0.7355064581799131 DIFF=1 DIFF_BIT=52.49166538514291
I=27691 X=0.8249464382729078 JS=0.7345114365099299 DIFF=1 DIFF_BIT=52.49559438883785
I=73184 X=0.8046438214323901 JS=0.7205837259140035 DIFF=1 DIFF_BIT=52.50088374159816

The "bit" here is the negative base-2 logarithm of the difference
(true - approx) divided by the most-significant bit of the true
result. Thus, in three-bit FP, 1.00 through 1.11 all have MSB 1, and
approx 1.11 differs from exact 1.101 by .001, which is bit 3, even
though the rounded 3-bit result is 1.10 (ties round to even).
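As a sketch (my code, following the definition above; the "true" value
is assumed to come from a correctly rounded reference such as a
constructive-reals package), the DIFF_BIT metric can be computed like
this:

```java
public class DiffBit {
    // trueVal is assumed to come from a correctly rounded reference;
    // here it is simply a parameter.
    static double diffBit(double trueVal, double approx) {
        double err = Math.abs(trueVal - approx);
        if (err == 0.0) return Double.POSITIVE_INFINITY; // bit-for-bit equal
        // Most-significant bit of the true result, as a power of two:
        // 2^floor(log2(|trueVal|)), obtained exactly from the exponent field.
        double msb = Math.scalb(1.0, Math.getExponent(trueVal));
        // Negative base-2 logarithm of the relative difference.
        return -Math.log(err / msb) / Math.log(2.0);
    }

    public static void main(String[] args) {
        // Three-bit example from the text: exact 1.101b (= 1.625) vs.
        // approx 1.11b (= 1.75) comes out at (approximately) bit 3.
        System.out.println(diffBit(1.625, 1.75));
    }
}
```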

I ran the same test with Math.sin using Sun's VM on a Pentium (it uses
the FSIN instruction, I think) and found that for a small number of
inputs the error was slightly larger than 1 LSB.

I=82185 X=6.032320417883365 JS=-0.2482418685993428 DIFF=1 DIFF_BIT=51.63134295959887
I=8354 X=6.031236098734431 JS=-0.2492921003387706 DIFF=1 DIFF_BIT=51.67212720246923
I=21613 X=6.032379343948488 JS=-0.24818478660210988 DIFF=1 DIFF_BIT=51.71139032554909
I=22773 X=6.03298164448041 JS=-0.24760128544125815 DIFF=1 DIFF_BIT=51.73187835236121
I=84175 X=3.0163224184509514 JS=0.1249428555040138 DIFF=1 DIFF_BIT=51.734650687258416
I=20052 X=6.031155212374319 JS=-0.24937043216658547 DIFF=1 DIFF_BIT=51.770569441448814
I=36440 X=3.016403087261823 JS=0.12486281841238069 DIFF=1 DIFF_BIT=51.83446115738876
I=85594 X=6.031365638237976 JS=-0.24916664852118514 DIFF=1 DIFF_BIT=51.83924714975067
I=36781 X=6.032333143227012 JS=-0.24822954156325583 DIFF=1 DIFF_BIT=51.87453448030687
I=94584 X=6.031159271703738 JS=-0.24936650107674851 DIFF=1 DIFF_BIT=51.90629983665104
I=73276 X=6.032758856601906 JS=-0.24781713000750571 DIFF=1 DIFF_BIT=51.92200128768166
I=36744 X=6.033084093535974 JS=-0.24750202514366998 DIFF=1 DIFF_BIT=51.92813739380912
I=14643 X=6.0328104205827096 JS=-0.24776717414139054 DIFF=1 DIFF_BIT=51.94828138838917
I=61814 X=6.031925947542007 JS=-0.24862397193856747 DIFF=1 DIFF_BIT=51.97394136201433

This indicates a bug of some sort; I'm not quite sure what the root
cause is, but the error is not within the specified bounds for
Math.sin.
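An error of "slightly larger than 1 LSB" can be checked directly: for
finite doubles of the same sign, the raw IEEE-754 bit patterns are
monotonically ordered, so subtracting them counts the representable
values between two results. A small sketch (my code, not from the
post):

```java
public class UlpDiff {
    // For finite doubles of the same sign, adjacent IEEE-754 bit patterns
    // encode adjacent representable values, so the difference of the raw
    // bits is the number of representable doubles between a and b
    // (their distance in ULPs). Not valid across a sign change.
    static long ulpDiff(double a, double b) {
        return Math.abs(Double.doubleToLongBits(a)
                - Double.doubleToLongBits(b));
    }

    public static void main(String[] args) {
        double x = -0.2482418685993428; // a value from the table above
        // Adjacent representable values are exactly one step apart.
        System.out.println(ulpDiff(x, Math.nextUp(x))); // prints 1
    }
}
```

A DIFF_BIT of 51.63, as in the worst row above, corresponds to a
result roughly 2 ULPs away from the correctly rounded one.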

Using the instructions, 13.5% of the results are not within 1/2 LSB of
the correct result, and only 20% of that 13.5% differs in bit 52.9 or
better. So the fdlibm implementation is definitely producing better
results.

I've included the source at

I'm a little curious about one thing -- does anyone, besides a
specifications-and-exact-correctness weenie like myself, actually
care about this level of detail in machine floating point?

David Chase
