from the 4,195,835/3,145,727=1.333739068902037589 dept.
Software developer Bruce Dawson has pointed out some issues with the way the Intel FSIN instruction is described in the "Intel® 64 and IA-32 Architectures Software Developer's Manual," noting that the result of FSIN can be very inaccurate in some cases compared to the exact mathematical value of the sine function.
Dawson says, "I was shocked when I discovered this. Both the FSIN instruction and Intel's documentation are hugely inaccurate, and the inaccurate documentation has led to poor decisions being made. ... Intel has known for years that these instructions are not as accurate as promised. They are now making updates to their documentation. Updating the instruction is not a realistic option."
Intel processors have had a problem with math in the past (1994), too.
(Score: 1) by deimios on Saturday October 11 2014, @07:04AM
Who needs precision when you got speed? Benchmarks don't check accuracy.
(Score: 2) by maxwell demon on Saturday October 11 2014, @07:14AM
There are also programs testing floating point accuracy/correctness:
http://www.math.utah.edu/~beebe/software/ieee/ [utah.edu]
The Tao of math: The numbers you can count are not the real numbers.
(Score: 4, Informative) by maxwell demon on Saturday October 11 2014, @07:36AM
Copy/pasting the code from the article into an editor doesn't give compilable code (due to replacements of ASCII quotes and dashes with typographic ones).
Here's a copy/pasteable version of that code:
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by Lagg on Saturday October 11 2014, @11:27AM
Yeesh, I was expecting some kind of complex operation, but this is just your basic sine. In "real life" this mostly won't hurt anything, but I feel pretty bad for the workarounds that number-crunching places must have to use. At the very least they can disable glibc's use of FSIN.
http://lagg.me [lagg.me] 🗿
(Score: 2) by frojack on Monday October 13 2014, @08:36PM
But isn't that common anyway (disabling the FSIN instruction)?
After all, this inaccuracy has been known for years.
No, you are mistaken. I've always had this sig.
(Score: 2) by Lagg on Monday October 13 2014, @09:43PM
I would think so, at least in release builds for the sake of portability, but since the code uses the stdlib sin function, apparently that isn't the case.
http://lagg.me [lagg.me] 🗿
(Score: 2) by maxwell demon on Saturday October 11 2014, @07:59AM
Just to add another data point: On AMD (/proc/cpuinfo model name: AMD Athlon(tm) 64 X2 Dual Core Processor 6000+) I also get exactly the Intel values for fsin (but the correct result for <math.h> sin, glibc 2.13/gcc 4.6.3).
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Saturday October 11 2014, @12:26PM
Maybe Intel and AMD are right, and everyone else is wrong.
(Score: 2, Insightful) by Anonymous Coward on Saturday October 11 2014, @03:10PM
Or AMD did a 1-to-1 compatibility thing. From what I understand, how many digits of precision you start off with for pi determines what you get for this calculation.
People are 'shocked' that the CPU gets it wrong. We do expect it to be 'close' or even 100% right. The thing is, they have a huge body of errata where things go weird; things like doing particular instructions followed by a pop, and you end up with the wrong result. That the x87 chip in 1987 took some shortcuts is not surprising (that's how old this is). Intel and AMD both have a problem: how do you fix it without breaking a bunch of code that relied on the bug?
My college professor Dr. Klarner had it right: "I HAAAAAAAAATES floating point numbers." They are a bitch to get just right in computers.
(Score: 0) by Anonymous Coward on Saturday October 11 2014, @09:28AM
Not trying to troll, but man...that chip never let me down.
(Score: 0) by Anonymous Coward on Saturday October 11 2014, @12:30PM
SPARC was designed by geniuses, and the manliest of men and the womanliest of women. SPARC is what the pros choose.
(Score: 0) by Anonymous Coward on Saturday October 11 2014, @11:03AM
Intel denied there was a problem for months. Until after all their new chips were out.
THEN they went back and offered to replace the old defective chip with a now obsolete fixed chip.
Since that episode i've gone with AMD.
And been quite happy. Not had to worry about my cpu being 'right' at all.
And saved a ton of money too.
And i think i'll stick with AMD for the foreseeable future as well.
(Score: 2) by kaszz on Saturday October 11 2014, @11:12AM
CPUs from AMD tend to have relatively little on-chip cache (L2/L3), so for some memory-intensive applications AMD is a slow choice.
(Score: 0) by Anonymous Coward on Saturday October 11 2014, @09:24PM
Good plan. We using your money or my money?
If it's my money i'll take the amd thanks.
(Score: 2) by hamsterdan on Saturday October 11 2014, @02:47PM
Things would have been different if internet access had been mainstream back then. Intel would have reacted much faster...
(Score: 3, Interesting) by meisterister on Saturday October 11 2014, @10:35PM
I'd expect this to be in the newer Intel microarchitectures, like Sandy Bridge and Haswell, but how far back does it go? Someone mentioned that their Athlon 64 had the same problem. Is it still in Bulldozer/Piledriver?
I'm genuinely interested.
(May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
(Score: 1) by hendrikboom on Sunday October 12 2014, @01:28AM
The mathematically tractable way to discuss floating-point precision is to treat each operation as producing an exact answer for real arguments that are close to the given arguments, and then bound the error in the arguments, as it were. I wonder how FDIV would fare under this kind of analysis.
-- hendrik