Dear Ray,
yes, I vaguely remember (I did an FP implementation once, quite a few years ago). I need to look at some code to refresh my memory and to see how much effort it really is.
Best regards,
Michael
From: Raymond Toy [mailto:toy.raymond at gmail.com]
Sent: Friday, May 11, 2012 4:59 PM
To: Soegtrop, Michael
Cc: Barton Willis; maxima at math.utexas.edu
Subject: Re: cos(1.0d97) with GCL, Clozure, and Julia
On Fri, May 11, 2012 at 1:23 AM, Soegtrop, Michael <michael.soegtrop at intel.com> wrote:
> Dear Ray,
> I slept on your arguments, but I still think they are not right. The problem I see is that floating point numbers can have very large exponents; for IEEE double precision the value can span more than 1000 binary digits. So in the worst case you need to do the range reduction with a precision of about 1000 bits for a floating point format with a 53-bit mantissa. The effort for the range reduction, essentially a modulus operation, is quadratic (well, there are asymptotically faster methods, but I think with SIMD and 1000 bits the quadratic method is still best), so in the worst case we have ~400 times the effort, if we assume that the numbers are precise.
Hope it didn't keep you awake too long!
But the effort is not really that large. What we really want to compute is

    x * (1/(2*pi)) = N + r

We don't care about N (cos has period 2*pi, so the integer part drops out); we really just want r. We can break 1/(2*pi) into three parts. The first part contributes only to N, the second part contributes to r, and the third part also contributes to r, but only past the first 53 bits. So most of the 1000+ bits of 1/(2*pi) aren't needed.
(In practice, I think 2/pi is used instead of 1/(2*pi).)
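To make that concrete, here is a rough Python sketch of this kind of reduction. It is an illustration only, not the Sun/glibc code: it assumes the mpmath library is available just to get at the bits of 1/(2*pi), and the final cosine is then taken naively in double precision.

import math
from mpmath import mp  # assumption: mpmath is available for the bits of 1/(2*pi)

def reduced_cos(x):
    """Illustrative Payne-Hanek-style reduction for cos(x), x a positive
    double.  Only a ~133-bit window of 1/(2*pi) is ever used: the bits
    above the window could only change N, and the bits below it only
    affect r beyond the guard bits."""
    m, e = math.frexp(x)             # x = m * 2^e with 0.5 <= m < 1
    sig = int(m * 2**53)             # 53-bit integer significand
    e -= 53                          # now x = sig * 2^e exactly

    nbits = 53 + 80                  # bits of r we keep, plus guard bits
    mp.prec = max(e, 0) + nbits + 64
    inv2pi = 1 / (2 * mp.pi)

    # Window of 1/(2*pi) holding the bits of weight 2^-(e+1) .. 2^-(e+nbits).
    window = int(inv2pi * mp.mpf(2) ** (e + nbits)) % (1 << nbits)

    # Fractional part of x/(2*pi); the integer part N is discarded by the
    # modulus, so it never has to be formed at full width.
    frac = (sig * window) % (1 << nbits)
    r = frac / (1 << nbits)          # r in [0, 1)
    return math.cos(2.0 * math.pi * r)

Even for something like 1.0d97, whose exponent is near the top of the double range, only that ~133-bit window ever gets multiplied by the 53-bit significand; the several hundred leading bits of 1/(2*pi) are skipped because they can only change N.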
> The question is whether this effort is worthwhile. I think not, because it is very unlikely that the result of some computation has a higher precision than the mantissa.
Sun obviously thought it was worthwhile many decades ago on much slower machines. You can find the (hairy) code that implements this online, and also in glibc.
The numbers don't even have to be near 2^53 to start showing inaccuracies. Numbers below 2^16 show differences already.
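For example (a quick, hypothetical check in Python, using the reduced_cos sketch from above as the reference):

import math

x = 70000.0                        # only about 2^16.1
# Naive reduction: fmod itself is exact, but 2*math.pi differs from the
# true 2*pi by roughly 2.4e-16, and that error is scaled by the ~11000
# whole periods that get thrown away.
r_naive = math.fmod(x, 2.0 * math.pi)
print(math.cos(r_naive))           # typically off in the last few digits
print(reduced_cos(x))              # compared with the careful reduction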
> The use case of literal numbers is a bit artificial and not really what floating point numbers are made for.
Yes, the examples are artificial, but the routine is just given a number, not a number and some indication of its accuracy. The routine should return a result assuming the number is exactly that and no other. It's up to the user to figure out whether the accuracy of the result has any meaning given the accuracy of the input.
Ray