Dear Prof. Fateman,
you have convinced me. Personally, I will probably continue to assume that floating point numbers carry an implicit +-0.5 LSB error, but I understand that other people have other expectations, and I agree that a decent implementation should fulfill all of them. A faster, reduced-precision variant can be offered as an option, but full precision should be the default.
Best regards,
Michael
From: Richard Fateman [mailto:fateman at eecs.berkeley.edu]
Sent: Friday, May 11, 2012 6:18 PM
To: Soegtrop, Michael
Cc: maxima at math.utexas.edu
Subject: Re: [Maxima] cos(1.0d97) with GCL, Clozure, and Julia
On 5/11/2012 8:23 AM, Soegtrop, Michael wrote:
Dear Prof. Fateman,
my background is computational physics, and there large exact powers of two are, well, rare. I understand your arguments, but I don't see what application would justify the extra effort of supporting such a high-precision range reduction.
1. If a subroutine gives the correct answer except in rare cases, some people would say it is defective. If it can be fixed
so that it produces the correct result in all cases, although in rare cases it takes more time, then that is,
in my opinion, much better.
2. The fact that you or I don't know of an application where the rare cases occur does not mean it is OK to
(without warning) just give wrong answers.
3. The extra effort is (so far as I can tell) a matter of having the programmer do it right once, and using a few extra words
of memory. My understanding is that usually that code is not executed, so no one else is penalized in time.
Let's look at the case mentioned below of 2^120. Of course you can represent this number exactly in double precision, and you can calculate the sine to 53 bits. But an increment of 1 LSB is more than 10^19 pi. I wonder what applications exist that calculate the sine of such large numbers, jumping the angle in increments of more than 10^19 pi.
Mathematicians do many odd things.
And even more, I wonder whether such applications are really so important that they justify the extra effort, from which all applications would suffer.
How so? If you want another cosine routine that is faster and gives wrong answers, you can easily construct one that
does incorrect fast range reduction and then calls the existing routine. Or for that matter you could copy a Chebyshev
approximation or some other polynomial or rational computation. Such things are done for low-precision needs like
graphics.
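A small Python sketch, assuming mpmath is available as a convenient source of high-precision reference values, which checks the arithmetic in the 2^120 example above:

import math
from mpmath import mp, mpf, sin as mpsin    # mpmath is assumed to be installed

x = 2.0 ** 120
print(x == 2 ** 120)                  # True: this power of two is exact in binary64
print(math.ulp(x))                    # 2^68: the spacing of doubles at this magnitude (Python 3.9+)
print(math.ulp(x) > 1e19 * math.pi)   # True: one LSB step jumps the angle by more than 10^19 * pi

# sin(2^120), with the argument reduced mod 2*pi at sufficient working precision
mp.prec = 200                         # plenty of guard bits for the reduction
print(mpsin(mpf(2) ** 120))           # reference value, good well beyond double precision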
Don't get me wrong, I fully support a range reduction with high, fixed precision, but in order to handle every representable power of two, a variable-precision range reduction would be required, and that doesn't make sense in my opinion.
I think it is easy to implement the program so that it avoids variable-precision range reduction whenever it is unnecessary.
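For concreteness, a minimal sketch of that dispatch in Python; the names and the cutoff are illustrative, and mpmath merely stands in for whatever careful range reduction a real implementation would use:

import math
from mpmath import mp, mpf, sin as mpsin    # stand-in for a careful high-precision reduction

SMALL = 2.0 ** 20    # illustrative cutoff below which the plain libm result is trusted

def sin_accurate(x: float) -> float:
    """Accurate for every double; the expensive path runs only for huge arguments."""
    if abs(x) < SMALL:
        return math.sin(x)        # the common case: one ordinary libm call, no extra cost
    # rare case: redo the reduction mod 2*pi with enough working precision
    _, e = math.frexp(x)          # binary exponent of x
    mp.prec = 53 + e + 32         # enough bits to absorb the cancellation in the reduction
    return float(mpsin(mpf(x)))   # (sets the precision globally; fine for a sketch)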
Regarding the trap: this gives us a way to know which results are exact and which are not, but it doesn't help resolve the computational-effort problem unless there were a sin_exact function, which does support arbitrary-precision range reduction, and another sin function which doesn't.
It is much easier to implement a sin_fast_and_sloppy function if you have a sin_exact function around.
The other way, not so easy.
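For illustration, something like the following in Python, where sin_exact stands for any correctly range-reduced sine (such as the sketch above); the names and the math.fmod reduction are just for the sketch:

import math

def sin_fast_and_sloppy(x: float, sin_exact=math.sin) -> float:
    # cheap reduction first: fast, but it throws away the low bits of the angle
    # once |x| is large, so the result can be completely wrong there
    return sin_exact(math.fmod(x, 2.0 * math.pi))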
Another problem with traps is that they are not very practical with SIMD or parallel processing.
The thought in the IEEE design was to allow for NaNs to serve in such cases. That is,
if one were truly requiring exact values (as, for example, simulating exact integer arithmetic
in floats, where inexact means overflow), then a parallel setup could benefit from inserting
a NaN, without interrupting the program flow.
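A rough sketch of that idea in Python/NumPy, under the stated assumption that integers are being simulated in doubles (exact only up to 2^53): instead of trapping on a possibly inexact sum, the affected elements are replaced by NaN and the vectorized computation simply keeps going:

import numpy as np

EXACT_LIMIT = 2.0 ** 53    # beyond this, not every integer fits in a double exactly

def exact_int_add(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    s = a + b                            # elementwise, SIMD-friendly
    lost = np.abs(s) >= EXACT_LIMIT      # conservative: these sums may have been rounded
    return np.where(lost, np.nan, s)     # poison those elements instead of trapping

print(exact_int_add(np.array([1.0, 2.0 ** 53]), np.array([2.0, 1.0])))
# roughly: [ 3.  nan]  -- the second sum would have been rounded, so it is marked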
The idea that it is OK to get the wrong answer because you got the answer fast in parallel
seems to me to be fairly common in some circles. I find this uncomfortable. I particularly
find it uncomfortable in the context of symbolic computing systems (when they are
extended to do numerics).
..snip..