Out of range floating point number determination



Raymond Toy <toy.raymond at gmail.com> wrote:

   On Mon, Aug 13, 2012 at 10:50 AM, Steve Haflich <smh at franz.com> wrote:
   
       Whatever else, everyone don't forget that the behavior of float
       underflow needs to be checked and perhaps controlled on each platform in
       addition to float overflow.
   
   Yes, we already try to take care of this so that underflows silently
   underflow to zero.  At least there's special code for clisp and abcl
   for this.  I assume other lisps default to this mode already.

Yes, many implementations do this, but it isn't conformant with the ANS.
ANSI CL defines two error condition classes, floating-point-overflow and
floating-point-underflow, and ANS 12.1.4.3 requires that this error be
signaled in safe code (where safe code is code compiled with safety 3):

 12.1.4.3 Rule of Float Underflow and Overflow

 An error of type floating-point-overflow or floating-point-underflow
 should be signaled if a floating-point computation causes exponent
 overflow or underflow, respectively.

The term "should be signaled" is a term of art in the ANS.  Section
1.4.2 defines:

 - An error should be signaled

 This means that an error is signaled in safe code, and an error might be
 signaled in unsafe code. Conforming code may rely on the fact that the
 error is signaled in safe code. Every implementation is required to
 detect the error at least in safe code. When the error is not signaled,
 the "consequences are undefined" (see below). For example, "+ should
 signal an error of type type-error if any argument is not of type
 number."
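In a conforming implementation, then, safe code may rely on the
condition being signaled and handle it with handler-case.  A minimal
sketch (the function name is mine; this only works in an implementation
that actually signals the error, which, as discussed below, many do
not):

```lisp
;; Hypothetical sketch: catch the underflow that ANS 12.1.4.3 says a
;; conforming implementation must signal in safe code, and substitute
;; an explicit zero.
(defun product-or-zero (x y)
  (handler-case
      (locally (declare (optimize (safety 3)))
        (* x y))
    (floating-point-underflow ()
      ;; Only reached if the implementation signals the condition.
      0.0)))
```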

Here is typical nonconforming behavior from one popular implementation:

* (funcall (compile nil '(lambda (x y)
                           (declare (optimize (safety 3)))
                           (* x y)))
           1e-23 1e-23)
0.0
* (zerop *)
T

An implementation could of course define that its "undefined
consequences" are the return of an appropriate zero, but the situation
in practice is actually even a little more convoluted.

What happens in the silicon when a floating point overflow or underflow
occurs is platform dependent, and often controlled by some mode bits in
the fpu.  Now, a nice Lisp implementation can set those bits (if
allowed, and also any other bits controlling rounding and denormalized
behavior) but difficulties happen if there is other code loaded into the
image.  Sloppy programmers write sloppy compilers and other sloppy
routines that muck with these bits and do not restore them properly.
(It can be expensive.)  I expect things were a lot worse 20 years ago,
but I suspect lisp implementations still sometimes find altered behavior
in edge cases when some foreign code has been loaded into the image and
called.
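One defense is to snapshot the fpu modes before calling out and restore
them afterward.  This is necessarily implementation specific; here is a
sketch for SBCL, assuming its sb-int:get-floating-point-modes /
sb-int:set-floating-point-modes interface (the macro name is mine):

```lisp
;; SBCL-specific, nonportable sketch: save the fpu control modes
;; (traps, rounding mode, etc.) around BODY, e.g. a call into foreign
;; code that may clobber them, and restore them on the way out.
#+sbcl
(defmacro with-preserved-fp-modes (&body body)
  `(let ((modes (sb-int:get-floating-point-modes)))
     (unwind-protect
          (progn ,@body)
       ;; GET- returns a keyword plist acceptable to SET-.
       (apply #'sb-int:set-floating-point-modes modes))))
```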

A cautious application can check (e.g. for multiplication underflow)
simply by testing the result for zero and, if neither of the original
arguments was zero, signaling floating-point-underflow.  This costs a
few extra instructions and time, of course.
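That zero-test check can be sketched portably (the function name is
mine; the :operation and :operands initargs come from the
arithmetic-error class that floating-point-underflow inherits from):

```lisp
;; Portable sketch of the check described above: signal
;; FLOATING-POINT-UNDERFLOW ourselves when a product of two nonzero
;; floats comes out zero, regardless of what the implementation does.
(defun checked-* (x y)
  (let ((result (* x y)))
    (when (and (zerop result)
               (not (zerop x))
               (not (zerop y)))
      (error 'floating-point-underflow
             :operation '*
             :operands (list x y)))
    result))
```

So (checked-* 1e-23 1e-23) signals floating-point-underflow even in an
implementation that silently flushes the product to zero.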