float to bfloat



My preference is actually for bfloat(0.1) to become the same as
bfloat(1/10).  However, I can live with the alternative.  In particular,
since the commercial Macsyma disagrees with me and agrees with you,
that is a vote on your side.  My recollection is hazy on this one (I wrote
the bigfloat package!), and I'm sure both ways were considered.
RJF


Stavros Macrakis wrote:

>(My email has been flaky today, so I may not have seen the whole thread
>and I may have seen it out of order.  I hope my comments make sense
>nonetheless.)
>
>Richard:
>
>>Do you want [ bfloat(0.1) ] come out as the bigfloat version of
>>1/10 ... 1.0b-1 or do you want it to come out as the bigfloat
>>version of .99999994039535522...
>>The first of these requires a lot less explanation to most people.
>>In either case a careful explanation should be available so the
>>behavior is not viewed as a bug.
>
>Ray:
>
>>Not sure what the best solution would be.  We can change 
>>fixfloat so that it doesn't rebind ratepsilon....
>
>For the case of 0.1, you could argue that Maxima's floating point should
>be *decimal*.  That way, 0.1 and 0.1b0 would really denote precisely
>1/10 (of course, that still doesn't explain non-decimal rationals).
>However, Maxima uses machine floats for its floating point and machine
>float is binary (as is bigfloat).  Yes, it is confusing to novice
>computer users that float(7/100)*100 = 0.07*100 does NOT equal
>precisely 7.0, but I'd think that by the time they're using Maxima,
>they'd understand this.
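[To make the binary-rounding point concrete, here is the same arithmetic in
Python, whose floats are also IEEE binary doubles -- a side illustration of
the float behavior under discussion, not Maxima code:]

```python
# 7/100 has no exact binary representation, so the double nearest 0.07
# sits slightly above 7/100, and multiplying it by 100 overshoots 7.0
# by one unit in the last place.
product = 0.07 * 100
print(product == 7.0)       # False
print(abs(product - 7.0))   # about 8.9e-16 (one ulp at 7.0)
```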
>
>So now, how do we explain to them that bfloat(0.07)*100 *does* equal
>precisely 7.0?  How did bfloat "correct" the value of 0.07 if 0.07
>doesn't really denote 7/100?  Why is it that bfloat(float(xxx)) is
>precisely equal to bfloat(xxx) when xxx is a simple rational number, but
>not when it is sqrt(2) or %pi or %e, which are arguably just as simple?
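[The "what does 0.07 really denote" question can be checked directly with
Python's fractions module, which recovers the exact rational value of a
binary double -- again an illustration of binary floats, not of bfloat's
internals:]

```python
from fractions import Fraction

# The exact rational value of the double written as 0.07 is a dyadic
# rational (denominator a power of two), not 7/100.
exact = Fraction(0.07)
print(exact == Fraction(7, 100))                      # False
print((exact.denominator & (exact.denominator - 1)) == 0)  # True
```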
>
>Why do we have:
>
>   bfloat(3.333333333333333) => 3.3333333333333330373B0
>
>but
>
>   bfloat(.3333333333333333) =>     3.3333333333333333333B-1?
>   bfloat(3.333333333333333/10) =>  3.3333333333333333333B-1?
>
>even though they have the same number of decimal digits of precision?
>
>And that's just the decimal fraction case.
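[One plausible contributor to the asymmetry, easy to check with Python
doubles -- this is a conjecture about the *inputs*, not a reading of the
bfloat code: the literal .3333333333333333 happens to parse to exactly the
double nearest 1/3, so a rationalizer working to within float epsilon can
land on 1/3 on the nose, whereas 3.333333333333333 parses to a different
double than 10/3 does.]

```python
# .3333333333333333 and 1/3 round to the same IEEE double...
print(0.3333333333333333 == 1 / 3)    # True
# ...but 3.333333333333333 and 10/3 do not.
print(3.333333333333333 == 10 / 3)    # False
```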
>
>How do we explain that numbers which are *exactly representable* in
>(binary) floating point get randomly munged in conversion to bfloat?
>Consider r:1155528851759535/2^53; bfloat(r) is not the same as
>bfloat(float(r)).  This is NOT an isolated example -- in fact, most
>floats between 0 and 1 get munged.
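[The exact-representability claim for that example is easy to verify
outside Maxima; in Python -- a check of the IEEE arithmetic only, not of
bfloat itself:]

```python
from fractions import Fraction

# The numerator fits in 53 bits, so this rational is exactly a double.
r = Fraction(1155528851759535, 2**53)
f = float(r)             # the conversion involves no rounding at all
print(Fraction(f) == r)  # True: the float *is* r, so an exact
                         # float->bfloat conversion ought to agree
```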
>
>What does the user do when he wants to use bfloats to investigate
>floating-point behavior?
>
>So I don't think that the exact conversion of the floating-point number
>"requires less explanation".  It does require *some* explanation, but it
>is cleaner and simpler and more useful than the current system of
>rational approximation.
>
>--------------
>
>One suggestion was to use ratepsilon to control this behavior.  That
>doesn't work in the current code, since ratepsilon is bound locally to
>float epsilon so that float(bfloat(float(xxx)))=float(xxx).  On the
>other hand, the global ratepsilon is normally some small multiple of
>float epsilon, to allow for rounding errors etc.  So you'd need a
>*separate* ratepsilon for the bfloat case (yuck).
>
>Also, ratepsilon < float epsilon doesn't actually work -- it reverts to
>float epsilon, so ratepsilon=0 does not give the exact result. (5.9.0
>gcl x86)
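[For readers following along, here is a toy epsilon-bounded rationalizer in
Python that shows the two regimes being discussed; the name, structure, and
search strategy are mine, not Maxima's -- the real ratsimp/ratepsilon code
differs:]

```python
from fractions import Fraction

def rationalize(x, eps):
    """Toy stand-in for ratepsilon-style rationalization: return a
    simple rational within relative error eps of the double x."""
    exact = Fraction(x)          # the double's exact binary value
    if eps == 0:
        return exact             # eps = 0 means: take it exactly
    d = 1
    while True:
        approx = exact.limit_denominator(d)
        if abs(approx - exact) <= eps * abs(exact):
            return approx
        d *= 10

print(rationalize(0.1, 2.0**-52))  # 1/10 -- the "nice" rationalization
print(rationalize(0.1, 0))         # 3602879701896397/36028797018963968
```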
>
>I contributed an improved rationalize function to GCL last year -- I
>don't know what its status is.  Sigh.  One of these days I'll have to
>take the plunge into building and running the latest and greatest
>releases with CVS.
>
>        -s