Adding floats: a peculiarity, and a proposal



>  ALL user floating-point format input be stored as infinitely precise
> rational numbers until there is a first conversion to something
> that is used computationally, at which point it is converted to the
> appropriate type, precision, etc.

I am not sure exactly what you are proposing.  If 0.1b0 is stored as
1/10 but converted when it is "used computationally", then you will
get the same results as above, but with a large overhead:
sum(0.1b0,i,1,10) would have to convert 1/10 to binary form 10 times.
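To make the overhead concrete, here is a Python sketch (using `fractions.Fraction` as a stand-in for Maxima's exact rationals): keeping 0.1 as the exact rational 1/10 makes the repeated addition exact, while converting to a binary float at each "computational use" repeats the 1/10-to-binary rounding on every term.

```python
from fractions import Fraction

# Analogue of the proposal: store 0.1 as the exact rational 1/10 and
# only convert to binary floating point at the point of use.
exact = sum(Fraction(1, 10) for _ in range(10))   # ten exact additions
print(exact)      # 1 -- no rounding error accumulates

# Converting on each use means the 1/10 -> binary conversion (and its
# rounding) happens ten times over, and the rounded values are summed.
binary = sum(float(Fraction(1, 10)) for _ in range(10))
print(binary)     # not exactly 1.0 with IEEE-754 doubles
```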

And what is the "appropriate type"?  The value of fpprec at the time
of conversion?  That is the current semantics for converting exact
numbers to bfloats.

Perhaps the proposal is that bfloat constants be represented in
decimal floating point?  That certainly has advantages (and
disadvantages).  And I believe that bfloat already supports base 10
internally, at least in some cases.
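A quick illustration of the trade-off, using Python's `decimal` module as a rough analogue of a base-10 bfloat (the `prec` setting playing loosely the role of fpprec): decimal input like 0.1 is represented exactly, but arithmetic results still round to the working precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 16      # working precision, loosely like fpprec

# 0.1 is exactly representable in decimal floating point, so the sum
# of ten copies is exact -- the advantage for decimal-format input.
total = sum(Decimal("0.1") for _ in range(10))
print(total)                     # 1.0

# The disadvantage remains: division still rounds to 'prec' digits,
# and binary hardware floats are faster.
print(Decimal(1) / Decimal(3))   # 0.3333333333333333
```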

As for the specific sum examples, I wonder whether they would come
out differently if we fixed the known bugs in bfloat; there are
documented cases where bfloat does not round correctly.

By the way, I was curious to see what would happen to computation
times if I used exact (rational) arithmetic (sum(1/i,i,1,N)) vs.
bfloat (sum(1.0b0/i,i,1,N)).  I expected that with the growth in
denominator size, rational would quickly become much more expensive
than bfloat (default fpprec=16).  In fact, the crossover is at about
N=10000.
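The exact-rational side of that experiment can be sketched in Python with `fractions.Fraction` standing in for Maxima's rationals.  The running sum's denominator grows roughly like lcm(1..N), which is why one would expect rational arithmetic to eventually fall behind fixed-precision bfloats:

```python
from fractions import Fraction

def harmonic_exact(n):
    # Exact analogue of sum(1/i, i, 1, n) in rational arithmetic.
    return sum(Fraction(1, i) for i in range(1, n + 1))

h10 = harmonic_exact(10)
print(h10)          # 7381/2520
print(float(h10))   # the bfloat-style answer, rounded once at the end

# The denominator of the running sum grows quickly with n, which is
# the source of the eventual cost of exact arithmetic.
print(harmonic_exact(100).denominator)
```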

               -s