I jumped on Rich's reply a bit too fast: I said it was wrong (which it
is not, if interpreted correctly) and then didn't explain myself very
clearly. Sorry about that.
He said:
> > If you set it to less than one-unit-in-the-last-place
> > (ULP) of the single or double float representation, then you
> > get the version that Stavros wants.
But in my haste, I didn't interpret this charitably -- that is, I didn't
patch over its minor bugs and ambiguities.
If I interpret that to mean that you should set ratepsilon to 1/2 ulp of
the *target* format (not machine singles or doubles) -- that is,
2^-(binary_fpprec+1) -- then yes, this works in theory. (Binary_fpprec
is actually called ?fpprec.)
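To put a number on it, here is a quick sketch (Python for illustration,
not Maxima code; the function name is mine) of the 2^-(binary_fpprec+1)
tolerance as an exact rational:

```python
from fractions import Fraction

def half_ulp_ratepsilon(fpprec_bits: int) -> Fraction:
    # The 2^-(fpprec_bits + 1) threshold from the text, as an exact
    # rational: a ratepsilon at or below this targets 1/2 ulp of a
    # binary format with fpprec_bits bits of precision.
    return Fraction(1, 2 ** (fpprec_bits + 1))
```

For an IEEE double's 53-bit mantissa this gives 2^-54; for a bfloat of
?fpprec bits it gives 2^-(?fpprec+1), which drops below the machine-float
threshold as soon as ?fpprec exceeds the machine precision.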
But the current rationalization code calculates in machine floats, so it
doesn't work for ratepsilon < 1/2 ulp of a machine float. (Actually, I
haven't done a detailed error analysis -- for all I know, it may only be
good to within 1.5 ulp or so, given rounding errors.) To make it work
down to 1/2 ulp of arbitrary-precision bfloats, you'd have to calculate
either in bignum rationals or in bfloats. And to do that, you'd have to
extract the exact mantissa anyway, so that you start from an exact
binary fraction xxxxx/2^53. But having extracted that exact mantissa,
why bother running the Hardy-Wright algorithm (a dozen-plus bignum
divisions) to derive a rational within 2^-(binary_fpprec+1) of the exact
binary fraction, only to divide out and get a binary fraction again?
Wouldn't it be easier just to pad the binary fraction?
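To make the comparison concrete, here is a sketch (Python, standing in
for the bignum-rational arithmetic; this is not Maxima's actual code,
and the function names are mine). A machine double already *is* an
exact binary fraction, which `as_integer_ratio()` recovers directly;
the continued-fraction routine below plays the role of the
Hardy-Wright-style algorithm, burning a bignum division per partial
quotient to land within eps of that same value:

```python
from fractions import Fraction

def exact_binary_fraction(x: float) -> Fraction:
    # A double is already an exact binary fraction m / 2^k;
    # as_integer_ratio() recovers it with no rounding at all.
    return Fraction(*x.as_integer_ratio())

def cf_rationalize(x: float, eps: Fraction) -> Fraction:
    # Continued-fraction approximation of x to within eps, done in
    # exact bignum rationals. Assumes x > 0. Each loop iteration
    # costs a bignum division (the 1/frac step), which is the expense
    # the text is objecting to.
    target = exact_binary_fraction(x)
    a0 = int(target)
    p0, q0 = 1, 0        # convergent recurrence: h_{-1}/k_{-1} = 1/0
    p1, q1 = a0, 1       # h_0/k_0 = a0/1
    frac = target - a0
    while frac != 0 and abs(Fraction(p1, q1) - target) > eps:
        a = 1 / frac     # exact rational reciprocal (bignum division)
        ai = int(a)
        frac = a - ai
        p0, p1 = p1, ai * p1 + p0
        q0, q1 = q1, ai * q1 + q0
    return Fraction(p1, q1)
```

With a loose eps this recovers a small-denominator rational (e.g. 1/10
for the double 0.1); with eps at or below 1/2 ulp of the target format
it just grinds its way back to (something within eps of) the exact
binary fraction that `exact_binary_fraction` returned in one step --
which is the point of the question above.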
-s