R Fateman wrote:
> Raymond Toy wrote:
>> This result is why I don't like computing the input this way. It is
>> quite a bit off from the original input number. Perhaps we can get more
>> accurate results if we temporarily increase fpprec, but some care needs
>> to be taken.
>
> I'm pretty sure that one can easily compute the number of extra bits
> needed to do this all in floating point.
> If the target precision (total) is N, and we need to compute 10^k to
> precision N, we would use about log[2](k) multiplications.
> So we would get epsilon*k maximum roundoff error, where epsilon is one
> unit in last place.
> Carry that many extra bits: log[2](k).
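Fateman's bound can be checked with a small simulation. The sketch below (Python; `Fraction` stands in for exact rationals, a hand-rolled rounding helper stands in for one bfloat rounding step, and the function names are mine, not Maxima's) computes 10^k by binary powering, rounding every intermediate product to `prec` bits. That takes about log[2](k) squarings, so the worst-case accumulated relative error is on the order of k*epsilon, and ceil(log[2](k)) guard bits are enough to absorb it:

```python
from fractions import Fraction

def round_to_bits(x: Fraction, prec: int) -> Fraction:
    """Round a positive rational to `prec` significant bits (round half up),
    standing in for one bfloat rounding step."""
    assert x > 0
    e = x.numerator.bit_length() - x.denominator.bit_length() - prec
    while x >= Fraction(2) ** (e + prec):       # fix the off-by-one estimate
        e += 1
    while x < Fraction(2) ** (e + prec - 1):
        e -= 1
    n = int(x / Fraction(2) ** e + Fraction(1, 2))
    return n * Fraction(2) ** e

def pow10(k: int, prec: int) -> Fraction:
    """10^k by binary powering, rounding every intermediate product to
    `prec` bits -- about log2(k) squarings, hence roughly k*eps
    worst-case relative error at working precision."""
    result, base = Fraction(1), Fraction(10)    # 10 is exact at prec >= 4
    while k:
        if k & 1:
            result = round_to_bits(result * base, prec)
        k >>= 1
        if k:
            base = round_to_bits(base * base, prec)
    return result

exact = Fraction(10) ** 567
# relative error at 53 working bits stays within the k*eps bound
relerr = abs(pow10(567, 53) - exact) / exact
# with ceil(log2(567)) = 10 guard bits, rounding back to 53 bits lands
# within one ulp of the correctly rounded value
guarded = round_to_bits(pow10(567, 53 + 10), 53)
```

Rounding at every step is the pessimistic case; a real implementation that keeps the guard bits throughout and rounds once at the end does at least as well.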
Curiously, I was already doing similar experiments. For 1.234b567, I
compute bfloat(1234/1000) and 10b0^567 with extra precision. This gives
results that are within 1 bit of the original result. Not bad. I should
probably do the computation as bfloat(1234) and 10b0^(567-3) instead,
and make the number of extra bits depend on the exponent.
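The bfloat(1234) and 10b0^(567-3) scheme — read the mantissa digits as an exact integer and fold the decimal point into the exponent — can be sketched in Python (the function name is mine, and the 53-bit `float` stands in for a Maxima bfloat; `Fraction` keeps the value exact so the only rounding is the final conversion):

```python
from fractions import Fraction

def read_bfloat(s: str) -> float:
    """Parse a '1.234b567'-style literal exactly, then round once.
    Sketch only: Python's float plays the role of the target bfloat."""
    mant, exp = s.lower().split("b")
    if "." in mant:
        int_part, frac_part = mant.split(".")
        digits = int(int_part + frac_part)    # 1.234b567 -> 1234 * 10^564
        e = int(exp) - len(frac_part)
    else:
        digits, e = int(mant), int(exp)
    v = Fraction(digits) * Fraction(10) ** e  # exact rational value
    return float(v)                           # single, correctly rounded step
```

Because the rational value is exact, the result matches Python's own correctly rounded decimal reader, e.g. read_bfloat("1.234b200") == float("1.234e200").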
I think the only issue with your idea is how 10b0^k is computed. It
might be doing exp/log, so I'm not sure how accurate that would be.
Maybe computing 10^k by repeated squaring/multiplication would be
better.
More experiments needed.
Ray