I'm traveling and not able to spend time on this, but I think it is an
example of the
Table-Maker's Dilemma.
How many digits do you need to compute in order to get the nth digit
right?
If the true value falls nearly halfway between two representable numbers,
it is a puzzle.
Anyway, bfloat does not guarantee "all" digits right, only that the
result is within about half a unit in the last place.
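A toy decimal example of that halfway puzzle, in plain Common Lisp
(ordinary rounding, nothing Maxima-specific):

    (round 1.49999)        ; => 1   direct rounding to an integer
    (round 149999 100000)  ; => 1   the same with exact rationals
    (round 15 10)          ; => 2   after first rounding to 1.5

You have to look five digits past the decimal point before you can
tell that pre-rounding to one decimal place gives the wrong integer.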
But consistency is another issue, I think.
RJF
Raymond Toy wrote:
> Dieter Kaiser wrote:
>
>>> From the testsuite, the hash table has 75 entries, ranging from 56 bits
>>> to 1681 bits.
>>>
>>>
>> A lot of the entries are probably there because I have tried to check
>> the numerical algorithms of the beta and gamma functions over a greater
>> range of fpprec values.
>>
>>
> Yes, and that's good. I figure the testsuite probably does more than
> what a typical user would.
>
>>> There is another approach that might work, but I suspect there are cases
>>> where it would also give incorrect results. We can continue to use the
>>> current algorithm, but if fpprec is too close to the max saved
>>> precision, we recompute %e to, say, twice fpprec and round that.
>>>
>>>
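[A minimal sketch of that recompute-and-round idea in plain Common
Lisp. The names, the series summation, the cache layout, and the
"too close" threshold are all made up for illustration; this is not
Maxima's actual fpe code.]

    ;; Return an integer approximating %e * 2^BITS by summing the
    ;; series for e with exact rationals, with GUARD extra bits of
    ;; headroom.  Even this is not guaranteed correctly rounded in
    ;; every case -- that is the Table-Maker's Dilemma again.
    (defun compute-e (bits &optional (guard 32))
      (let ((sum 1) (term 1))
        (loop for k from 1
              while (> term (expt 1/2 (+ bits guard)))
              do (setf term (/ term k)
                       sum (+ sum term)))
        (round (* sum (expt 2 bits)))))

    ;; Cache %e as (bits . fraction).  When a request comes too close
    ;; to the saved precision, recompute at twice the requested
    ;; precision so the final rounding starts from ample extra bits.
    (defvar *e-cache* nil)

    (defun cached-e (fpprec &optional (too-close 8))
      (when (or (null *e-cache*)
                (> (+ fpprec too-close) (car *e-cache*)))
        (setf *e-cache* (cons (* 2 fpprec) (compute-e (* 2 fpprec)))))
      ;; Round the wide cached fraction down to fpprec bits.
      (round (cdr *e-cache*) (expt 2 (- (car *e-cache*) fpprec))))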
>> I think the most important thing is to have bigfloat arithmetic that is
>> correct and consistent. When I implemented the bigfloat algorithms for
>> the special functions, I had no idea that we could get inconsistencies
>> because of the bigfloat implementation.
>>
>>
> Agreed. Computations should be repeatable. I was also surprised by
> the inconsistency, and was shocked to learn it was caused by the caching
> in fpe. I've never seen an example of this in Maxima.
>
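[Continuing the hypothetical sketch above: this kind of repeatability
problem can be hunted for mechanically. Using COMPUTE-E from the
earlier sketch and the 56..1681-bit range mentioned above, collect
every target precision where %e rounded out of a wide cache disagrees
with %e computed directly at that precision; an empty list means the
two ways agree over that range. An illustration only, not Maxima's
test code.]

    (let ((wide (compute-e 1681)))    ; widest precision in the cache
      (loop for p from 56 to 200
            when (/= (round wide (expt 2 (- 1681 p)))  ; via the cache
                     (compute-e p))                    ; computed fresh
              collect p))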
> I'm not happy with the test failure for rtest_gamma 644. I'm willing
> to attribute it to a change in the value of %e, but I don't like it.
> I'm sure there are other cases where we will get different results than
> before, but the new results should be repeatable.
>
> Ray
>