> On Tue, May 11, 2010 at 09:59, Robert Dodier <robert.dodier at gmail.com> wrote:
> ...There is a different interpretation of the float function, which
> I think I would prefer, namely float(foo(x)) should return the
> floating point number closest to the numerical value of foo(x).
> ...
> The latter is obviously more work
>
> That's an understatement! I suppose if we had arbitrary-precision
> interval arithmetic (which we don't) we could calculate foo(x) at
> ever increasing precisions until the interval was small enough
> (either 1e-16 relative error or 2e-323 absolute error near zero).
> But how often are such heroic measures really useful to the user?
>
> -s
(%i1) load(hypergeometric)$
(%i4) nfloat(sin(2^2048));
Unable to evaluate to requested number of digits
(%i6) nfloat(sin(2^2048)), max_fpprec : 3000;
(%o6) 6.718229754839927b-1
Compute sin(2^2048) to 50 digits (2^2048 is about 3.2 x 10^616, so
reducing the argument modulo 2 %pi alone consumes over 600 digits of
working precision, which is why max_fpprec needs to be so large):
(%i7) nfloat(sin(2^2048),[], 50), max_fpprec : 3000;
(%o7) 6.7182297548399265774602786169483936257135898134642b-1
For poorly conditioned hypergeometric sums, the hypergeometric code
uses a running error to bound the accumulated rounding error. When
that bound is too large, the sum is recomputed using a larger value of
fpprec. For some analytic continuations of the 2F1 functions, I
needed more than just a running error in the hypergeometric summation
routines; hence the function nfloat.
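In outline, the retry loop looks something like the following sketch
(a minimal sketch, not the actual nfloat source; sum_with_error is a
hypothetical helper standing for a summation routine that returns the
partial sum together with its running error bound at the current
fpprec):

nfloat_sketch(e, digits) := block([fpprec : digits + 10, result, err],
  /* sum_with_error is hypothetical: [sum, running error bound] */
  [result, err] : sum_with_error(e),
  /* retry at double the precision until the bound meets the
     requested relative accuracy */
  while err >= abs(result) * 10^(-digits) do (
    if 2 * fpprec > max_fpprec then
      error("Unable to evaluate to requested number of digits"),
    fpprec : 2 * fpprec,
    [result, err] : sum_with_error(e)),
  result)$

Here max_fpprec plays the same role as in the session above: it caps
how far the precision may grow before giving up.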
Algorithmically, interval arithmetic is easy to implement, but
standard Common Lisp has no portable way to set the floating point
rounding mode, and there are no functions for setting the rounding
mode of Maxima big floats either. Inserting a running error into
numerical code is generally easy, but unlike true interval arithmetic,
a running error drops terms that are O(machine epsilon^2).
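For instance, a running error for the Taylor series of exp could be
carried like this (a minimal sketch, not code from the hypergeometric
package; the name exp_with_error and the choice eps = 10^(-fpprec)
are illustrative):

exp_with_error(x, n) := block([s : 1b0, term : 1b0, err : 0b0,
                               eps : 10b0^(-fpprec)],
  for k : 1 thru n do (
    term : term * x / k,
    s : s + term,
    /* first-order rounding contribution of this step; the O(eps^2)
       cross terms are dropped, which is exactly what true interval
       arithmetic would not do */
    err : err + (abs(term) + abs(s)) * eps),
  [s, err])$

The returned err is a first-order estimate, not a rigorous enclosure:
it bounds the dominant rounding contributions but silently ignores
the second-order ones.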
See also http://www.unk.edu/uploadedFiles/facstaff/profiles/willisb/hg.pdf
--Barton