On 5/12/10 2:16 PM, Barton Willis wrote:
> maxima-bounces at math.utexas.edu wrote on 05/12/2010 12:44:06 PM:
>
>
>
>> On Tue, May 11, 2010 at 09:59, Robert Dodier <robert.dodier at gmail.com> wrote:
>
>> ...There is a different interpretation of the float function, which
>> I think I would prefer, namely float(foo(x)) should return the
>> floating point number closest to the numerical value of foo(x).
>> ...
>> The latter is obviously more work
>>
>> That's an understatement! I suppose if we had arbitrary-precision
>> interval arithmetic (which we don't) we could calculate foo(x) at
>> ever increasing precisions until the interval was small enough
>> (either 1e-16 relative error or 2e-323 absolute error near zero).
>> But how often are such heroic measures really useful to the user?
>>
>>
> -s
>
>
> (%i1) load(hypergeometric)$
>
>
>
[snip]
> Compute sin(2^2048) to 50 digits:
>
> (%i7) nfloat(sin(2^2048),[], 50), max_fpprec : 3000;
> (%o7) 6.7182297548399265774602786169483936257135898134642b-1
>
Neat.
But for the trig functions, we don't really need anything complicated.
We just need many bits of pi (or 1/pi), which Maxima can already
compute. Then just reduce 2^2048 mod 2*pi (or mod pi, keeping track
of whether the quotient is even or odd to get the sign of sin right).
We don't even need all the bits of 1/pi or of the product: multiplying
by 2^2048 just shifts 1/pi left by 2048 bits, and the bits that land
above the binary point contribute an integer multiple of pi, which
goes away anyway. Only the 50-100 or so bits just below the binary
point matter.
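That reduction can be sketched in plain Python (chosen here only
because its standard-library `decimal` type gives arbitrary precision;
the helper names `arctan_recip` and `dsin` are mine, and this is an
illustration of the idea, not how Maxima's nfloat works). It computes
pi to ~800 digits with Machin's formula, reduces 2^2048 mod 2*pi, and
sums the sine Taylor series:

```python
from decimal import Decimal, getcontext

# 2^2048 has ~617 decimal digits; carry ~180 guard digits beyond that
# so the reduction mod 2*pi still leaves ~50 good digits.
getcontext().prec = 800

def arctan_recip(x):
    """arctan(1/x) for a small integer x, by its Taylor series."""
    x = Decimal(x)
    xsq = x * x
    power = total = 1 / x              # current power 1/x^(2k+1)
    k = 1
    limit = -(getcontext().prec + 5)
    while power.adjusted() > limit:    # stop once terms are below precision
        power /= xsq
        term = power / (2 * k + 1)
        total = total - term if k % 2 else total + term
        k += 1
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)

def dsin(x):
    """sin(x) by Taylor series; fine for |x| <= pi at this precision."""
    term = total = x
    xsq = x * x
    n = 1
    limit = -(getcontext().prec + 5)
    while term.adjusted() > limit:
        term = -term * xsq / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    return total

# Reduce mod 2*pi (so the sign of sin comes out right), then center
# the argument on 0 so |r| <= pi.
r = Decimal(2 ** 2048) % (2 * pi)
if r > pi:
    r -= 2 * pi
result = dsin(r)
print(str(result)[:52])   # first ~50 digits; compare with %o7 above
```

The ~800-digit working precision is the whole trick: the quotient
2^2048/(2*pi) has ~616 digits, so pi needs that many digits plus the
~50 you want to keep, exactly as described above.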
Ray