On 5/10/12 5:01 AM, Barton Willis wrote:
> With GCL:
>
> (%i63) cos(1.0d97);
> (%o63) 0.44800964805919
>
> With Julia or Clozure CL:
>
> julia> cos(1.0e97)
> 0.7496172085944125
>
> $ wx86cl64
> Welcome to Clozure Common Lisp Version 1.8-r15286M (WindowsX8664)!
> ? (cos 1.0d97)
> 0.7496172085944125D0
>
> And bigfloats:
>
> (%i60) cos(1.0b97), fpprec : 2000;
> (%o60) -7.0797267715593222213703559504[1944 digits]888604264163488232493341331b-2
>
> Let me guess that Clozure & Julia turn the calculation entirely over to the (Intel) microprocessor. What on earth does GCL do?
> Is this a binary32 / binary64 confusion problem?
>
> No, I'm not surprised that the Clozure & Julia values are likely completely wrong--it just makes me wonder what GCL is doing.
There are a few issues here. First, Lisp has to read 1d97 and convert
it to a float; the rounding in that conversion alone could account for
all of the difference. Second is how range reduction is done. Third,
1b97 is not the same number as 1d97.
Try cos(bfloat(1d97)); the result from Maxima will match ccl very closely.
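As a quick illustration (my own sketch, not from the original post, and
it assumes IEEE double-floats), the reading error is easy to see in any
Common Lisp by comparing the exact rational value of the double 1d97
with the integer 10^97:

  ;; RATIONAL converts the double to its exact rational value, so the
  ;; difference below is exactly the error made when reading 1d97.
  (- (rational 1d97) (expt 10 97))
  ;; The result is a huge integer, far larger than 2*pi, so cos of the
  ;; two values need not agree in even one digit.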
I suspect that the Intel fsin (fcos?) instruction is not used. Any
number above 2^63 (~ 1.8d19) is too large for that instruction, and I
think it just returns 0. The chip only has about 66 bits of
pi for range reduction. I don't know about ccl, but cmucl does accurate
range reduction using up to 1500 (?) bits of pi. It may also happen that
the other Lisps call the C library, which may also do accurate range
reduction using many bits of pi.
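As a rough back-of-the-envelope check (my own estimate, not from the
post): to reduce x mod 2*pi accurately you need pi to roughly e + p
bits, where e is the binary exponent of x and p is the working
precision. In Common Lisp:

  ;; Sketch of the estimate above; decode-float's second value is the
  ;; binary exponent, and 53 is the double-float precision.
  (let ((x 1d97))
    (+ (nth-value 1 (decode-float x)) 53))   ; => about 376 bits of pi

That is far more than the ~66 bits the x87 hardware carries, which is
why a large-precision (or exact rational) reduction is needed for
arguments this big.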
Perhaps a better test would be (cos (scale-float 1d0 120)). There
should be no problems reading 1d0 in any Lisp.
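A minimal sketch of that test (my own elaboration; the Maxima reference
line is only a suggestion): 2^120 is exactly representable as a double,
so the reading issue disappears and only the quality of the range
reduction is being exercised.

  ;; 2^120 is exact, so no rounding happens on input; any difference
  ;; between implementations here is purely range reduction.
  (let ((x (scale-float 1d0 120)))
    (assert (= (rational x) (expt 2 120)))
    (cos x))
  ;; A high-precision reference value can be had from Maxima, e.g.:
  ;;   fpprec : 64$  cos(2b0^120);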
Ray