Subject: ECL? was ..Re: Runtime determination of share directories?
From: Richard Fateman
Date: Fri, 23 Jan 2009 11:21:04 -0800
Michael Abshoff wrote:
> ....snip...
>
> One thing about the bigfloat package in Maxima that concerns me a lot is
> that for example for many special functions you compute additional
> digits and then truncate the output to the requested precision to make
> the test suite pass on various lisps. This is a bad idea since it only
> takes a little bad luck to find a case where numerical instability
> gives you incorrect results.
I'm not a big fan of GCL myself. I use it only for Maxima. I write code
using another lisp and move it to Maxima when I think it is mostly
debugged. I find it hard to use the underlying lisp support of GCL.
Similarly for CMUCL/SBCL and CLISP.
But these may be the reactions of an old fogey, and to older versions
(pre-SLIME, for example). And maybe ECL is now better than when I tried
it. I believe that Maxima can also run in another (free, but not open
source) lisp, namely Allegro's Express, which is what I use.
But your comments on bigfloats surprise me, since I wrote the original
version of it. Different results???
Do you have such an example? I find that unlikely, since the bigfloat
package uses integer arithmetic, and each implementation should get the
same answer to every bit. Any numerical instability would occur in every
system. If two lisp systems get different integer results, then (at
least) one of them is just wrong. The bigfloat package may make
occasional use of floating-point arithmetic to approximate the rough
magnitude of the number of extra bits needed, but never for the actual
value of the bits.
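To illustrate the point (a hypothetical sketch, not the actual Maxima
bigfloat code; bf-add is a made-up name): represent a bigfloat as an
integer mantissa and an integer exponent, and addition is then just
exact integer shifts and adds, which every lisp must get identical to
the bit.

  ;; value = mantissa * 2^exponent, both ordinary (big)integers
  (defun bf-add (m1 e1 m2 e2)
    ;; Align to the smaller exponent with exact integer shifts, then add.
    ;; No rounding happens here, so every implementation agrees exactly.
    (let ((e (min e1 e2)))
      (values (+ (ash m1 (- e1 e))
                 (ash m2 (- e2 e)))
              e)))

  ;; (bf-add 5 0 3 -2)  =>  23, -2    ; that is, 5 + 3/4 = 23/4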
> Using MPFR for example would get rid of
> that clutch and also unburden you from the "boring" task of maintaining
> that code. Given the stellar performance as well as excellent code
> quality of MPFR I would assume that this is another nail in the coffin
> for gcl support for Maxima.
I use MPFR myself, but it relies on lots of other stuff, including C
compilers, diagnosis of what computer is being targeted, assemblers, and
so on. If you use the C-only [no assembler] code, it is maybe 10X
slower. Bigfloat may be slower, but it relies only on arbitrary-precision
integer arithmetic, which is part of every Common Lisp system.
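(For instance, exact bignum arithmetic comes with the language, so in
any conforming Common Lisp

  (expt 2 64)        ;; => 18446744073709551616, exactly
  (* (expt 10 30) 7) ;; => 7000000000000000000000000000000, exactly

with no external library or assembler involved.)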
> In my personal experience I have seen
> libraries like quaddouble that rely on the IEEE conformance of the
> double arithmetic of the hardware to behave *extremely* badly, i.e. I
> can provide an example where the output on an x86 running Solaris is
> correct for 172 out of 212 bits when computing the number of partitions
> of 10^5.
Do you mean that the quad-double library gets different answers on x86
under Solaris versus x86 under Windows? If your example is simply a bad
algorithm (for example, one that needs more bits of precision), then
what is the point? I would encourage the use of quaddouble. In fact it
can be used for integer arithmetic, probably faster than anything else,
for a certain range of integers.
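The reason the integer trick works: double-double and quad-double
arithmetic are built out of error-free transformations such as Knuth's
two-sum, so integers within its roughly 212-bit mantissa budget are
representable exactly. A sketch of the basic step (in lisp, just to show
the idea; the quaddouble library itself is C++):

  ;; Knuth's two-sum: for any two doubles a and b (barring overflow),
  ;; s is the rounded sum and err the exact rounding error, so that
  ;; s + err equals a + b exactly.
  (defun two-sum (a b)
    (let* ((s   (+ a b))
           (bb  (- s a))
           (err (+ (- a (- s bb)) (- b bb))))
      (values s err)))

  ;; e.g. (two-sum 1d16 3d0) returns a rounded sum plus an error term;
  ;; together they recover 1d16 + 3 with nothing lost.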
> On the same platform using quaddouble the number of partitions
> for the first five hundred integers is incorrect in about half the
> cases, so any time you are using some extended precision library that is
> not rigorously proven to be correct, I get very nervous.
>
Maybe the algorithm requires more precision?
RJF