ECL? (was: Re: Runtime determination of share directories?)



Richard Fateman wrote:
> Michael Abshoff wrote:
>> ....snip...

Hello rjf,


>> One thing about the bigfloat package in Maxima that concerns me a lot 
>> is that for example for many special functions you compute additional 
>> digits and then truncate the output to the requested precision to make 
>> the test suite pass on various lisps. This is a bad idea since it only 
>> takes a little bad luck to find a case where numerical instability 
>> gives you incorrect results. 
> I'm not a big fan of GCL myself. I use it only for Maxima.  I write code 
> using another lisp and move it to
> Maxima when I think it is mostly debugged. I find it hard to use the 
> underlying lisp support of GCL.
> Similarly for CMUCL/SBCL and CLISP
> But these may be reactions from an old fogey and to older versions 
> (pre-SLIME, for example). And maybe
> ECL is now better than when I tried it.  I believe that Maxima can run 
> in another (free, but not open source) lisp,
> namely Allegro's Express, which is what I use.
> 
> But your comments on bigfloats surprise me, since I wrote (the 
> original...) version of it.

Hmm, maybe I did not name the right package? I am referring to the
recent work of Dieter Kaiser implementing more special functions, and I
do recall him increasing the number of bits used internally for some
computations to ensure identical results on various lisps.
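
If I remember the approach correctly, the pattern is roughly the
following: do the computation with some guard bits on top of the
requested precision and round the result back down at the end, so that
the bits the user actually sees come out identical everywhere. A sketch
in C using MPFR rather than the actual Lisp code (the precisions and
the choice of sqrt are made up for illustration):

/* Sketch of the "guard bits" pattern: do the work at a higher internal
 * precision, then round the result down to the precision the caller
 * asked for.  Illustrated with MPFR, not Maxima's bigfloat code. */
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    const mpfr_prec_t target = 113;  /* bits the caller asked for       */
    const mpfr_prec_t guard  = 32;   /* extra internal bits (arbitrary) */

    mpfr_t x;
    mpfr_init2(x, target + guard);   /* work at the padded precision    */
    mpfr_set_ui(x, 2, MPFR_RNDN);
    mpfr_sqrt(x, x, MPFR_RNDN);      /* stand-in for a special function */

    mpfr_prec_round(x, target, MPFR_RNDN);  /* round down to target     */
    mpfr_printf("sqrt(2) to %ld bits: %.40Rg\n", (long) target, x);

    mpfr_clear(x);
    return 0;
}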

> Different results???
> 
> Do you have such an example?  I find it unlikely that such would be the 
> case since the bigfloat package uses integer arithmetic,
> and each implementation should get the same answers to every bit. Any 
> numerical instability would occur in every system.

As mentioned above, I would need to dig into my email archive. But if
bigfloat sits on top of integer arithmetic, any deviation would be a
great cause for concern, and such a deviation is unlikely to be the
case.
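
Just to spell out why a bigfloat package built on integer arithmetic
should be reproducible to the last bit: a bigfloat is essentially an
integer mantissa m plus a binary exponent e representing m * 2^e, and
every operation reduces to exact integer arithmetic followed by a shift
back to the mantissa width. A toy sketch in C on top of GMP (not
Maxima's actual code, and without proper rounding):

/* Toy "bigfloat": integer mantissa m and binary exponent e, value m * 2^e,
 * with a fixed mantissa width PREC.  Multiplication is an exact integer
 * multiply followed by dropping the low-order excess bits, so every correct
 * integer implementation yields the identical bit pattern. */
#include <stdio.h>
#include <gmp.h>

#define PREC 64  /* mantissa width in bits (arbitrary for the example) */

typedef struct { mpz_t m; long e; } bf;

/* c = a * b, renormalised to at most PREC mantissa bits (no rounding) */
static void bf_mul(bf *c, const bf *a, const bf *b)
{
    mpz_mul(c->m, a->m, b->m);            /* exact product              */
    c->e = a->e + b->e;
    size_t bits = mpz_sizeinbase(c->m, 2);
    if (bits > PREC) {                    /* drop the low excess bits   */
        mpz_fdiv_q_2exp(c->m, c->m, bits - PREC);
        c->e += (long) (bits - PREC);
    }
}

int main(void)
{
    bf a, b, c;
    mpz_init_set_ui(a.m, 3); a.e = -1;    /* a = 1.5                    */
    mpz_init_set_ui(b.m, 5); b.e = -2;    /* b = 1.25                   */
    mpz_init(c.m);
    bf_mul(&c, &a, &b);                   /* 1.875 = 15 * 2^-3          */
    gmp_printf("mantissa = %Zd, exponent = %ld\n", c.m, c.e);
    mpz_clears(a.m, b.m, c.m, NULL);
    return 0;
}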

> If there are two lisp systems that get different integer results, then 
> (at least ) one of them is just wrong.

Yes :)

> In the bigfloat package there may be the occasional floating point 
> arithmetic to get approximations for the rough magnitude of the number 
> of extra bits needed, but not the actual value of the bits.
> 
> 
>> Using MPFR for example would get rid of that crutch and also unburden 
>> you from the "boring" task of maintaining that code. Given the stellar 
>> performance as well as excellent code quality of MPFR I would assume 
>> that this is another nail in the coffin for gcl support for Maxima.
> 
> I use MPFR myself, but that relies on lots of other stuff, including C 
> compilers, diagnosis of what computer is being
> targeted, assemblers, ... .. If you use the  C-only [no assembler] code, 
> it is maybe 10X slower.

MPFR itself is pure C, but MPFR relies on GMP for the underlying
arithmetic, and GMP is partially written in assembler. I don't see how
requiring MPFR and GMP to be present on the system would be an issue,
though, since building a lisp from sources is often harder. Making it
optional and falling back to a pure lisp implementation would obviously
be a good idea for small devices like PDAs, since I guess you want to
make Maxima as sleek as possible on those devices.
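
For what it is worth, the dependency chain is visible directly at build
time: any program using MPFR has to link against both libraries, e.g.
something like "cc versions.c -lmpfr -lgmp" (the file name and exact
link line are only illustrative).

/* Print the versions of the two layers: GMP underneath, MPFR on top. */
#include <stdio.h>
#include <gmp.h>
#include <mpfr.h>

int main(void)
{
    printf("GMP  %s\n", gmp_version);        /* GMP's arithmetic kernels */
    printf("MPFR %s\n", mpfr_get_version()); /* MPFR layered on GMP      */
    return 0;
}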

> Bigfloat may be slower, but it relies only on arbitrary precision 
> integer arithmetic, part of every common lisp system.
> 
>>  In my personal experience I have seen libraries like quaddouble that 
>> rely on the IEEE conformance of the double arithmetic of the hardware 
>> to behave *extremely* badly, i.e. I can provide an example where the 
>> output on an x86 running Solaris is correct for 172 out of 212 bits 
>> when computing the number of partitions of 10^5. 
> Do you mean that the quad double library gets different answers on X86 
> on Solaris versus X86 on Windows?
> If your example is simply of a bad algorithm (for example, one that 
> needs more bits of precision), then what is
> the point?  I would encourage the use of quaddouble.  In fact it can be 
> used for integer arithmetic, probably faster than anything else, for a 
> certain range of integers.

Well, take the latest official release, build it on Solaris running on
an x86 CPU, and run make check. It failed its test suite on every
x86-based Solaris box I tried, and that is a bad, bad thing. If you use
quaddouble to do numerical work this is less of an issue IMHO, but I
see little benefit in getting potentially wrong results anywhere from
10 to 50% more quickly than MPFR if you want identical results on every
platform, which MPFR does deliver. And by the way: quaddouble is
released under the BSD license by researchers working for LBNL and
U.C. Berkeley, but at

   "http://crd.lbl.gov/~dhbailey/mpdist/";

one can read that


   "Incorporating this software in any commercial product requires a 
license agreement"


Maybe someone ought to clue these people in on what it means to release
software under the BSD license. And I am sure someone should point them
to the Wikipedia page about the BSD license to make 100% sure that they
will appreciate the irony.

> 
>> On the same platform using quaddouble the number of partitions for the 
>> first five hundred integers is incorrect in about half the cases, so 
>> any time you are using some extended precision library that is not 
>> proven to be correct and vigorously tested, I get very nervous.
>>   
> Maybe the algorithm requires more precision?

No, the problem with quaddouble is that it requires, at least on x86,
precisely setting the FPU control word, i.e. the rounding mode and so
on. On PowerPC or SPARC this is not possible, to my recollection, but
in our experience arithmetic operations there also deliver fewer than
the 212 bits promised. I have even seen cases where a single
multiplication of two numbers (and we did not attempt to hit a corner
case) produced results that differed in the last three or four bits.
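
For the record, the reason the control word matters: double-double
arithmetic is built on error-free transformations such as Knuth's
TwoSum, which only work if every intermediate result is rounded to a
53-bit IEEE double. The x87 FPU defaults to 80-bit extended precision,
so its precision field has to be forced down first. A sketch of the
idea (x86/glibc specific, and not quaddouble's actual code):

/* TwoSum needs strict double rounding; on x87 that means clearing the
 * extended-precision bits of the FPU control word first. */
#include <stdio.h>
#include <fpu_control.h>   /* glibc, x86 only */

/* TwoSum: s + err == a + b exactly, provided operations round to double. */
static void two_sum(double a, double b, double *s, double *err)
{
    *s = a + b;
    double bb = *s - a;
    *err = (a - (*s - bb)) + (b - bb);
}

int main(void)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;   /* force 53-bit rounding */
    _FPU_SETCW(cw);

    double s, err;
    two_sum(1.0, 1e-30, &s, &err);
    printf("s = %.17g  err = %.17g\n", s, err); /* err recovers lost bits */
    return 0;
}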

If the documentation tells me that I get 212 bits of precision, then it
should not matter which IEEE-conformant CPU I am running the code on
(modulo compiler bugs), but quaddouble does not, for the purpose the
Sage project uses it for, live up to that standard of reproducibility.
That does not mean it is not useful for other projects, but at least
Sage is not in the business of delivering potentially wrong results 50%
faster. AFAIK the issue is known to the quaddouble developers, and Carl
Witty and I discussed last night the possibility of attempting to fix
it by working around potential miscompilations, but this is a waste of
time since even if we get it to work for some examples it will still
not come close to the assurance that MPFR gives me. And correctness
should always come before speed in any software project.

> RJF
> 
> 

Cheers,

Michael