Runtime determination of share directories?



Aleksej Saushev wrote:
> Robert Dodier <robert.dodier at gmail.com> writes:
> 
>> I believe it is within the realm of possibility that we might stop
>> working around GCL's idiosyncrasies.
>>
>> I think it would be worth the time to compare Maxima + ECL against
>> Maxima + GCL on various platforms, considering ECL as a possible
>> replacement for GCL. If someone wants to do that & report to the
>> mailing list, that would be terrific.
> 
> ECL seems to work on more platforms and is in active development;
> GCL doesn't work on NetBSD at least and looks abandoned.
> On this account I don't even try to revive GCL.
> 

Well, as far as I understand, gcl is the only lisp supported by Maxima
that does not support an ffi and hence prevents you from using things
like MPFR, GSL or native BLAS. So if I had to make a decision as an
outsider it would be a no-brainer, provided ecl works (which I believe
it does). The ecl maintainer has been extremely responsive to issues in
ecl exposed by Maxima and has been more than willing to fix performance
bottlenecks whenever they were pointed out to him.

Obviously at least some people have an emotional connection to gcl due
to the history between Maxima and gcl.

One thing about the bigfloat package in Maxima that concerns me a lot
is that, for many special functions for example, you compute additional
digits and then truncate the output to the requested precision in order
to make the test suite pass on various lisps. This is a bad idea, since
it only takes a little bad luck to hit a case where numerical
instability gives you incorrect results. Using MPFR, for example, would
get rid of that crutch and also unburden you from the "boring" task of
maintaining that code. Given the stellar performance as well as the
excellent code quality of MPFR, I would assume that this is another
nail in the coffin for gcl support in Maxima.
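
To illustrate the point (this is plain C against MPFR's documented
interface, not Maxima code, and sin(1) at 212 bits is an arbitrary
choice of mine): every MPFR function returns a result correctly rounded
to the precision of the target variable, so there is no need to carry
extra digits and truncate afterwards.

    #include <stdio.h>
    #include <mpfr.h>

    int main(void)
    {
        mpfr_t x, s;
        mpfr_init2(x, 212);           /* requested precision in bits */
        mpfr_init2(s, 212);
        mpfr_set_ui(x, 1, MPFR_RNDN);
        mpfr_sin(s, x, MPFR_RNDN);    /* correctly rounded, no guard digits */
        mpfr_printf("sin(1) = %.60Rg\n", s);
        mpfr_clear(x);
        mpfr_clear(s);
        return 0;
    }

(Link with -lmpfr -lgmp.)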

In my personal experience I have seen libraries like quaddouble, which
rely on the IEEE conformance of the hardware's double arithmetic,
behave *extremely* badly: I can provide an example where the output on
an x86 running Solaris is correct for only 172 out of 212 bits when
computing the number of partitions of 10^5. On the same platform, using
quaddouble, the number of partitions of the first five hundred integers
is incorrect in about half the cases. So any time you are using some
extended precision library that has not been rigorously proven correct,
I get very nervous.
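
Part of why such libraries are fragile is that they are typically built
on error-free transformations like Knuth's two-sum, which are only
valid if every single operation is rounded to IEEE double exactly once.
A sketch of my own (not code taken from quaddouble):

    /* Knuth's two-sum: afterwards s + e == a + b exactly, but only if
       each of the six +/- operations is rounded to IEEE double exactly
       once.  If a compiler keeps intermediates in 80-bit x87 registers,
       the resulting double rounding can silently break this invariant. */
    static void two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        double bb = *s - a;
        *e = (a - (*s - bb)) + (b - bb);
    }

That kind of dependence on the exact rounding behaviour of the platform
may well be what is biting in the Solaris/x86 case above.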

Cheers,

Michael