Subject: ECL? was ..Re: Runtime determination of share directories?
From: Michael Abshoff
Date: Fri, 23 Jan 2009 17:33:56 -0800
Raymond Toy wrote:
> Michael Abshoff wrote:
>> Raymond Toy wrote:
Hi Raymond,
>>> But Maxima can't run without a lisp, though it does run well (quite
>>> well, IMNSHO) without MPFR/GMP. :-)
>>>
>> Absolutely, but given the periodically reappearing discussion about
>> easier use of external libraries written in C/C++/Fortran, something
>> worthwhile to think about IMHO would be MPFR. Since I am not doing any
>> of the work, this is merely a suggestion.
>>
> The best part of using an ffi is having my lisp die because I called the
> foreign function incorrectly. Or because it decided an error happened and
> that exiting was the best way out. :-)
Well, great potential speedups do not come for free :)
> (But I do use FFI for stuff, as needed.)
>
>>> Do you have such an example? I'd really like to see it, since I
>>>
>> I need to dig it out of the Sage bug tracker and that will take some time.
>>
> I would certainly like it, but if it's too much work, that's ok.
Ping me off list in about a week if you don't hear from me. I know it
was a commit I did (or at least a bug I reported), so it should be
reasonably easy to find. If I can get hold of a power supply on the
trans-Atlantic trip home, I will try to dig it up. I hope my
recollection is not wrong, but I do recall being very surprised to see
the failure.
>> AFAIK the advantage of qd is that it sets the rounding precision to
>> 53 bits and hence does not rely on the 80-bit representation. There is
>> a well-known paper showing that compilers optimizing expression trees
>> will cause precision loss, but qd, at least to my understanding,
>> should not be affected by this issue.
>>
>>
> Setting the rounding precision to 53 bits isn't actually enough. I saw
> some Java numerics slides that explained this.
I do recall some paper discussing the problem, but I do not have time to
look into this right now. If you find the slides, feel free to send them
to me off list.
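Just to make the 53-bit trick concrete, here is a minimal sketch of what
such a precision fix can look like on Linux/x86 with glibc's
fpu_control.h; it only illustrates the technique and is not necessarily
what qd does internally:

    /* Sketch: put the x87 FPU into 53-bit (double) precision.
     * Assumes Linux/x86 and glibc's <fpu_control.h>. */
    #include <fpu_control.h>

    static void set_fpu_double_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);         /* read the current control word     */
        cw &= ~_FPU_EXTENDED;   /* clear the precision-control field */
        cw |= _FPU_DOUBLE;      /* select 53-bit significands        */
        _FPU_SETCW(cw);         /* write the control word back       */
    }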
> Even though the precision was 53 bits, the exponent is still 15 bits,
> so numbers don't underflow/overflow as you would expect for double
> precision. The slides explained how to solve this problem, but it
> added a factor of 2-4 in execution time, or something like that.
Fair enough. But given that quaddouble is faster than MPFR it does sound
very much like it does not do this.
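The exponent point is easy to illustrate, by the way. The following
little C program is only a sketch, and its result depends on where the
compiler keeps the intermediate: x86-64 code using SSE2 will typically
print inf, while 32-bit x87 code may print a finite value even with the
precision field set to 53 bits.

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        volatile double big = DBL_MAX;
        /* In strict IEEE double arithmetic big * 2.0 overflows to +inf,
         * so the result is +inf.  If the intermediate lives in an x87
         * register, the 15-bit exponent keeps it finite and the result
         * is DBL_MAX / 2. */
        double r = big * 2.0 / 4.0;
        printf("%g\n", r);
        return 0;
    }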
Thanks for the input; I will think about this and hopefully come up with
a solution to make the quaddouble tests work. One potential problem
might be that some of the other FPU control flags differ by default
between Linux and Solaris, for example, and since quaddouble only sets
the rounding precision this might cause the discrepancy. Note that,
oddly, Solaris only offers a kernel-level interface for setting FPU
control words, so one has to resort to inline assembly to make it work
in userspace. Back in the day Linux had some problems with restoring
FPU state on context switches, but I would assume that Solaris 10 does
not suffer from this problem.
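The userspace workaround I have in mind is along the following lines; a
sketch assuming GCC-style inline assembly on x86, and I have not
verified it on Solaris:

    /* Set the x87 precision-control field (bits 8-9 of the control
     * word) to 53 bits (PC = 10b).  fnstcw stores the 16-bit control
     * word to memory, fldcw loads it back. */
    static void x87_set_double_precision(void)
    {
        unsigned short cw;
        __asm__ __volatile__ ("fnstcw %0" : "=m" (cw));
        cw = (unsigned short)((cw & ~0x0300) | 0x0200);
        __asm__ __volatile__ ("fldcw %0" : : "m" (cw));
    }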
I have other code to maintain in Sage that uses the same algorithmic
tricks with doubles to get fast multi-double arithmetic. It also has
serious issues with precision and rounding, and it seems to expose
Solaris-specific gcc bugs at higher optimization levels, so the problem
seems to be unrelated to the quaddouble-specific implementation.
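For context, the trick I mean is the error-free transformation at the
heart of double-double arithmetic, e.g. Knuth's TwoSum. A rough sketch
(the names are mine, not from qd or the Sage code), which is only
correct if every operation is really rounded to 53 bits and the
compiler does not reassociate:

    /* Knuth's TwoSum: returns s = fl(a + b) and the exact rounding
     * error e, so that a + b == s + e holds exactly.  Excess x87
     * precision or -ffast-math style reassociation breaks the error
     * term, which is the kind of failure discussed above. */
    typedef struct { double s, e; } twosum_t;

    static twosum_t two_sum(double a, double b)
    {
        twosum_t r;
        double s  = a + b;
        double bb = s - a;               /* part of b absorbed into s */
        r.s = s;
        r.e = (a - (s - bb)) + (b - bb); /* exact error of the addition */
        return r;
    }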
> Ray
Cheers,
Michael