replacing basic Fortran functions



>>>>> "Gregory" == Gregory Martin Pfeil <greg at technomadic.org> writes:

    Gregory> Playing with the slatec code a bit, I noticed that there are some
    Gregory> functions (like zabs) that already have a CL equivalent  and have
    Gregory> much better performance than the included translated-from-Fortran
    Gregory> code. I wrote a few macros that replace the Fortran in those cases,
    Gregory> along with a patch to maxima.system to integrate the macros.

Is zabs the bottleneck in some real application that you have?  If
not, then I care much less about it. :-)
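(For reference, the zabs case really is a one-liner in CL, since the
built-in ABS of a complex number computes the modulus — which, as far
as I know, is all SLATEC's ZABS does, modulo its overflow-avoiding
scaling.  A minimal sketch; the name zabs-cl is just for illustration:)

```lisp
;; Sketch: CL's built-in ABS on a complex float computes the modulus,
;; which is what SLATEC's ZABS returns (its scaling to avoid overflow
;; is the implementation's problem here, not ours).
(defun zabs-cl (zr zi)
  "Modulus of the complex number ZR + ZI*i, via the built-in ABS."
  (abs (complex zr zi)))

;; (zabs-cl 3.0d0 4.0d0) => 5.0d0
```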

    Gregory> What's the general policy on the Fortran, anyway? It seems like
    Gregory> translating it to Lisp is the worst of both worlds: it's not fast,
    Gregory> and it's hard to read. Hundreds of lines of SETF? Yikes. It seems
    Gregory> like two possible approaches are to compile the Fortran and link it
    Gregory> in using CFFI, or to leave it as it is and slowly transition the
    Gregory> Fortran to idiomatic CL. The CL code would then be both clear and
    Gregory> debuggable. I've translated a few of the functions, and they're often
    Gregory> 1/10th the length or less, while improving the speed.

I'm not sure there's any policy on Fortran, other than it was a way of
using well-known, proven algorithms with maxima.

If you don't like hundreds of lines of setf, you should also then
complain about the hundreds of lines of Fortran assignment (=)
statements.  :-)

The intent was not that you read and modify the translated Lisp, much
like you normally don't read and modify the assembly output from gcc.
If you find issues, you're supposed to fix the Fortran code.

CFFI and friends are out of the question right now, since there is no
CFFI for GCL (AFAIK).  But that's just my opinion; there are probably
differing opinions on this matter.

I have often thought about hand-translating the Fortran to Lisp, but
after a while, I always give up.  What's the point?  Here's something
that already works, and my translation wouldn't necessarily improve on
that.  I'd have to do lots of tests anyway, for which I have no test
and verification tools.

    Gregory> Using an FFI might give us better performance overall, but I think
    Gregory> the benefits of having easy-to-maintain code rank high, even when
    Gregory> talking about numerical code (and it's still faster than the f2cl
    Gregory> method).

As to speed, I haven't measured how any translated slatec routine
compares to Fortran, but I did compare MPFUN to an f2cl'ed version of
it.  The F2CL version was somewhere between 1.5 and 2.5 times slower in
computing thousands of digits of pi.  It turned out the bottleneck was
having to call TRUNCATE zillions of times, boxing up zillions of
numbers.  After fixing that (in f2cl itself, and some compiler
optimizations in CMUCL), I got the numbers above.  I think the major
bottleneck at that point was having to box all the floats when calling
and returning from functions.  There's no real Lisp way to get rid of
that easily.  Inlining helps for those Lisps that can inline
functions, but you don't want to inline everything everywhere.
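(To illustrate the boxing point: inside a single function, type
declarations usually let CMUCL/SBCL keep the floats unboxed, and the
boxing cost reappears only at the call boundary.  A hypothetical sketch,
not taken from the slatec code:)

```lisp
;; Sketch (hypothetical): with these declarations the compiler can keep
;; S and the AREF results as raw double-floats inside the loop; the
;; result is boxed only once, when the function returns.
(defun sum-of-squares (x)
  (declare (type (simple-array double-float (*)) x)
           (optimize (speed 3)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length x) s)
      (incf s (* (aref x i) (aref x i))))))
```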

However, having said all that, I am certainly not opposed to new
functions in Lisp, for the reasons you give.  I'd actually encourage
that.  I think there are lots of functions for which we have no
numerical version.

I'm just not motivated to replace f2cl'ed Fortran code with new Lisp
code.  But this is just my opinion.  

Ray