Writing a new module?



Raymond Toy wrote:
> This was partly due to the translation, but mostly due to the fact that
> Fortran could assume things didn't overflow (or you didn't care), but
> Lisp couldn't.  So the bottleneck was in truncating floats to
> integers.  Fortran assumes the result fits in a Fortran integer; Lisp
> had to assume a bignum.

I think this is the main source of slowness in the Lisp translation of the
LAPACK routines. However, f2cl produces the type declarations automatically,
so perhaps they are not correct; if so, this may be fixable in f2cl.

>
> I don't know whether colnew could be made only 2x slower; there is
> certainly room for improvement, but it can't be as fast as the
> Fortran version, even if we called out directly to the Fortran code,
> simply because there are a lot of calls back into Maxima, which is slow.

Yes, half of the running time comes from callbacks to Maxima, which one cannot
speed up (perhaps one could compile the Maxima formulae, but when I tried, that
was buggy). The other half comes from the LAPACK and colnew routines
(translated from Fortran to Lisp). These are literally 100 times slower than
the Fortran versions, which is terrible. One may hope to cut this to 10 times
slower, which would be very nice, but at the end of the day it would only cut
total execution time in half (0.5 + 0.5/10 = 0.55 of the original time),
unfortunately.

The nice feature of scientific Python is that one can easily convert the
equivalent of the Maxima callbacks to code snippets in C that the system
compiles, links and runs at full speed. This is why the benchmark I mentioned
above shows much better performance using scipy with such compilation hacks
than with, for example, octave (it would be the same with scilab or matlab),
which always needs callbacks into its own language. This is why I advocated
translating the compute-intensive code to C or Fortran and compiling it.



-- 
Michel Talon