Hello *,
Physicists at the University of Karlsruhe work on large-scale calculations
of multiloop Feynman diagrams. The main tool is Form, which allows one to
work with expressions whose size is limited only by the size of the hard
disk. But this program is not sophisticated; it cannot, for example,
calculate polynomial gcd's, and sometimes this is necessary. So, recently,
a hybrid approach has been used: some sub-problems are sent via a pipe to
another system, which adds the rational expressions with gcd cancellation,
and the results are read back through the pipe. The program Fermat has been
used for this. But there are some problems with Fermat, and I proposed
using something more standard. Maxima is one possibility being considered.
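To make the intended coupling concrete, here is a minimal sketch of how a
Maxima process could sit at the far end of such a pipe (the command-line
flag and settings below are only my assumption of a reasonable setup, not
a description of the existing Fermat link):

    /* Assumes Maxima is started as "maxima --very-quiet", so that no
       banner and no %i/%o labels clutter the output stream.          */
    display2d : false$   /* print results on one line, easy to parse back */

    /* Form would then write plain commands into the pipe and read back
       one line per result, for example:                                */
    grind(factor(x^2 - 1))$   /* prints (x-1)*(x+1)$ in re-readable syntax */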
So, I was given a test file with typical example problems to do some
benchmarking. It contains about 180000 separate calculations and is about
1 GB in size. The calculations are completely independent; each of them is
a sum of a few rational expressions, and the task is to bring them over a
common denominator and cancel the gcd.
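To illustrate, in Maxima terms, what each of these calculations amounts to
(the expression below is made up; the real ones come from the test file):

    /* One such calculation: add a few rational expressions, bring them
       over a common denominator, and cancel the gcd.                  */
    e : a/(x^2 - 1) + b/(x + 1) + c/(x - 1)$

    ratsimp(e);   /* puts the sum over the common denominator x^2 - 1 and
                     cancels the gcd of numerator and denominator        */
    /* The gcd algorithm used by rat/ratsimp is selected by the option
       variable 'gcd' (cf. question 2 below).                          */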
I have several questions for the gurus:
1. How can I disable the history mechanism that assigns results to %i and
%o? A steady, linear loss of available memory is not tolerable.
2. Which gcd algorithm should I use? Some of them contain bugs.
3. The platforms to be used are Pentium 4, AMD Opteron, and Itanium 2 (a
32-processor SGI Altix server), all running SUSE Linux. Which Lisps can run
on all of them? And what about 64-bit addressing: is it possible? Is it
desirable?
4. Are there upper limits on the memory that Maxima on each of these Lisps
can use? How can I increase them?
Best regards,
Andrey