> > I tried one of the tests proposed by Prof. Fateman in 'Speeding up
> > Lisp-based Symbolic Mathematics' - computing sum(a[i],i,1,N) for
> > different values of N. The timings on my PII-233 PC are ([N,time])
...
> > I think it's still O(n^2)
I have not read RJF's paper, but there are certainly various ways of
improving the asymptotic performance on cases like this. One way would
be to use some other representation for nary (sorted) function arguments
(in this case, the function is "+"), such as a heap. Another way would
be to defer sorting somehow, so that all N elements could be sorted at
once.
Both of these would require non-local changes to the Maxima system. For
example, functions outside the simplifier can assume that nary argument
lists start with numbers, then have constants, and only then
subexpressions involving variables.
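To make the second idea concrete, here is a minimal sketch in plain
Common Lisp (not Maxima's actual code; CMP is just a stand-in for the
internal comparison GREAT): collect all N summands first, then order
them with a single call to SORT, so the O(N log N) comparisons are
paid once instead of re-sorting the argument list on every incremental
simplification.

    ;; Hypothetical sketch of deferred sorting for an nary "+".
    ;; CMP is any strict "comes-before" predicate standing in for GREAT.
    (defun build-nary-args (summands cmp)
      ;; One sort over the whole collection: O(N log N) comparisons,
      ;; instead of N incremental merges into an already-ordered list.
      (sort (copy-list summands) cmp))

    ;; e.g. (build-nary-args '(4 1 3 2) #'<)  =>  (1 2 3 4)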
There is another way of improving real-life performance, though it
doesn't actually change the asymptotic performance. I suspect that the
bulk of the time in sorting is spent in the comparison function
("great", the internal version of Orderlessp) and not in list
manipulation, so if we can reduce the number of comparisons, that will
help.
The current algorithm for merging (a1+...+am)+(b1+...+bn) uses O(m+n)
comparisons when the two lists are the same length. However, when
unequal-length lists are merged, it takes something like
m+n-max(m,n)/min(m,n) comparisons where it could take many fewer: if
m=1, then only log2(n) instead of n. It will still require O(n) list
operations, though. Barton Willis and I have written a log2(n)
*insertion* routine for the set package; we haven't written a binary
merge, though that's certainly possible.
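For concreteness, here is a hedged sketch of what such a binary
insertion could look like (plain Common Lisp, not the set package's
actual routine): it locates the insertion point with about log2(n)
calls to the comparison function, though splicing the element back in
still costs O(n) list operations. CMP again stands in for GREAT.

    ;; Insert X into the already-ordered list TERMS, using about
    ;; log2(n) calls to CMP to find the position.
    (defun binary-insert (x terms cmp)
      (let* ((v (coerce terms 'vector))
             (lo 0)
             (hi (length v)))
        ;; Binary search for the leftmost admissible position for X.
        (loop while (< lo hi)
              do (let ((mid (floor (+ lo hi) 2)))
                   (if (funcall cmp (aref v mid) x)
                       (setf lo (1+ mid))
                       (setf hi mid))))
        ;; The splice itself is still O(n) list work.
        (append (subseq terms 0 lo) (list x) (subseq terms lo))))

    ;; e.g. (binary-insert 5 '(1 3 7 9) #'<)  =>  (1 3 5 7 9)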
Is performance in cases like this a real issue for users?
-s