Hennessy speculated:
> ...Maxima could be as much as 3 ? 5 times faster if it used all [cores]
>
> Fateman replied:
> Not if it is waiting for memory. ...a number of basic algorithms that
> could be made faster,... if memory bandwidth were not an issue.
>
I suppose it all depends on the memory usage patterns (in particular, the
cache footprint) of the particular application.
I ran some simple tests on my 2-core (E8400) machine:
randpol(deg,coef):=sum((random(coef*2)-coef)*x^i,i,0,deg)$
test(d, c) := block([p1, p2], p1 : randpol(d, c), p2 : randpol(d, c),
factor(rat(p1*p2)))$
for i thru 10 do test(100,10^80)$
in two Maxima processes simultaneously (started by hand) vs. in one Maxima
process sequentially.
With two processes, the elapsed times were 54.8+53.5 = 108.3 (with Windows
showing 100 % CPU usage) and with one process, 93.4 (50% CPU usage). I
tried other parameters -- different sized polynomials presumably require
different amounts of memory for the factor calculation -- and saw similar
results.
I also tried a simple rational matrix inversion:
for i thru 30 do genmatrix(lambda([i,j],random(100)-50),50,50)^^-1$
and got 20.8+21.3 = 42.1 vs. 39.4.
So the advantage in these cases are minimal, as Fateman predicts.
There may well be symbolic Maxima applications with a small-enough memory
footprint that memory access isn't the bottleneck, but there's no guarantee
of that.
-s