I think the reasons for and against using OS multiprocessing are stated in
various places, so you could easily find arguments for "why do it this way."
But why limit yourself to the number of cores you have on your computer?
Cloud computing provides any number of processors.
I don't know how to do cloud computing.
While you could presumably run the same program on each processor, it
would not be necessary to run a parser/display/front end on
them all.
Yes I agree, the new Maxima process should launch with a hidden window.
Google's code blog has this about Lisp:
http://googlecode.blogspot.com/2010/05/better-performance-in-app-engine-with.html
As for whether this will get you your answer faster, overall, it depends on
the relative costs of computation versus communication.
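To make the computation-versus-communication tradeoff concrete, here is a minimal sketch in Python of a toy cost model (my own illustration, not something from this thread — the function names and the assumption of a fixed per-worker communication cost are mine):

```python
def serial_time(compute):
    """Time to do all the work on one processor."""
    return compute

def parallel_time(compute, comm, p):
    """Toy model: the work is split evenly across p workers, and each
    worker pays a fixed cost `comm` to ship its subproblem and result."""
    return compute / p + comm

def worth_parallelizing(compute, comm, p):
    """Parallelism pays off only when the communication overhead is
    small relative to the compute saved by splitting."""
    return parallel_time(compute, comm, p) < serial_time(compute)
```

For example, a job with 100 units of compute and 1 unit of communication is worth splitting across 4 workers, but a 1-unit job with 10 units of communication is not — which is the point about relative costs above.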
There have been conferences on parallel symbolic computation.
For the latest, see http://pasco2010.imag.fr/
There is a huge amount of material on parallelism in general, including
parlab at UC Berkeley.
http://parlab.eecs.berkeley.edu/
I suggest you do some reading on these topics before investing a lot of
time writing a program from scratch.
I am personally skeptical about parallel computing making a difference
in the way I approach Maxima, at the moment. If I have a small problem
I get the answer right away.
It used to be that if I had a large problem, it filled up memory and died;
splitting that up would probably not help. Memory is now much larger, but
comparatively slower, so who knows. Larger problems seem to be fairly rare
anyway. The bottleneck, so to speak, is the intellectual one of not knowing
how to do something, not that it is executing too slowly.
"It used to be that if I had a large problem, it filled up memory and died."
This still happens with pw.mac. A flaw in pw.mac is that it represents all
piecewise functions in one big expression in terms of the signum() function.
For convolutions of piecewise functions it would be better to use another
representation, since the resulting expression is immense if you try to do
repeated convolutions with pwint(). Maybe the FFT method is better; I have
to try that. Convolution of multiple piecewise functions is a typical
high-memory-usage symbolic computation problem when the number of
piecewise-defined functions being convolved is large, say around n > 15.
I also cannot see a way to do this in pieces (symbolically) yet, but I am
still thinking about doing that somehow. There is a lot of parallelism in
this computation. Maybe the functions could be expressed using arrays
instead of signum()'s.
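As a numerical sanity check of the FFT idea (my own illustration in plain Python, not pw.mac or pwint() code): convolving sampled functions through a radix-2 FFT gives the same answer as direct convolution, but in O(n log n) rather than O(n²) per pair, which is what makes repeated convolutions tractable numerically even when the symbolic form explodes.

```python
import cmath

def fft(a):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    # Inverse FFT via the conjugation trick.
    n = len(a)
    y = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in y]

def conv_naive(f, g):
    # Direct O(n^2) discrete convolution, for reference.
    out = [0.0] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            out[i + j] += x * y
    return out

def conv_fft(f, g):
    # Zero-pad to a power of two >= full output length, multiply spectra.
    m = len(f) + len(g) - 1
    n = 1
    while n < m:
        n *= 2
    F = fft([complex(x) for x in f] + [0j] * (n - len(f)))
    G = fft([complex(x) for x in g] + [0j] * (n - len(g)))
    return [x.real for x in ifft([a * b for a, b in zip(F, G)])[:m]]
```

Convolving a boxcar [1, 1] with itself gives the triangle [1, 2, 1] either way; chaining conv_fft is the cheap route for the repeated-convolution case.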
If you look at the parlab site, you will see that the current
conventional way to win with parallelism is to find the subproblems
of your task that exhibit obvious parallelism, hope that they
are already programmed by someone else, and use them.
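A hypothetical sketch of that pattern in Python: split a task into independent subproblems and hand the scheduling to a pool that someone else already wrote (concurrent.futures from the standard library). The numerical integration here is just a stand-in workload of my choosing; note also that Python threads do not speed up CPU-bound work because of the GIL, but the same structure carries over to a process pool.

```python
from concurrent.futures import ThreadPoolExecutor

def integrate(f, a, b, steps=10000):
    # Midpoint-rule quadrature of f over [a, b].
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def integrate_parallel(f, a, b, workers=4, steps=10000):
    # The "obvious parallelism": each panel of [a, b] is independent,
    # so map them over a prebuilt worker pool and sum the pieces.
    h = (b - a) / workers
    panels = [(a + i * h, a + (i + 1) * h) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda ab: integrate(f, ab[0], ab[1],
                                            steps // workers), panels)
    return sum(parts)
```

The win comes entirely from reusing the pool's scheduling code; the only work on our side was spotting that the panels are independent.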
I noticed that one of your links mentions an enhancement to LAPACK for
matrix multiplication that uses multiple cores. Maybe that would be
useful; I'm not sure.
Rich