Yes, exactly.
Rich
From: Stavros Macrakis
Sent: Monday, December 27, 2010 10:15 PM
To: smh at franz.com
Cc: Richard Hennessy; maxima at math.utexas.edu; Raymond Toy
Subject: Re: [Maxima] Multi-threaded Maxima
Steve,
I was not assuming that Lisp threads could use multiple processors, only that we could *somehow* delegate work to multiple processors, possibly using multiple processes. For comparison, the statistical language R (which is also a dynamic language with GC etc.) has several multiprocessor packages, all using multiple processes. Despite the overhead of interprocess communication, they can be worthwhile even for fairly small calculations.
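For illustration, the process-based approach those R packages take looks roughly like this in Python, using the standard-library multiprocessing module (the squaring task is just a stand-in for a real numerical computation):

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for a real numerical task; each call can run in a
    # separate OS process, sidestepping any shared-heap restriction.
    return x * x

if __name__ == "__main__":
    # Arguments and results travel over interprocess communication,
    # so there is per-item overhead -- worthwhile only when the work
    # per item pays for it.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))
```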
Of course, many numerical statistical calculations are easy, even trivial, to parallelize.
I am skeptical that there are many symbolic calculations that are worth parallelizing. Of course, many people use Maxima as a numerical system -- so in that case....
-s
PS I *am* aware of some of the relevant complexities, having worked on Ada runtime systems and having been peripherally involved in POSIX and Java threading.
On Mon, Dec 27, 2010 at 20:10, Steve Haflich <smh at franz.com> wrote:
Richard and Raymond and Stavros and probably nearly everyone else are
commenting on a subject for which, I'll try to point out respectfully,
they lack sufficient background. Symmetrical
MultiProcessing (SMP) is not a simple, orderly subject like
mathematics...
Yes, most production Lisps these days have what is here being called
"threads". However, these implementations for the last couple decades
have _not_ supported multiple processors, or at least have restricted
access to the heap to a single thread and processor at a time. This allows
application code to be organized as separate threaded computations
(which is much more straightforward and logical than trying to slice
multiple unrelated computations onto a single thread) but it does _not_
allow multiple processors to work simultaneously on multiple threads.
There are some recent releases of true SMP Lisps from SBCL, LispWorks,
and one is imminent for Allegro. (Not sure about the others.) There
are different degrees of ambition in these implementations, but _all_
require careful consideration and locking where parallel computations
might come into contention. Some things are locked automatically by the
implementation -- others must be protected by the application code.
Different implementations might have made different decisions about
these, and therefore will have different overhead costs imposed by SMP
processing.
Common Lisp is defined with any number of global resources. When the
reader reads, for instance, it interns symbols. Therefore having
reading going on in multiple SMP threads requires a fastidious hash table
implementation, or else fastidious locking protocols lest grave disorder
result. An example of grave disorder would be having two distinct
similarly named symbols interned in the same package.
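The interning hazard can be sketched in any threaded language. A minimal Python analogue (the Package class and its intern method are hypothetical stand-ins, not Maxima or Common Lisp code); the lock is what prevents two threads from creating two distinct symbols with the same name:

```python
import threading

class Package:
    """Toy package mapping names to unique symbol objects."""
    def __init__(self):
        self._symbols = {}
        self._lock = threading.Lock()

    def intern(self, name):
        # Check-then-create is a read-modify-write on the table.
        # Without the lock, two threads interning "FOO" at once could
        # both miss the lookup and each create a symbol, leaving two
        # distinct symbols with the same name -- exactly the grave
        # disorder described above.
        with self._lock:
            if name not in self._symbols:
                self._symbols[name] = object()  # stand-in for a symbol
            return self._symbols[name]

pkg = Package()
assert pkg.intern("FOO") is pkg.intern("FOO")
```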
Even simple list operations like push and pop and property-list lookup
are not SMP safe unless the application knows that only one thread has
access to that list.
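Push and pop are read-modify-write operations on a shared list head, and the usual remedy is to guard the structure with a lock. A sketch in Python under that assumption (the names here are illustrative, not from any real codebase):

```python
import threading

shared = []                      # shared list, like a Lisp plist
shared_lock = threading.Lock()   # guards every access to `shared`

def locked_push(item):
    # A push reads the head, allocates, and writes the head back;
    # the lock makes concurrent pushes serialize instead of silently
    # losing updates.
    with shared_lock:
        shared.insert(0, item)

def locked_pop():
    with shared_lock:
        return shared.pop(0) if shared else None

threads = [threading.Thread(target=locked_push, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(shared) == 8   # no pushes were lost
```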
GC is another issue. I'm unaware of any serious Common Lisp
implementation that has parallel gc, and most of the ways it could be
implemented trade off gross inefficiency for minimizing gc lockout
intervals. Gross inefficiency is a bad thing when one is ultimately
concerned about speed. (It might be acceptable if one is interested
only in maximizing the percentage reported by a processor usage
monitor.)
Anyway, writing (not to mention debugging) a complex application that
exploits SMP is hellishly difficult. Converting an existing large,
complex application (such as Maxima) to use SMP is hellishly impossible
without doing a serious rewrite.
There may be specialized computations that could be offloaded to
separate processors (e.g. using fork or specialized array processors)
but probably not if they require anything more than the coarsest
synchronization.
Sorry to be so glum about this, but (as some have already pointed out)
the things Maxima really needs to attend to have nothing to do with
parallel processing.
_______________________________________________
Maxima mailing list
Maxima at math.utexas.edu
http://www.math.utexas.edu/mailman/listinfo/maxima