Multi-threaded Maxima

Hi Steve,

Numerous CL implementations now support multiple processors; the Scieneer CL has for over 8 years, and it allows
multiple Lisp threads concurrent access to the Lisp heap.

GC is a challenge and limits scalability: the Scieneer CL implementation uses a stop-and-copy GC running in a single
thread, but applications will typically scale well to 8 processors.

Reworking Maxima to allow multiple threads to run separate operations on the Maxima code base may not require a big rewrite.
This would not speed up a single operation, but it would help multi-threaded applications, such as a web server for an educational
application that calls Maxima from multiple threads servicing separate sessions.  It would also be a good start toward parallel
extensions to Maxima.

Even as is, Maxima can be run in a separate system process to perform a computation, and so take part in larger parallel,
and possibly distributed, computations.
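As a rough illustration of that process-per-session idea, here is a sketch in Python. The worker command below is a
stand-in for the `maxima` binary so the example runs anywhere; with Maxima installed you would substitute the actual
Maxima command line (check the flags against your installation -- they are not shown here).

```python
# Sketch: serve concurrent sessions by giving each one its own external
# worker process.  Separate OS processes never share a Lisp heap, so the
# host application needs no locking around the computations themselves.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_in_worker(expr: str) -> str:
    # Stand-in worker: a fresh Python process evaluating an arithmetic
    # expression.  In a real deployment this would invoke the maxima
    # executable instead (flags omitted; consult your Maxima version).
    cmd = [sys.executable, "-c", f"print(eval({expr!r}))"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Three "sessions" computing concurrently, each in its own process.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_in_worker, ["2+3", "7*6", "10-1"]))
print(results)  # ['5', '42', '9']
```

Because each session talks to an independent process, crashes and long-running computations in one session cannot
corrupt another, at the cost of process-startup overhead per call.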

Regards
Douglas Crosher


On 12/28/2010 12:10 PM, Steve Haflich wrote:
> Richard and Raymond and Stavros and probably nearly everyone else are
> commenting on a subject about which, I'll try to point out respectfully,
> they lack sufficient background to understand.  Symmetrical
> MultiProcessing (SMP) is not a simple, orderly subject like
> mathematics...
>
> Yes, most production Lisps these days have what is here being called
> "threads".  However, these implementations for the last couple decades
> have _not_ supported multiple processors, or at least, restrict access
> to the heap to a single thread and processor at a time.  This allows
> application code to be organized as separate threaded computations
> (which is much more straightforward and logical than trying to slice
> multiple unrelated computations onto a single thread) but it does _not_
> allow multiple processors to work simultaneously on multiple threads.
>
> There are some recent releases of true SMP Lisps from sbcl, Lispworks,
> and one is imminent for Allegro.  (Not sure about the others.)  There
> are different degrees of ambition in these implementations, but _all_
> require careful consideration and locking where parallel computations
> might come into contention.  Some things are locked automatically by the
> implementation -- others must be protected by the application code.
> Different implementations might have made different decisions about
> these, and therefore will have different overhead costs imposed by SMP
> processing.
>
> Common Lisp is defined with any number of global resources.  When the
> reader reads, for instance, it interns symbols.  Therefore having
> reading going on in multiple SMP threads requires a fastidious hash-table
> implementation, or else fastidious locking protocols, lest grave disorder
> result.  An example of grave disorder would be having two distinct
> similarly-named symbols interned in the same package.
>
> Even simple list operations like push and pop and property-list lookup
> are not SMP safe unless the application knows that only one thread has
> access to that list.
>
> GC is another issue.  I'm unaware of any serious Common Lisp
> implementation that has parallel gc, and most of the ways it could be
> implemented trade off gross inefficiency for minimizing gc lockout
> intervals.  Gross inefficiency is a bad thing when one is ultimately
> concerned about speed.  (It might be acceptable if one is interested
> only in maximizing the percentage reported by a processor usage
> monitor.)
>
> Anyway, writing (not to mention debugging) a complex application that
> exploits SMP is hellishly difficult.  Converting an existing large,
> complex application (such as Maxima) to use SMP is hellishly impossible
> without doing a serious rewrite.
>
> There may be specialized computations that could be offloaded to
> separate processors (e.g. using fork or specialized array processors)
> but probably not if they require anything more than the coarsest
> synchronization.
>
> Sorry to be so glum about this, but (as some have already pointed out)
> the things Maxima really needs to attend to have nothing to do with
> parallel processing.
> _______________________________________________
> Maxima mailing list
> Maxima at math.utexas.edu
> http://www.math.utexas.edu/mailman/listinfo/maxima
>