parallel cell executing and wxmaxima



NO REPLY EXPECTED

On Sat, Jun 9, 2012 at 2:34 PM, Richard Hennessy
<rich.hennessy at verizon.net> wrote:

> As a start, I suggested just writing matrix multiplication as a parallel
> code process.  I thought that would not be hard, and indeed it would not
> be.  I have written parallel code in C++ and it is not too hard.  You
> could define a new function called parmatrixmul(A,B) and make it work
> only for matrices of numbers.  A quick scan of the entries before
> starting the process would be all that is needed.  Of course the lisp
> issues are hard for me to determine since I don't know lisp.
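
A rough sketch of what such a parmatrixmul routine might look like in C++,
using std::thread and splitting the rows of the result across hardware
threads.  The function name, the flat row-major layout, and the restriction
to doubles are illustrative assumptions, not anything that exists in Maxima
today:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // C = A * B, where A is n x m and B is m x p, all stored row-major.
    void parmatrixmul(const std::vector<double>& A,
                      const std::vector<double>& B,
                      std::vector<double>& C,
                      std::size_t n, std::size_t m, std::size_t p)
    {
        C.assign(n * p, 0.0);
        unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
        std::size_t rows_per_thread = (n + nthreads - 1) / nthreads;
        std::vector<std::thread> workers;

        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t lo = t * rows_per_thread;
            std::size_t hi = std::min(n, lo + rows_per_thread);
            if (lo >= hi) break;
            // Each worker writes a disjoint block of rows of C,
            // so no locking is needed.
            workers.emplace_back([&A, &B, &C, m, p, lo, hi]() {
                for (std::size_t i = lo; i < hi; ++i)
                    for (std::size_t k = 0; k < m; ++k)
                        for (std::size_t j = 0; j < p; ++j)
                            C[i * p + j] += A[i * m + k] * B[k * p + j];
            });
        }
        for (auto& w : workers)
            w.join();
    }

The threading really is the easy part; the "quick scan" of the entries and
the copying between Maxima's own matrix representation and a flat numeric
buffer are where the Lisp-side work would be.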
>
> Another idea would be matrix inversion.  I guess it would be too hard to
> make Maxima fully parallel, and since the programmers are all volunteers,
> no one will want the job or be willing to take it on.  I am in school
> right now and have a lot of work on my plate, so I am too busy; otherwise
> I would do it myself in C++.  Then you could have the code, though since
> I am not a mathematician I might not do it the best way.  I doubt anyone
> would want my code as a basis for a parallel implementation anyway.  No
> one wants/wanted my pw.mac code.  It is not part of the distribution, so
> I conclude that any code I write is not likely to be accepted and
> incorporated into Maxima.
>
> Rich
>
>
>
> -----Original Message----- From: Steve Haflich
> Sent: Saturday, June 09, 2012 2:09 PM
> To: Richard Hennessy
> Cc: Richard Fateman ; maxima at math.utexas.edu
>
> Subject: Re: [Maxima] parallel cell executing and wxmaxima
>
> Pushback happens to ideas where one person wants others to do the work.
> (If you don't understand why, please drop by my house this afternoon --
> the lawn needs mowing.)
>
> But there are other good reasons for pushback in this case.  Converting
> Maxima to symmetric multiprocessing (SMP) would be hugely difficult.
> For current purposes, let's define SMP as an implementation where
> multiple cores can access and mutate the heap in parallel.
>
> - First, SFAIC only three candidate CL implementations support real SMP:
>  SBCL, CCL, and Allegro.  Their APIs differ somewhat, as do the
>  operations that are SMP safe and the data structures that must be
>  protected by locking.
>
> - It is hellishly difficult to convert an existing body of code to SMP.
>  It is merely extremely difficult to write a body of SMP code from
>  scratch, but that's another matter.
>
> - Debugging failures under SMP is extremely difficult and often
>  requires special expertise that has little to do with mathematics.
>
> - In general, the speed gain is often not great (except for problems
>  especially amenable to partitioning) and is sometimes actually
>  negative, since SMP-safe compilers often have to emit code that is
>  slower than non-SMP code.
>
> Converting Maxima to run under SMP would be a _huge_ project with
> perpetual cost.  The Maxima developers simply have more urgent tasks.
>
> Now, this isn't to reject the idea that certain specialized calculations
> could be parallelized.  (Several approaches have already been
> mentioned.)  One would be to run several independent Maxima executions,
> or even non-Lisp computational processes, all communicating via sockets
> or (mapped) files.  This is tricky to implement, and laborious to make
> portable across multiple platforms, but much more practical.
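
For what it is worth, a minimal sketch of that multi-process shape, using
POSIX popen() pipes rather than sockets, so it is Unix-only.  The maxima
command-line flags shown (--very-quiet, --batch-string) and the particular
jobs are assumptions for illustration, and all error handling is omitted:

    #include <cstdio>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        // Two independent Maxima jobs; each runs in its own process.
        std::vector<std::string> jobs = {
            "print(integrate(sin(x)^4, x, 0, %pi))$",
            "print(factor(2^67 - 1))$"
        };

        // Launch every job first so the processes run concurrently.
        std::vector<FILE*> pipes;
        for (const std::string& job : jobs)
        {
            std::string cmd =
                "maxima --very-quiet --batch-string='" + job + "'";
            pipes.push_back(popen(cmd.c_str(), "r"));
        }

        // Then collect each job's output from its pipe.
        for (FILE* p : pipes)
        {
            if (!p) continue;
            char buf[4096];
            while (fgets(buf, sizeof buf, p))
                std::cout << buf;
            pclose(p);
        }
        return 0;
    }

The coordination lives entirely outside the Lisp heap, which is exactly why
this is so much more practical than converting Maxima itself to SMP.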
>
> Another technique, perhaps applicable to specific things like matrix
> multiplication, would be to code the parallelization in foreign code.
> Again, differences in foreign function interfaces and internal array
> representations between platforms would require extensive code
> customization to make such code work on all platforms.
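
To make that concrete, here is a sketch of the kind of C-callable entry
point a Lisp FFI could load from a shared library.  The flat row-major
double layout is an assumption, and the OpenMP pragma requires compiling
with the appropriate flag (e.g. -fopenmp for gcc):

    // Plain C linkage so any CL implementation's FFI can call it.  The
    // caller is responsible for copying Maxima matrices into flat
    // row-major buffers and back out again.
    extern "C" void par_matrix_mul(const double* A, const double* B,
                                   double* C, int n, int m, int p)
    {
        for (int i = 0; i < n * p; ++i)
            C[i] = 0.0;

        // Rows of C are independent, so the outer loop parallelizes cleanly.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            for (int k = 0; k < m; ++k)
                for (int j = 0; j < p; ++j)
                    C[i * p + j] += A[i * m + k] * B[k * p + j];
    }

The function itself is portable; what differs between platforms and Lisps
is how the shared library is built and loaded and how arrays are passed
across the FFI boundary, which is exactly the customization burden above.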
> _______________________________________________
> Maxima mailing list
> Maxima at math.utexas.edu
> http://www.math.utexas.edu/mailman/listinfo/maxima
>