Subject: looking for bottlenecks, was: property lisp hack
From: Richard Fateman
Date: Fri, 10 Aug 2012 15:43:46 -0700
On 8/10/2012 2:20 PM, Stavros Macrakis wrote:
> On Fri, Aug 10, 2012 at 3:11 PM, Robert Dodier
> <robert.dodier at gmail.com> wrote:
>
> ...If we are looking for places to speed up Maxima,
>
>
> That's a whole new discussion....
>
> the first place I'd look is the assume database. I seem to recall
> that the database is consulted very often (by ALIKE1? I don't
> remember for sure)
>
>
> No. The assume/is system is never called by ALIKE1, as can be
> verified by looking at a dozen lines of code. Perhaps you are
> confusing it with the fact that LIKE -- which is used for is(a=b) --
> calls ALIKE1.
>
> and each consultation causes many comparisons, including some that
> seem nonsensical -- stuff like "is 1b0 greater than 1d0" as I
> recall. We could try to reduce the number of consultations and
> also try to reduce the time spent on each one.
>
>
> It would be nice to speed up assume/is, like anything else, but do you
> have any evidence that this slows down much of Maxima or even that it
> is called too often? My impression is that Maxima in fact does not use
> the assume database /enough./ For example, assume(equal(pi,%pi))$
> sin(pi) => sin(pi). There are no doubt cases where it calls it too
> often as well.
>
> In any case, it seems to me like a /much/ higher priority to make
> assume/is more complete and powerful than to make it faster. It
> misses some embarrassingly elementary cases, e.g. assume(onep>1)$
> is(2^onep>2) => unknown, or equivalently sign(2^(x^2)-1) => pnz.
>
> Second place to look is ALIKE1 itself, since it's called so often.
>
>
> Indeed, ALIKE1 is called all over the place, which is not a surprise,
> as it is the Maxima equivalent of EQUAL. But absent some radical
> change like RJF's expression-hashing system, I'm not sure how much it
> could be sped up. I'm also not sure how many of the calls to it are
> non-trivial (trivial = one or both arguments are atoms, the fast case)
> in real usage. A minor change that would speed it up a bit would be to
> encode a[x] as something like ((subscript) $a $x) instead of (($a
> array) $x), to avoid special-casing arrays, but that would require
> reviewing and modifying almost all code that touches arrays. Another
> minor (but behavior-changing) change would be to stop comparing
> lisp-arrays for value -- this is strange behavior anyway, since no
> other arrays are compared for value; that saves one call to ARRAYP.
> Neither of these changes would gain much, and I don't see any other
> low-hanging fruit.
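For readers following along: schematically, alike1 is a recursive
structural comparison with the atom case handled first as the fast
path -- roughly the sketch below. This is an illustration only, not
the actual Maxima source, which also special-cases numbers,
subscripted variables, and lisp arrays:

(defun alike1-sketch (x y)
  (cond ((eq x y) t)              ; fast path: identical objects
        ((or (atom x) (atom y))   ; atom case: compare by value
         (equal x y))
        (t (and (alike1-sketch (car x) (car y))
                (alike1-sketch (cdr x) (cdr y))))))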
In fact, the tools available for profiling code in (some) lisps are
quite good. I suspect that the tools available 20 or 30 years ago in
(some) lisps are superior to the tools available even today in some
C-based systems. The program alike1 was the subject of much study
(two weeks of tweaking) precisely for the reasons given by Stavros --
decades ago, and, as I recall, mostly by me. Which is why I'm a
little squeamish about someone saying: oh, I looked at alike1 for an
hour, and since I can change the repository I'll just put in my
"fix", so long as it doesn't break the test suite. I'm even more
squeamish about someone doing that as quietly as one can within git.
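To make that concrete: in SBCL, for instance, getting a statistical
profile of a representative computation takes only a few lines. A
sketch -- sb-sprof is SBCL-specific, and run-a-typical-computation
here is a stand-in for whatever workload one actually cares about:

(require :sb-sprof)

;; Stand-in for a representative workload, e.g. calls into the
;; simplifier from within the Maxima package.
(defun run-a-typical-computation ()
  (dotimes (i 100000) (expt (+ i 1) 3)))

;; Sample the stack while the workload runs, then print a flat
;; report of where the time went (e.g. how often alike1 shows up).
(sb-sprof:with-profiling (:max-samples 50000 :report :flat :reset t)
  (run-a-typical-computation))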
I may be mistaken, but it seems to me that the current code base no
longer includes what is needed to build a Maxima system entirely in
Lisp (without plot or front end). Not too long ago I could build a
system without configure or make. There are lisp tools that could be
used (asdf, defsystem, quicklisp) and that are independent of the
operating system. Can't we use them?
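For instance, a complete asdf system definition is only a few lines.
A minimal sketch -- the component names below are illustrative, not
the actual layout of the Maxima sources:

;; maxima.asd -- hypothetical sketch of a Lisp-only build
(asdf:defsystem "maxima"
  :serial t                        ; compile and load in order
  :components ((:file "package")   ; package definitions
               (:file "alike")     ; alike1 and friends
               (:file "simp")      ; the simplifier
               (:file "compar")))  ; the assume/is database

;; then, in any lisp with asdf (or via quicklisp):
;;   (asdf:load-system "maxima")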
>
> Third place is the factoring code -- if I'm not mistaken, the
> long execution time of some of the test scripts is due to trying
> to factor
> large expressions.
>
>
> Not sure I follow your reasoning here. Even if the test scripts spend
> a lot of time on factoring, what does that demonstrate? I can easily
> slow down the test scripts in /any/ particular area by giving them
> particularly large problems, e.g. taylor(exp(sin(x)),x,0,1000). The
> fact that some large expressions are being tested for factoring
> (presumably because they have previously had bugs) doesn't tell us
> much about the /relative/ speed/efficiency of factoring vs. other
> operations. In addition, do you have any reason to believe that (a)
> slow factoring is a problem in actual usage or (b) we aren't using
> appropriate algorithms or (c) the algorithms aren't implemented well?
I agree with Stavros here, too. If factoring of integers or
polynomials is truly a bottleneck for someone, there are programs
that do exactly and only this; they are generally embedded in rather
weak computer algebra contexts, but they are there. People have
spent many years developing this stuff. It would be, in my opinion,
a terrible waste to duplicate that effort; it would arguably be a
waste even to re-program it in lisp. A Maxima inside a lisp with a
foreign function interface could access these programs if it were
truly necessary to go back and forth between those procedures and a
Maxima calculation. For example, I have linked MPFR (read about it)
to Lisp; it could more or less replace bigfloats with the work of
people who have spent too much of their lives doing this. It is also
possible to supply quad-double arithmetic, interval arithmetic, and
other features this way.
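To give the flavor: with CFFI, the core of such a linkage is well
under a page. A sketch only -- it assumes the 64-bit layout of
mpfr_t and a libmpfr the loader can find, and the wrapper name
mpfr-add-doubles is just illustrative:

(cffi:define-foreign-library libmpfr
  (:unix "libmpfr.so")
  (t (:default "libmpfr")))
(cffi:use-foreign-library libmpfr)

;; mpfr_t is a one-element array of this struct; the field types
;; below assume a 64-bit platform.
(cffi:defcstruct mpfr-struct
  (prec :long) (sign :int) (exp :long) (d :pointer))

(cffi:defcfun "mpfr_init2" :void (x :pointer) (prec :long))
(cffi:defcfun "mpfr_set_d" :int (x :pointer) (d :double) (rnd :int))
(cffi:defcfun "mpfr_add" :int
  (r :pointer) (a :pointer) (b :pointer) (rnd :int))
(cffi:defcfun "mpfr_get_d" :double (x :pointer) (rnd :int))
(cffi:defcfun "mpfr_clear" :void (x :pointer))

(defun mpfr-add-doubles (a b &optional (prec 256))
  "Add A and B at PREC bits via MPFR; return the result as a double."
  (cffi:with-foreign-objects ((x '(:struct mpfr-struct))
                              (y '(:struct mpfr-struct))
                              (z '(:struct mpfr-struct)))
    (mpfr-init2 x prec)
    (mpfr-init2 y prec)
    (mpfr-init2 z prec)
    (mpfr-set-d x (coerce a 'double-float) 0)  ; 0 = MPFR_RNDN
    (mpfr-set-d y (coerce b 'double-float) 0)
    (mpfr-add z x y 0)
    (prog1 (mpfr-get-d z 0)
      (mpfr-clear x) (mpfr-clear y) (mpfr-clear z))))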
If there are clever, mathematically sophisticated, skilled
programmers with time on their hands, there are plenty of things to
do that would be more useful. If someone is looking for a reasonably
well-specified task, there are many items that could be inspired by
recent additions to Mathematica, for a start.
One of the more preposterous claims made by someone (in marketing)
at Macsyma Inc. was that Macsyma went out of business because the
factoring algorithm was too slow.
RJF