looking for bottlenecks, was: property lisp hack



On Fri, Aug 10, 2012 at 3:11 PM, Robert Dodier <robert.dodier at gmail.com> wrote:

> ...If we are looking for places to speed up Maxima,


That's a whole new discussion....


> the first place I'd look is the assume database. I seem to recall that the
> database is consulted very often (by ALIKE1? I don't remember for sure)


No.  The assume/is system is never called by ALIKE1, as can be verified by
looking at a dozen lines of code.  Perhaps you are confusing it with the
fact that LIKE -- which is used for is(a=b) -- calls ALIKE1.
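
As I recall, LIKE is only a couple of lines: it normalizes special
representations and hands off to ALIKE1.  Roughly (a sketch from memory,
not the verbatim source):

    (defun like (x y)
      ;; structural comparison only -- nothing in here consults
      ;; the assume database
      (alike1 (specrepcheck x) (specrepcheck y)))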

> and each consultation causes many comparisons, including some that
> seem nonsensical -- stuff like "is 1b0 greater than 1d0" as I recall.
> We could try to reduce the number of consultations and also try to
> reduce the time spent on each one.
>

It would be nice to speed up assume/is, like anything else, but do you have
any evidence that it slows down much of Maxima, or even that it is called
too often?  No doubt there are cases where the database is consulted more
often than necessary, but my impression is that Maxima in fact does not use
it *enough.*  For example, assume(equal(pi,%pi))$ sin(pi) => sin(pi).
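
Spelled out as a session (the sin(pi) result is as above; the is() result
is what I'd expect, since the fact is in the database):

    (%i1) assume(equal(pi, %pi))$
    (%i2) is(equal(pi, %pi));
    (%o2) true
    (%i3) sin(pi);
    (%o3) sin(pi)

That is, is() can see the fact, but the simplifier never asks.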

In any case, it seems to me a *much* higher priority to make assume/is
more complete and powerful than to make it faster.  It misses some
embarrassingly elementary cases: assume(onep>1)$ is(2^onep>2) => unknown,
though monotonicity gives 2^onep > 2^1 = 2; and similarly
sign(2^(x^2)-1) => pnz, though x^2 >= 0 gives 2^(x^2) >= 1, so the answer
should be pz.
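
As a session, with the results claimed above:

    (%i1) assume(onep > 1)$
    (%i2) is(2^onep > 2);
    (%o2) unknown
    (%i3) sign(2^(x^2) - 1);
    (%o3) pnz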


> Second place to look is ALIKE1 itself, since it's called so often.
>

Indeed, ALIKE1 is called all over the place, which is no surprise, as it
is the Maxima equivalent of EQUAL.  But absent some radical change like
RJF's expression-hashing system, I'm not sure how much it could be sped
up.  I'm also not sure how many of the calls to it are non-trivial
(trivial = one or both arguments are atoms, the fast case) in real usage.
One minor change that would speed it up a bit would be to encode a[x] as
something like ((subscript) $a $x) instead of (($a array) $x), to avoid
special-casing arrays, but that would require reviewing and modifying
almost all code that touches arrays.  Another minor (but behavior-changing)
change would be to stop comparing Lisp arrays for value -- strange behavior
anyway, since no other arrays are compared for value -- which would save
one call to ARRAYP.  Neither of these changes would gain much, and I don't
see any other low-hanging fruit.
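
For concreteness, here is the shape of the fast case I mean -- a
deliberately simplified sketch, not the actual ALIKE1 (the real one also
ignores simplification flags on operators, handles the array special
case, etc.):

    (defun alike1-sketch (x y)
      (cond ((eq x y) t)               ; identical objects: cheap
            ((or (atom x) (atom y))    ; the "trivial" fast case
             (and (atom x) (atom y) (equal x y)))
            (t (and (alike1-sketch (car x) (car y))      ; operators
                    (alike1-sketch (cdr x) (cdr y))))))  ; arguments

If most calls in real usage bottom out in the first two clauses, there
isn't much left to win.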

> Third place is the factoring code -- if I'm not mistaken, the
> long execution time of some of the test scripts is due to trying to factor
> large expressions.
>

Not sure I follow your reasoning here.  Even if the test scripts spend a
lot of time on factoring, what does that demonstrate?  I can easily slow
down the test scripts in *any* particular area by giving them particularly
large problems, e.g. taylor(exp(sin(x)),x,0,1000).  The fact that the test
suite factors some large expressions (presumably because they previously
exposed bugs) doesn't tell us much about the *relative* speed/efficiency
of factoring vs. other operations.  In addition, do you have any reason to
believe that (a) slow factoring is a problem in actual usage, (b) we
aren't using appropriate algorithms, or (c) the algorithms aren't
implemented well?
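
That evidence would not be hard to gather, by the way.  Maxima's timer
facility should be able to tell us how much of a given workload is
actually spent in factor -- something like the following, assuming timer
can wrap a built-in like factor (trace can, so I expect timer can too):

    (%i1) timer(factor)$
    (%i2) run_testsuite()$        /* or whatever workload is at issue */
    (%i3) timer_info(factor);

If factor doesn't dominate, the point is moot; if it does, we'd at least
know which inputs are worth profiling.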

             -s