> -----Original Message-----
> From: maxima-bounces at math.utexas.edu
> [mailto:maxima-bounces at math.utexas.edu] On Behalf Of Andreas Eder
>
> Well, depending on the lisp implementation replacing memq gives me
> between 7% and 10% speedup when running the test-suite. I think
> this isn't so little.
>
That suggests that member or memq itself is doing very little, since the time to
call it dominates its actual computation.
If the second argument is a constant list, which is probably a frequent case,
perhaps the call should be expanded into something else.
e.g. (memq x '(mplus mtimes)) should not be
  (member x '(mplus mtimes) :test #'eq)
but should be
  (or (eq x 'mplus) (eq x 'mtimes))
or perhaps
  (case x ((mplus mtimes) t))
In Allegro CL, these result in about 12 instructions, no function calls.
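One way to get that expansion automatically is a compiler macro that open-codes
calls whose second argument is a quoted list of symbols. The sketch below is
illustrative only, not Maxima's actual code, and it assumes memq is (or becomes)
an ordinary function. Note that the OR form returns only T or NIL rather than the
tail of the list, which is adequate only where memq is used purely as a predicate.

  (defun memq (x list)
    (member x list :test #'eq))

  (define-compiler-macro memq (&whole form x list)
    ;; only rewrite calls like (memq x '(mplus mtimes)) whose second
    ;; argument is a quoted list of symbols
    (if (and (consp list)
             (eq (first list) 'quote)
             (listp (second list))
             (every #'symbolp (second list)))
        (let ((var (gensym "X")))
          ;; (memq x '(mplus mtimes)) => (or (eq x 'mplus) (eq x 'mtimes))
          `(let ((,var ,x))
             (or ,@(mapcar (lambda (sym) `(eq ,var ',sym))
                           (second list)))))
        ;; anything else is left alone and calls the function
        form))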
I would have expected Bill Schelter to have written Maclisp's memq as a
macro definition which expands to member, in which case the run-time
overhead would be zero. If Bill did not do that, perhaps someone else will
do it, thereby eliminating any run-time efficiency discrepancy, as well as
the discussion of it.
It is also possible for anyone to see the macroexpansion result by, for example,
  (macroexpand '(memq a b)) --> (member a b :test #'eq)
or whatever it turns out to be.
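For concreteness, the macro version might be as simple as this (a sketch only,
not necessarily what the Maxima sources actually contain):

  (defmacro memq (x list)
    `(member ,x ,list :test #'eq))

  ;; checking the expansion, as suggested:
  ;; (macroexpand '(memq a b))  =>  (MEMBER A B :TEST #'EQ)

With such a definition the run-time cost of memq is exactly the cost of member,
and the whole question of a discrepancy goes away.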
> That is all pretty good common lisp stuff, but I agree it can be
> quite unclear about what is meant and I *do* try to change it to
> clearer code over time, e.g. look at the changes to simp.lisp
> where I just changed all the lambda-bindings to let-bindings. One
> of the big things on my agenda is to eventually get rid of all the
> maclisp argument calling stuff like (arg i) (setarg ..) etc.
I suggest that this embroidery around the edges should not be a big thing on
your agenda since it is a potential source of bugs, and the payoff is
non-existent (unless you do this in the context of making actual
improvements in functionality).
The idea that one should have to write in CL in some more-or-less raw
fashion is pretty much the antithesis of how, I think, large programs in CL
should be written. I think that CL is the low-level language on top of
which a higher-level application language is designed, by means of function
definitions, data abstraction (including CLOS), and macro definitions. This
high-level language in turn may be the low-level language for yet another
layer...
What is nice about CL is precisely that it supports this kind of language
building so well.
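A toy illustration of what I mean by layering (hypothetical, not taken from
Maxima): a DEFRULE macro that turns a rule description into an ordinary function
and registers it, so the code above it is written in terms of DEFRULE rather
than raw defun and hash-table plumbing.

  (defvar *rules* (make-hash-table :test #'eq))

  (defmacro defrule (name (arg) &body body)
    ;; each rule becomes an ordinary function, registered under its name
    `(progn
       (defun ,name (,arg) ,@body)
       (setf (gethash ',name *rules*) #',name)
       ',name))

  ;; written in the "higher-level language":
  (defrule double-negation (expr)
    (if (and (consp expr) (eq (first expr) 'not)
             (consp (second expr)) (eq (first (second expr)) 'not))
        (second (second expr))
        expr))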
By the way, it is easy to come up with an improvement that makes (say) SBCL run
faster but slows down GCL or CLISP, or vice versa. And it is possible to come up
with an "improvement" that makes EVERY system run slower. Attacking an
inefficiency without first finding out (by profiling) where it actually lies is
a really bad idea. If you are attacking a 5% slowdown of some sort, you
might consider doing something else with your time.
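For concreteness, here is a minimal sketch of what "measure first" can look
like. TIME is standard Common Lisp; SB-SPROF is SBCL-specific, so that part
assumes a particular implementation, and RUN-THE-TESTS is only a placeholder for
whatever form exercises the code you care about.

  (defun run-the-tests ()
    ;; placeholder workload; substitute the real entry point
    (dotimes (i 1000000)
      (member 'mtimes '(mplus mtimes) :test #'eq)))

  (time (run-the-tests))          ; run time and consing

  #+sbcl
  (progn
    (require :sb-sprof)
    (sb-sprof:with-profiling (:report :flat :max-samples 10000)
      (run-the-tests)))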
Regarding the recent spate of comments about "great" -- there are techniques
that, for large enough examples, replace the repeated use of "great" with
other approaches and make a time difference of 10X or 100X or more.
If such changes can be made to work on small enough examples too, we have a
real winner. Diddling with the definition of "great" itself is unlikely to
improve performance by much.
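As a generic illustration of the kind of technique I mean (a decorate-sort-
undecorate sketch, not Maxima's code, and EXPENSIVE-KEY is purely hypothetical):
if an expensive ordering predicate is called at every comparison, computing a
cheap key once per element and sorting on the keys turns O(n log n) predicate
calls into O(n) key computations.

  (defun expensive-key (expr)
    ;; stands in for whatever canonical-order key can be computed once
    (sxhash expr))

  (defun sort-by-cached-key (exprs)
    ;; decorate-sort-undecorate: each key is computed exactly once
    (mapcar #'cdr
            (sort (mapcar (lambda (e) (cons (expensive-key e) e)) exprs)
                  #'< :key #'car)))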
RJF
>
>
> Andreas