More on memq



If you replace memq by the appropriate macro expansion, my guess is that it
would be FASTER than member by the same or a larger factor.

Presumably what you have observed is the call overhead for memq to call
member.  In a macro expansion, that call overhead would disappear, and the
open-coding would provide additional speedups (though if sbcl/cmucl is
clever enough to do the open-coding itself, there would be no change there).
I think that most applications of memq are searching a list of length 2
or 3, and often a CONSTANT list.
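
For concreteness, here is roughly the kind of expansion I have in mind for a
call on a short constant list.  This is only an illustration, not actual
compiler output or existing Maxima code, and it assumes only the truth value
of memq is needed, as it usually is:

    (memq op '(mplus mtimes))
    ;; could open-code into something like
    (let ((x op))
      (or (eq x 'mplus) (eq x 'mtimes)))

No run-time call to member (or to memq) is left, which is where the call
overhead goes away.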

It seems to me that you should be able to roll back your changes and, as a
courtesy, test a memq macro with explicit open-coding of the tests for
short lists.
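
As a sketch of what I mean (not the existing definition; the cutoff of 3
elements and the fallback to member with an eq test are my own assumptions),
such a macro could look like this:

    (defmacro memq (item list)
      ;; For a short constant list, open-code the search as EQ tests.
      ;; Only the truth value is preserved, which is how memq is
      ;; normally used.  Otherwise fall back to MEMBER with an EQ test.
      (if (and (consp list)
               (eq (car list) 'quote)
               (listp (cadr list))
               (<= (length (cadr list)) 3))
          (let ((x (gensym)))
            `(let ((,x ,item))
               (or ,@(mapcar (lambda (elt) `(eq ,x ',elt))
                             (cadr list)))))
          `(member ,item ,list :test #'eq)))

That would let you run the same benchmarks against the macro version without
touching the rest of the code.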
 

> -----Original Message-----
> From: Andreas Eder [mailto:aeder at arcor.de] 
> Sent: Saturday, April 12, 2008 8:36 AM
> To: fateman at EECS.Berkeley.EDU
> Cc: maxima at math.utexas.edu
> Subject: Re: [Maxima] More on memq
> 
> Richard wrote:
> 
> >Andreas then asks "what would be gained by changing back?"
> >
> >The answer is evident:  it might be faster with some compilers.
> 
> Well, the fact is that by replacing memq it got faster by between
> 7% and 10% on sbcl, cmucl and clisp.
> I cannot speak for any other compilers.
> 
> Andreas
>