I agree that if we are using fixnums for things that could grow too big, we
must catch that error. At least for Allegro CL, I think you can declare a
balance of safety vs. speed optimization so that either
(a) the result of fixnum+fixnum --> fixnum is not checked for overflow
(maximum optimization level),
or
(b) the result of fixnum+fixnum IS checked.
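(To make the two settings concrete, here is a minimal sketch in portable
Common Lisp, not Maxima code; the function names are just for illustration,
and the exact behavior at each safety level is implementation-dependent:)

;; With high speed and zero safety the compiler trusts the FIXNUM
;; declarations and can emit unchecked machine arithmetic; with safety on,
;; the (THE FIXNUM ...) assertion is checked, so an overflowing sum signals
;; an error instead of silently wrapping.

(defun add-unchecked (a b)
  (declare (optimize (speed 3) (safety 0))
           (fixnum a b))
  (the fixnum (+ a b)))          ; case (a): overflow is not detected

(defun add-checked (a b)
  (declare (optimize (speed 1) (safety 3))
           (fixnum a b))
  (the fixnum (+ a b)))          ; case (b): result is checked to be a fixnum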
The problem with unused generality that costs time or space is that one is
stuck trying to explain why Maxima's timings are slower than those of
contemporary systems like Maple and Mathematica. You could argue that Maxima
is more general, but if that generality is never apparent in any computation
you could actually force to complete, only the slowness shows.
RJF
_____
From: maxima-bounces at math.utexas.edu [mailto:maxima-bounces at math.utexas.edu]
On Behalf Of Stavros Macrakis
Sent: Tuesday, February 13, 2007 5:59 PM
To: Andreas Eder
Cc: Maxima at math.utexas.edu; Raymond Toy
Subject: Re: [Maxima] gcd problem
On 2/13/07, Andreas Eder <aeder at arcor.de> wrote:
> > I think we should just get rid of the f+, f-, f* macros/functions.
> > Well, actually redefine them so they don't assert that the operands
> > and results are fixnums.
> That is just what I am thinking, and in the course of my code cleanup
> action I'm also replacing all these fixnum-specific macros by the
> generic operations.
That may or may not be a good idea, but it is much more than "cleanup". I
would encourage you to keep "cleanup" edits separate from those which change
the functionality of the code and lose information about the original
programmer's intent.
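(For reference, the f+/f-/f* macros under discussion are roughly of the
following shape; this is a sketch, not the actual Maxima definitions:)

;; The fixnum-specific macro declares operands and result to be fixnums,
;; which is what makes unchecked overflow possible at low safety; the
;; "generic" redefinition simply drops the declarations and falls back on
;; ordinary, bignum-capable arithmetic.

(defmacro f+ (&rest args)
  `(the fixnum (+ ,@(mapcar (lambda (a) `(the fixnum ,a)) args))))

;; Redefined generically it would be no more than:
;; (defmacro f+ (&rest args) `(+ ,@args))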
As for the argument that modern machines are so much faster that efficiency
doesn't matter, that is only partially true. Here I am today with a 1 GHz
Athlon with 500 MB of RAM, and for some applications (not Maxima yet,
thankfully) it seems to be less responsive than a 10-person timeshared
0.5 MHz PDP-10 with 5 MB of RAM. Why? A lot of sloppy programming (notably
in the Windows OS), probably excused by "machines will get faster".
The old underlying systems (Lisp interpreter, Lisp compiler, operating
system) were written with much more attention to efficiency, and some of the
new Maxima code I've seen is not especially careful about it. It would be
interesting to know how much faster Maxima runs on real problems than old
versions of Macsyma; I suspect that, given the locality-of-access patterns of
Lisp in general and Maxima in particular, main-memory (not cache)
bandwidth is the limiting factor, not processor speed. I think there are
old published papers with timings of calculations.
In the particular case of CRE exponents, I am not sure. There are unlikely
to be many practical applications of exponents > 2^31, and certainly
Maxima's algorithms haven't been tuned for those cases. On the other hand,
*silent* failure is almost always unacceptable. *IF* we are going to use
integer arithmetic, we have to err out in the case of overflow. I don't
think there are mechanisms for doing that in all (any?) of the Lisps we run
in without using generic arithmetic.
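(For what it's worth, the only portable way I can see to get that error is
something like the hypothetical helper below, and it illustrates the point:
the check itself falls back on generic arithmetic.)

;; Hypothetical helper, not existing Maxima code: do the addition with
;; generic +, which may produce a bignum, then err out if the result no
;; longer fits in a fixnum.  Portable, but it gives up exactly the
;; unchecked fixnum-only arithmetic we were trying to keep.

(defun exponent-add (a b)
  (declare (fixnum a b))
  (let ((sum (+ a b)))
    (if (typep sum 'fixnum)
        sum
        (error "Exponent overflow: ~D + ~D does not fit in a fixnum" a b))))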
-s