> I have thought about always translating Maxima's input into Lisp and
> then running the Lisp code, instead of interpreting the internal s-expr
> representation.
You could of course do this, but if you want to preserve current
Maxima semantics, in the absence of additional information
(declarations), you'd end up with simple things like f(x) being
translated to
(let* ((xval (simplifya (if (boundp '$x) $x '$x))))
  (simplifya (if (fboundp '$f)  ; assuming all functions are Lisp functions
                 ($f xval)
                 (list '($f) xval))))
(The current translator does not do this; instead it calls the
interpreter when it has no declarations, which leaves us back where we
started.)
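To see the dispatch concretely, here is a self-contained sketch of that
translated form. SIMPLIFYA here is a trivial stand-in that just returns
its argument (the real one applies all of Maxima's simplification
rules), and DEMO-F-OF-X is a made-up name for the translated body:

```lisp
;; Trivial stand-in for Maxima's simplifier: the real SIMPLIFYA
;; applies all simplification rules; this one returns its argument.
(defun simplifya (e) e)

;; Hypothetical translated form of f(x): look up $x if it is bound,
;; call $f if it is defined as a function, and otherwise build the
;; unevaluated Maxima form (($f) <arg>).
(defun demo-f-of-x ()
  (let* ((xval (simplifya (if (boundp '$x) $x '$x))))
    (simplifya (if (fboundp '$f)
                   ($f xval)
                   (list '($f) xval)))))
```

With $x unbound and $f undefined, (demo-f-of-x) returns the unevaluated
form (($f) $x); once $x has a value, that value is substituted instead.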
And of course simplifya is a non-trivial function -- that's where 1+1
becomes 2, for example. If you want to expand x+1 out, then you get
code like
(if (and (boundp '$x) (numberp $x))
    (+ $x 1)
    (simplifya (list '(mplus) 1 (if (boundp '$x) $x '$x))))
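The same pattern can be exercised standalone; again SIMPLIFYA is a
trivial stand-in for the real simplifier, and DEMO-X-PLUS-1 is a
made-up name for the translated x+1:

```lisp
(defun simplifya (e) e)  ; trivial stand-in; the real one simplifies

;; Hypothetical translated form of x+1: use Lisp + when $x holds a
;; number, otherwise build the symbolic sum ((mplus) 1 <x>) and hand
;; it to the simplifier.
(defun demo-x-plus-1 ()
  (if (and (boundp '$x) (numberp $x))
      (+ $x 1)
      (simplifya (list '(mplus) 1 (if (boundp '$x) $x '$x)))))
```

With $x unbound this yields the symbolic ((mplus) 1 $x); with $x set
to 2 it takes the fast Lisp branch and returns 3.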
And this assumes that Maxima "+" on Lisp numbers is the same as Lisp
"+", which is currently true (unless you foolishly add weird
simplification rules to "+"); on the other hand, it is not true for,
say, sqrt(-1), where Lisp gives an approximate (floating-point)
complex and Maxima gives the exact %i.
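The difference is easy to check at a Lisp prompt (the exact printed
form of the complex result may vary by implementation):

```lisp
;; Common Lisp: SQRT of a negative real contagiously produces a
;; floating-point complex number.
(sqrt -1)        ; e.g. #C(0.0 1.0)
;; Maxima instead keeps sqrt(-1) exact, simplifying it to %i
;; (internally the symbol $%i) rather than to a float.
```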
I suppose the advantage would be that you'd have a single code-base
for evaluation. On the other hand, do you really want to debug
running code that looks like that?
-s
PS Not to mention things like ev....