evaluation, simplification, etc.



On 2/24/2013 4:22 PM, Henry Baker wrote:
> I'm not aware of the innards of Maxima, so my comments may seem naive.
Probably your impression of what goes on is fairly accurate in general outline.
>
> Why does everything have to go through the _same_ evaluator, and then controlled by 1,000 different switches?
Most likely to provide the opportunity for synergistic effects, getting better results faster and using less memory. Sometimes doing things sequentially produces huge intermediate expressions, blocking the production of useful results.
>
> Wouldn't it make more sense to have multiple evaluators?   How hard can it be to implement an evaluator?  How many cases does it have to handle?
For each potential operator, the evaluator would have to decide what to do: mplus, mtimes, mexpt, $sin, %sin, $integrate, %integrate, $sum, %sum, etc. This could be done in some object-oriented fashion, e.g. (defmethod baker-eval ((s plus-type)) ...) if each operator were actually a different type, or plus-type could be tested with something like (eq (caar x) 'mplus).
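
As a rough illustration (not anything in Maxima's source), here is a minimal Common Lisp sketch of that kind of per-operator dispatch. The names baker-eval, baker-eval-op, and op-of are made up for this example, and only a few operators are handled:

(defun op-of (x)
  ;; operator symbol of a Maxima-style expression, or NIL for an atom
  (and (consp x) (consp (car x)) (caar x)))

(defgeneric baker-eval-op (op expr)
  (:documentation "Evaluate EXPR, dispatching on its operator symbol OP."))

(defmethod baker-eval-op ((op (eql 'mplus)) expr)
  (reduce #'+ (mapcar #'baker-eval (cdr expr))))

(defmethod baker-eval-op ((op (eql 'mtimes)) expr)
  (reduce #'* (mapcar #'baker-eval (cdr expr))))

(defmethod baker-eval-op ((op (eql 'mexpt)) expr)
  (expt (baker-eval (second expr)) (baker-eval (third expr))))

(defmethod baker-eval-op (op expr)
  ;; default: leave expressions with unrecognized operators alone
  (declare (ignore op))
  expr)

(defun baker-eval (expr)
  (cond ((numberp expr) expr)
        ((symbolp expr) expr)   ; a real evaluator would look up a binding here
        (t (baker-eval-op (op-of expr) expr))))

;; (baker-eval '((mplus) 1 ((mtimes) 2 3)))  =>  7

Each new operator then gets its own method rather than another branch in one big evaluator.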

>
> Re simplification:
>
> I may have similar thoughts here, since the _goals_ of different simplifiers are dramatically different.  Depending upon the _goal_, factoring in one context could be considered simplification, while expansion in another context could be considered simplification.
There may even be more than one context in the same expression.
>
> I think that simplification & variable substitution should be considered as different processes.
Currently that is the case.
>   In traditional Lisp interpreters, variable substitution is done along with expression reduction, but this is due to the _goal_ of producing a single value at the end.
Yep.

>
> I have been talking about both _type/value/range inferencing_ and compilation; these have entirely different goals.  So a "type/value/range inference macro" might be expanded during the inferencing stage, while a "compilation/efficiency-hack macro" might be expanded during a compilation stage.  E.g., a type inference macro might want to force case analysis, while it might be pointless and wasteful during execution to do case analysis.
I'm not sure I understand here.  The symbolic data objects (trees) can be evaluated in various ways; e.g., if all variables are assigned real-interval values, a result range can often be computed. That would be one way, but not the only way, of inferring a range. I suppose one can also do type inference by assigning types, though this doesn't work so well: e.g., sqrt(real) is not necessarily real, but can be complex (actually pure imaginary).  Usually Maxima lets the types fall where they may, not attempting to predict them ahead of time. Perhaps this is what your last sentence means.
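
To make the interval idea concrete, here is a small hypothetical Common Lisp sketch that walks a Maxima-style tree with each variable bound to a real interval and returns a bounding interval for the result. The names (interval, iv-eval, etc.) and the very limited operator coverage are my own for this example, not Maxima's:

(defstruct (interval (:constructor interval (lo hi)))
  lo hi)

(defun iv-add (a b)
  (interval (+ (interval-lo a) (interval-lo b))
            (+ (interval-hi a) (interval-hi b))))

(defun iv-mul (a b)
  ;; product range: min/max over the four endpoint products
  (let ((ps (list (* (interval-lo a) (interval-lo b))
                  (* (interval-lo a) (interval-hi b))
                  (* (interval-hi a) (interval-lo b))
                  (* (interval-hi a) (interval-hi b)))))
    (interval (reduce #'min ps) (reduce #'max ps))))

(defun iv-eval (expr env)
  ;; EXPR is a Maxima-style tree, e.g. ((MPLUS) X ((MTIMES) X Y));
  ;; ENV is an alist mapping variable symbols to intervals.
  (cond ((numberp expr) (interval expr expr))
        ((symbolp expr) (or (cdr (assoc expr env))
                            (error "no interval for ~S" expr)))
        (t (let ((args (mapcar (lambda (a) (iv-eval a env)) (cdr expr))))
             (case (caar expr)
               (mplus  (reduce #'iv-add args))
               (mtimes (reduce #'iv-mul args))
               (t (error "operator ~S not handled" (caar expr))))))))

;; (iv-eval '((mplus) x ((mtimes) x y))
;;          (list (cons 'x (interval 1 2)) (cons 'y (interval -1 3))))
;; => the interval [-1, 8], since x*y ranges over [-2, 6]

Note that this ignores correlations between occurrences of the same variable, which is exactly why naive interval evaluation can give loose ranges.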

Compiling is something usually done on programs, not on symbolic data object "expressions", though of course the distinction can be muddied.  Type inference in the user-language of Maxima, starting from mode_declare stuff, can in principle be used to translate Maxima programs into Lisp and then compile them.  This technology has probably not been exercised so much, though I think that towards the end of Macsyma Inc. there was more of a tendency to write extensions of Macsyma in the Macsyma user-language, and compile them.  This may have been more to disguise the source code than to speed it up.
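
For the flavor of it: a Maxima user-level definition such as f(x) := (mode_declare(x, float), x^2 + sin(x)) can in principle be translated into Lisp in which the declaration lets the Lisp compiler open-code float arithmetic. The Lisp below is only a hypothetical sketch of that kind of output, not what Maxima's translator actually emits:

(defun $f (x)
  (declare (double-float x))
  (+ (* x x) (sin x)))

;; (compile '$f)  ; would then produce a specialized compiled function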

Richard