Well, I perhaps should also point out---
(1) Introducing intervals fundamentally changes the nature of symbolic
identity, which is at the base of simplification.
In particular, equality doesn't work.
If x=y, then 2*x=x+y -- except that if x and y might be intervals, this
inference is false.
While there are ways around this (e.g. see the paper I posted for
suggestions and references), they are messy, essentially requiring that each
interval carry around a history of how it was computed, or else become
sloppy at some point.
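To make the failure concrete, here is a minimal Python sketch (this Interval class is invented purely for illustration, not any existing library): naive interval arithmetic treats the two operands of every operation as independent, so the identity x - x = 0, which any simplifier applies freely, no longer holds.

```python
class Interval:
    """Closed interval [lo, hi] with naive (dependency-blind) arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Treats the operands as independent, even when both are "the same" x.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 1)

# A simplifier would rewrite x - x to 0, but naive interval subtraction gives:
print(x - x)   # [-1, 1], not [0, 0]
```

Without a record of where each interval came from (the "history" mentioned above), the evaluator cannot distinguish x - x from x - y where y merely happens to have the same endpoints, so it must either give the wide answer or risk being wrong.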
It is clear how one can modify the existing evaluator to introduce
intervals -- by rewriting just about every component of it.
That is, one cannot just write a "simplifier for Interval( )" the way one
can write a "simplifier for cosine".
One must rewrite the "simplifier for =", the "simplifier for +", etc., to
handle intervals.
This could, of course, be done. But it would introduce bugs, it would
probably not have a handle on the clever things one can do to evaluate
interval expressions (e.g. polynomials), it would be quite slow, and it
would be of rather slim interest.
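One of those "clever things" is evaluating a polynomial in Horner form, which usually gives tighter interval bounds than term-by-term evaluation. A small sketch using (lo, hi) tuples; the helper names here are invented for illustration:

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

x = (0.0, 1.0)

# p(x) = x^2 - x over x in [0, 1], evaluated two ways:
termwise = isub(imul(x, x), x)            # x*x - x   -> bounds [-1, 1]
horner   = imul(x, isub(x, (1.0, 1.0)))   # x*(x - 1) -> bounds [-1, 0]

# The true range is [-0.25, 0]: Horner is tighter, though still not exact.
print(termwise, horner)
```

A naive rewrite of the simplifier would know nothing of such bound-tightening rearrangements, which is part of why the result would be of slim interest.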
So no one has done it. Indeed,
I would suggest that instead of doing interval arithmetic symbolically, one
could try to write programs of the form
Maximize(f(a,b,c), a in [r,s], b in [u,v], c in [w,z]); Minimize would then
of course be the same, but with -f(a,b,c).
That would give all the facility of intervals as usually implemented, and
much more, with symbols. Just write it.
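A crude sketch of that interface, assuming nothing beyond the description above (a serious implementation would call a rigorous global optimizer rather than the grid sampling used here; the names maximize/minimize are just placeholders):

```python
from itertools import product

def maximize(f, *boxes, n=50):
    """Estimate max of f over a box by grid sampling (illustration only;
    a real Maximize would use a rigorous global optimizer)."""
    grids = [[lo + (hi - lo) * i / (n - 1) for i in range(n)]
             for lo, hi in boxes]
    return max(f(*pt) for pt in product(*grids))

def minimize(f, *boxes, n=50):
    # As suggested above, Minimize is the same problem applied to -f.
    return -maximize(lambda *args: -f(*args), *boxes, n=n)

# Range of f(a, b) = a*b - a over a in [0, 1], b in [0, 2]:
f = lambda a, b: a * b - a
lo = minimize(f, (0, 1), (0, 2))   # true minimum is -1 (at a=1, b=0)
hi = maximize(f, (0, 1), (0, 2))   # true maximum is  1 (at a=1, b=2)
```

Note that this recovers the true range [-1, 1], whereas naive interval arithmetic on a*b - a, treating the two occurrences of a as independent, would give the wider [-1, 2].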
(2) Just as people sometimes "want" Maxima to do something like numeric
eigenvalues, they sometimes "want" to do interval arithmetic. We could say,
oh, we already do eigenvalues (symbolic OR numeric), with the same program.
But this is not a real response to the eigenvalue request: we added numeric
programs because the symbolic eigenvalue program is good only for trivial
size problems or special cases. That is, we needed a link to a serious
numerical program. Same for intervals.
(3) The symbolic interval stuff (with explicit min, max, ... in the
endpoints) is highly unlikely to provide any insight into values, and the
prospect of simplifying it -- after producing it -- is so remote [unless
numbers are inserted] that producing it in the first place is essentially a
fool's errand. I think that Mathematica retreated from its earlier attempt to do
this kind of work on symbolic endpoints, and now essentially leaves things
in (what we would call) Noun forms. Except for the errors that Mathematica
commits because of equality problems.
As for inventing a new evaluator for each new kind of object, clearly not a
first choice -- however, using the same evaluator for objects that violate
basic axioms of mathematics that are built into the evaluator and work for
every other kind of object -- that's a stretch.
Macsyma's evaluator and simplifier are "data driven" (an early form of
object-oriented programming, before that concept had a name). That works for
introducing certain new operations, as long as they don't mess with +, *,
=, and the like.
The Axiom system may be able to handle extensions more neatly, but I haven't
tried this.
Tellsimp rules on +, *, looking for interval terms, while possible, are
going to substantially slow down almost all computations. Not a great idea.
Recoding the lisp simplifiers for + and * for intervals could be faster, but
not by much.