"Stavros Macrakis" <macrakis at alum.mit.edu> writes:
...
> Which, if inf is to be treated as a number on which arithmetic can be
> performed, is reasonable.
>
> I am not sure what that means. You can't adjoin INF to the "numbers on which
> arithmetic can be performed" without breaking some of the
> field axioms.
True, but this is still done in, for example, real analysis: the
extended real line adjoins +inf and -inf to the reals, accepting that
some of the field axioms no longer hold.
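
As a concrete analogy (IEEE-754 floats adopt essentially the
extended-real conventions; this is not Maxima-specific), here is what
breaks -- cancellation, in this case:

    >>> inf = float('inf')
    >>> inf + 1 == inf           # inf absorbs any finite value
    True
    >>> (inf + 1) - inf          # so cancellation fails: not 1.0
    nan
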
> The Maxima objects INF, UND, etc. have in the past not participated
> in arithmetic at all -- they have been outside the arithmetic
> system, and serve to indicate objects that can't be represented in
> arithmetic. Once you have an INF in the system, you are committing
> yourself to losing information (I think we all agree that 1/(2*inf)
> = 1/(3*inf) = 0, and that 1/(1/(2*inf)) is not 2 (or 3), but UND.)
Then perhaps inf should stay out of arithmetic.
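
The information loss is easy to see in IEEE-754 floats, which do let
inf participate in arithmetic:

    >>> inf = float('inf')
    >>> 1/(2*inf), 1/(3*inf)     # both collapse to the same 0.0
    (0.0, 0.0)
    >>> 1/(1/(2*inf))            # the factor of 2 is gone for good
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ZeroDivisionError: float division by zero

At least the error signals the problem; it plays roughly the role UND
would.
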
> If you start doing things like 0*inf=0 and 1^inf=1, you are
> arbitrarily adding back information into the system, essentially
> guessing what the answer should be. I would rather be failsafe, and
> acknowledge that these are not well-defined, just like 0^0 (scary
> to disagree with Knuth, but...).
I think 0^0=1 is fairly standard, even beyond Knuth.
Are there any computer mathematical systems where 0^0 is not set to 1?
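
For what it's worth, the systems I can readily check agree with
Knuth: Python's integer power returns 1, and IEEE-754 defines
pow(0, 0) as 1 as well:

    >>> 0 ** 0
    1
    >>> import math
    >>> math.pow(0.0, 0.0)
    1.0
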
> There is a simple way out, I suppose. You don't ever give normal
> finite results when the inputs involve INF; you give some special
> object, e.g. 1/inf => zeroa (terrible name, but that's what Limit
> currently uses), and 1+zeroa stays 1+zeroa. 1/infinity has to be
> zeroC or something. Then, presumably, you make sure that
> (1+zeroa)^inf either doesn't simplify, or simplifies to UND. Are we
> willing to deal with that complexity?
Good question. It isn't pretty, but it doesn't lose as much
information as the alternatives do.
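
A minimal sketch of what such non-simplifying objects might look like
(hypothetical names and structure, not Maxima's implementation):

    # Special atoms that never simplify to ordinary numbers, so INF
    # inputs never yield normal finite results.
    class Atom:
        """An inert special object like zeroa or und."""
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.name
        def __radd__(self, other):
            # 1 + zeroa stays "1 + zeroa" rather than collapsing to 1
            return Frozen("%s + %s" % (other, self.name))

    class Frozen:
        """An unsimplified expression; further operations map to und."""
        def __init__(self, text):
            self.text = text
        def __repr__(self):
            return self.text
        def __pow__(self, exponent):
            return Atom("und")   # e.g. (1 + zeroa)^inf => und

    zeroa, inf = Atom("zeroa"), Atom("inf")
    print(1 + zeroa)             # prints: 1 + zeroa
    print((1 + zeroa) ** inf)    # prints: und

The point is just that arithmetic with these atoms builds expressions
instead of discarding them, so the decision to return UND is deferred
to the operations where it is actually forced.
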
Jay