Accuracy and error analysis (was Re: [Maxima] primes)

Richard Fateman writes:
>ok, long explanation.
[snip]
>But look again at the formula. It takes advantage of
>the fact that each occurrence of "d" is the same "d",
[snip]
> In general, if maxima is doing
>interval arithmetic with uncertainties, there is not just
>one d.  Each d is different.  e.g. if you are talking about
>one unit in the last place, then d might be a shorthand for something
>like any number in [-1.0e-18, +1.0e-18]
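To make the "each d is different" point concrete (my own illustration, not from Richard's message): naive interval arithmetic cannot see that two occurrences of d denote the same variable, so even x - x fails to collapse to zero. A minimal Python sketch, using a power-of-two width so the floating-point arithmetic is exact:

```python
def interval_sub(a, b):
    # (a_lo, a_hi) - (b_lo, b_hi), treating the two operands as
    # independent: worst case over all choices of points in each.
    return (a[0] - b[1], a[1] - b[0])

# Width 2**-10 is chosen (instead of 1.0e-18) so that 1 +/- d is
# exactly representable and the example is not lost to rounding.
d = 2 ** -10
x = (1.0 - d, 1.0 + d)

print(interval_sub(x, x))  # (-0.001953125, 0.001953125), not (0.0, 0.0)
```

If both occurrences were known to be the same d, the true range of x - x would be the single point 0; treating them as independent doubles the width instead.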

Thanks for the explanation. If there are N occurrences of d, each ranging
over the interval [d',d''], then we are essentially choosing a point at
random from an N-dimensional cube. On the other hand, statistically the
point is likely to lie close to the diagonal (t,t,t,...,t) of the N-cube.
So, since we have to accept some risk of being wrong in order to get
convergence, we might do so by assuming that the distance from the point
to the diagonal is at most epsilon, and we can also compute the probability
that this is the case. Can we get better control over convergence if we
adopt this more general definition of an interval, namely the set of all
points within a given distance of the diagonal, which reduces to the
ordinary interval [d',d''] when N=1?
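The probability in question can at least be estimated numerically. A minimal Monte Carlo sketch (my own, with illustrative names; the nearest point on the diagonal to a point x has every coordinate equal to the mean of x's coordinates, so the distance to the diagonal is the root sum of squared deviations from that mean):

```python
import math
import random

def dist_to_diagonal(point):
    # Distance from a point in R^N to the line {t*(1,...,1)}:
    # the closest diagonal point has each coordinate equal to
    # the mean of the given point's coordinates.
    m = sum(point) / len(point)
    return math.sqrt(sum((x - m) ** 2 for x in point))

def prob_within_eps(n, eps, lo=-1.0, hi=1.0, trials=100_000, seed=0):
    # Monte Carlo estimate of P(distance to diagonal <= eps) for a
    # point drawn uniformly from the cube [lo, hi]^n.
    rng = random.Random(seed)
    hits = sum(
        dist_to_diagonal([rng.uniform(lo, hi) for _ in range(n)]) <= eps
        for _ in range(trials)
    )
    return hits / trials
```

For example, prob_within_eps(3, 0.5) estimates how often a random point of the 3-cube lies within 0.5 of its diagonal; sweeping eps gives the risk/coverage trade-off described above.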
-- 
Ignorantly,
Allan Adler 
* Disclaimer: I am a guest and *not* a member of the MIT CSAIL. My actions and
* comments do not reflect in any way on MIT. Also, I am nowhere near Boston.