Subject: Accuracy and error analysis (was Re: [Maxima] primes)
From: Richard Fateman
Date: Mon, 02 May 2005 08:15:46 -0700
Various kinds of "automatic" error analysis have a long history.
You can read about them under the headings "significance arithmetic"
and "interval arithmetic".
Unfortunately these approaches lead to problems where the over-estimate
of the uncertainty totally swamps the answer, even when the calculation
is one in which the uncertainty actually decreases: a convergent iteration
keeps closing in on its limit, but the "automatically" computed
uncertainty keeps growing.
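To see this concretely, here is a minimal interval sketch in Maxima
(the [lo, hi] list representation, the operator names, and the
assumption that no divisor interval contains 0 are all mine). Newton's
iteration for sqrt(2) contracts any starting point near 1.4, yet the
naively propagated interval widens at every step, because x occurs
several times in the formula (the "dependency problem"):

  iadd(x, y) := [x[1] + y[1], x[2] + y[2]]$
  isub(x, y) := [x[1] - y[2], x[2] - y[1]]$
  imul(x, y) := block([p : [x[1]*y[1], x[1]*y[2], x[2]*y[1], x[2]*y[2]]],
                      [lmin(p), lmax(p)])$
  idiv(x, y) := imul(x, [1/y[2], 1/y[1]])$  /* assumes 0 not in y */
  /* one Newton step for sqrt(2):  x - (x^2 - 2)/(2*x) */
  nstep(x) := isub(x, idiv(isub(imul(x, x), [2.0, 2.0]), iadd(x, x)))$
  X : [1.4, 1.5]$
  for i : 1 thru 4 do (X : nstep(X), print(X, "width:", X[2] - X[1]));

The true image of [1.4, 1.5] under one step has width about 0.002, but
the printed widths roughly double each time: 0.20, 0.42, 0.92, 2.29.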
Mathematica uses such a scheme (significance arithmetic), and after
revising it, and its interval arithmetic, repeatedly from version to
version, has probably settled down. There are many painful consequences
of this, for example in computations where the answer looks like 0 but
could be anything at all.
Maple also has what it calls range arithmetic.
I think that real or complex (rectangular) interval arithmetic
is worth considering. I've had students write such stuff several
times, in lisp. It is not hard to do basic stuff. Integrating it
into all parts of Maxima would take some effort. You would not
be tagging variables, but values.
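For the complex (rectangular) case, a sketch along the same lines,
reusing the real-interval operations defined above (representation and
names again mine): a value is a pair [real interval, imag interval],
and (a+b*%i)*(c+d*%i) = (a*c - b*d) + (a*d + b*c)*%i is evaluated with
interval operations:

  cadd(z, w) := [iadd(z[1], w[1]), iadd(z[2], w[2])]$
  cmul(z, w) := [isub(imul(z[1], w[1]), imul(z[2], w[2])),
                 iadd(imul(z[1], w[2]), imul(z[2], w[1]))]$
  /* e.g. a value known to within 0.01 of 1 + %i, squared: */
  Z : [[0.99, 1.01], [0.99, 1.01]]$
  cmul(Z, Z);  /* => [[-0.04, 0.04], [1.9602, 2.0402]], enclosing 2*%i */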
You could also do this for individual values in small computations
by adding to each uncertain number a (different) epsilon, and then
computing the resulting uncertainty as a function of the epsilons: e.g.
instead of f: a/b you would compute fprime: (a+eps[1])/(b+eps[2]) ...
and then the error would be f-fprime. Then find the feasible values of
the eps that make that error maximal.
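A minimal sketch of that computation in Maxima (the bounds d1, d2 on
|eps[1]| and |eps[2]|, and the first-order treatment of the
maximization, are my additions):

  f : a/b$
  fprime : (a + eps[1])/(b + eps[2])$
  err : ratsimp(f - fprime);   /* exact error as a function of the eps */
  /* to first order, the maximum over |eps[1]| <= d1, |eps[2]| <= d2 is */
  bound : abs(diff(f, a))*d1 + abs(diff(f, b))*d2;
  /* i.e. d1/abs(b) + abs(a)*d2/b^2 */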
RJF
C Y wrote:
> This actually raises an issue I've been wondering about for a while,
> and was highlighted by the observed difference in the float conversion
> of a units expression vs. its unconverted form. It's unavoidable that
> numerical calculations have some uncertainty associated with them.
> There are also non-numerical related uncertainties in non-integer
> conversions between units and the values of physical constants. One of
> the longer term ideas I have for units and physics packages in general
> is a way to do intelligent error propagation, incorporating user
> defined uncertainties and if possible the uncertainty in Maxima's
> calculations as well. The problem is I'm not quite sure how to
> approach tracking the uncertainty introduced into an expression by,
> say, the float(%) command. Is there some systematic way this can be
> handled (e.g. if Maxima's precision limits are set to x, then each
> numerical conversion will increase the uncertainty of the number
> converted by y)? I guess the ideal thing would be for all variables in
> Maxima to be treated as objects, with one of the available pieces of
> information being an uncertainty that reflects the history of the value
> within Maxima, but I have no idea how practical that is. Numbers could
> be handled by rounding them at the last significant figure. Something
> l
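
As one concrete data point for the float() question above: the
roundoff introduced by a single conversion to a machine double can be
measured exactly in Maxima, since rationalize recovers the exact binary
value of a float (the 1/3 example is just an illustration):

  fx : float(1/3)$
  err : abs(rationalize(fx) - 1/3);  /* exact conversion error, a rational */
  float(err);    /* => about 1.85e-17, under half an ulp of 1/3 */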