Subject: Accuracy and error analysis (was Re: [Maxima] primes)
From: Robert Dodier
Date: Thu, 12 May 2005 22:13:25 -0700 (PDT)
--- Richard Fateman wrote:
> If every initial datum is associated with a different uncertainty,
> you can start with, say,
>
> x: 1+delta,
> y: 3+epsilon,
> z: 4+mu,
>
> etc. where the Greek letters represent small but unknown
> quantities.
>
> You can compute some arithmetic function of x, y, z, say x*y*z, and
> get an answer that looks like 12 + some_mess.
> Dealing with that mess becomes very expensive in time and space.
>
> So there is a tendency to say that we don't really need that
> some_mess, just a bound on it, given that |delta|, |epsilon|, and
> |mu| each lie in a particular interval.
> This reduces the size of some_mess, at the risk of loosening bounds.
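To make the quoted example concrete, here is what that "some_mess" looks
like in Maxima; the worst-case bound at the end is only a sketch, under
the added assumption that |delta|, |epsilon|, and |mu| are all at most
some symbolic width w.

x : 1 + delta$
y : 3 + epsilon$
z : 4 + mu$

/* the "12 + some_mess" answer, with the perturbations kept symbolic */
expand(x*y*z);
/* => 12 + 12*delta + 4*epsilon + 3*mu + 4*delta*epsilon + 3*delta*mu
      + epsilon*mu + delta*epsilon*mu   (up to term ordering)         */

/* every coefficient in the mess is positive, so substituting w for each
   perturbation bounds |some_mess| whenever |delta|, |epsilon|, |mu| <= w */
mess : expand(x*y*z) - 12$
subst([delta = w, epsilon = w, mu = w], mess);
/* => w^3 + 8*w^2 + 19*w */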
What Richard describes above is a pretty fair summary. The broad practical interest of
this problem starts when delta, epsilon, and mu are an
appreciable fraction of the quantities they're attached to.
A statistical approach is motivated by the notion that the
messy part is interesting and useful, and therefore worth
the effort despite the expense and difficulty. In particular,
one must work with the messy part to get accurate tail
probabilities.
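To make "work with the messy part" a little more concrete, here is one
possible statistical treatment, a bare Monte Carlo sketch in Maxima; the
uniform width 0.1 and the threshold 13 are purely illustrative assumptions,
not anything taken from the example above.

/* sample x*y*z = (1+delta)*(3+epsilon)*(4+mu), with delta, epsilon, mu
   drawn independently and uniformly from [-w, w]                        */
u(w) := w*(2*random(1.0) - 1)$
one_sample() := (1 + u(0.1)) * (3 + u(0.1)) * (4 + u(0.1))$
n : 10000$
samples : makelist(one_sample(), i, 1, n)$

/* estimated tail probability P(x*y*z > 13) */
float(length(sublist(samples, lambda([s], s > 13))) / n);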
Microscopic error analysis is a practical problem for
people inventing new numerical methods; not many people
do this. On the other hand, every time one works with
observations of natural phenomena, machinery, or (above
all) humanity, uncertainties at least several orders of
magnitude larger than the floating point epsilon come
into play. My advice is that we focus our efforts on
macroscopic uncertainty analysis (to the extent that we
spend any time on this at all).
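(For scale, assuming the usual IEEE double floats that Maxima uses for
ordinary floating point, the machine epsilon is 2^-52:

float(2^-52);   /* => 2.220446049250313e-16, far below typical
                      measurement uncertainty                   */
)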
For what it's worth,
Robert Dodier