Subject: Accuracy and error analysis (was Re: [Maxima] primes)
From: C Y
Date: Thu, 12 May 2005 15:33:55 -0700 (PDT)
--- Richard Fateman wrote:
> OK, pushing it some more..
>
> If every initial datum is associated with a different uncertainty,
> you can start with, say,
>
> x: 1+delta,
> y: 3+epsilon,
> z: 4 +mu,
>
> etc. where the Greek letters represent small but unknown
> quantities.
OK.
> You can compute some arithmetic function of x,y,z say x*y*z, and
> get an answer that looks like 12+ some_mess.
> Dealing with that mess becomes very expensive in time and space.
This is true, but for small problems it might be necessary. For example,
in the case of x*y*z where x, y and z are measurements rather than
internal data, the propagated uncertainty d would be:

d = x*y*z*sqrt((delta/x)^2 + (epsilon/y)^2 + (mu/z)^2)

and (x*y*z) +- d is the result of interest. However, if any of x, y or z
were floating point values with uncertainties down near the available
fpprec, d would be of the same magnitude as the rounding error of the
internal data. I can see how this would get messy, but since the only
alternative to doing it by computer is doing it by hand... Perhaps a
warning could be printed if the magnitude of d were nearing the rounding
level implied by fpprec? What would be "safe" for any sane calculation?
Two or three orders of magnitude larger than the fpprec rounding level?
It would presumably take a lot of calculations for the internal d to
accumulate multiple orders of magnitude of uncertainty.
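
To make that concrete, here is a minimal Maxima sketch of the bookkeeping
and the warning I have in mind; prod_err and the factor of 1000 (three
orders of magnitude) are just made up for illustration, not anything
already in Maxima:

    /* propagated uncertainty for a product of three measured values,
       using the quadrature formula above; prod_err is an ad hoc name */
    prod_err(x, dx, y, dy, z, dz) :=
        abs(x*y*z) * sqrt((dx/x)^2 + (dy/y)^2 + (dz/z)^2)$

    v: 1.0*3.0*4.0$
    d: prod_err(1.0, 0.05, 3.0, 0.1, 4.0, 0.2)$  /* x=1+-0.05, y=3+-0.1, z=4+-0.2 */
    print(v, "+-", d)$

    /* crude check: warn if d is within about three orders of magnitude
       of the rounding level implied by the current fpprec */
    if d < 1000 * abs(v) * 10^(-fpprec) then
        print("warning: d is approaching the fpprec limit")$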
> So there is a tendency to say that we don't really need that
> some_mess, just a bound on it, given the bounds that |delta|,
> |epsilon|, |mu| are each in a particular interval.
> This reduces the size of some_mess, at the risk of loosening bounds.
>
> In particular, if you compute sqrt(x), you end up with something that
> is not as tight as it could be.
So the tradeoff is tightness vs. expense?
> Except that in your calculation, [d5] will be large. and in the
> next iteration it will be even larger. After a few iterations d5
> will be far larger than the value you are computing. And if you
> are computing long enough (say, using Mathematica's software
> floats), you end up with a number that might be very nearly right,
> but with an error bound that claims "no significant digits".
But d5 is discarded as soon as a new seed is chosen for the next
iteration, because that seed is assumed to be exact. d5 for an
individual calculation is the real uncertainty of that number within
the constraints of the system, and large or not it denotes the lower
limit for useful information Maxima can provide. Newton does in fact
return a seed within a certain "range" at each iteration, but the range
is discarded and an exact value is chosen within it, because every
value in the range should be almost equally good. d5 serves only to
check that the reported digits on the return from each iteration (which
will seed the next one) aren't varying only in a range below fpprec's
limits. In other words, if Newton - Newton_Previous is "close" to d5 in
magnitude, stop and return, because the limits of the system at its
current settings have been reached.
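
Something like the following Maxima sketch is what I mean, using sqrt as
the example; bf_newton_sqrt is a made-up name, and the propagated d5 is
stood in for here by the rounding error of the current iterate at fpprec:

    /* Newton iteration for sqrt(a) in bigfloats, stopping when the step
       between iterates is comparable to the (stand-in) d5 */
    bf_newton_sqrt(a) := block([ab, xprev, xk, dxk, d5],
        ab: bfloat(a),
        xk: ab,                        /* crude starting seed */
        dxk: 1, d5: 0,                 /* force at least one iteration */
        while dxk > 2*d5 do (
            xprev: xk,
            xk: (xprev + ab/xprev)/2,
            dxk: abs(xk - xprev),      /* Newton - Newton_Previous */
            d5: abs(xk)*10^(-fpprec)   /* recomputed fresh each pass */
        ),
        xk
    )$

    fpprec: 30$
    bf_newton_sqrt(2);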
> This can solve the problem if you also discard the uncertainty when
> you change fprec. But then you are doing error analysis, not
> automatically. And this is what one has to do in Mathematica.
But d5 is discarded almost immediately in each iteration. You only
keep it around long enough to check its current value against the
difference between the last two Newton returns. fpprec isn't changed
automatically - the only thing done automatically is recognizing that
Newton has done all the work it can within the current fpprec setting,
and having it stop and return that number with a d of +-(0.5 or
1)*10^(power of the smallest digit that was unchanging over the last
three or four iterations). d5 is only used internally and lasts only
for one iteration - it is recalculated from scratch, since the new
"seed" is an exact number (just not exactly the right one). Surely this
can be automated? Probably on a per-function basis, but that is still
doable. Dynamically changing fpprec is of less interest, unless the
goal is to return an answer accurate at the current fpprec level, in
which case Newton would likely need to go to higher precision. Even
then, though, couldn't it raise fpprec if it reaches the "dead end"
condition and recognizes that the power of the last steady digit is
much larger than desired?
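
Reusing the bf_newton_sqrt sketch above, a rough version of that "bump
fpprec and retry" idea might look like this; sqrt_to_tol, the dead-end
estimate of d, and the step of 10 extra digits are all assumptions for
illustration:

    /* keep raising fpprec until the dead-end uncertainty d is finer
       than the tolerance the caller asked for; changes the global fpprec */
    sqrt_to_tol(a, tol) := block([r, d],
        r: bf_newton_sqrt(a),
        d: abs(r)*10^(2 - fpprec),   /* dead-end d: a couple of digits above fpprec */
        while d > tol do (
            fpprec: fpprec + 10,
            r: bf_newton_sqrt(a),
            d: abs(r)*10^(2 - fpprec)
        ),
        [r, d]
    )$

    sqrt_to_tol(2, 1.0e-40);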
CY