Accuracy and error analysis (was Re: [Maxima] primes)
From: Richard Fateman
Date: Thu, 12 May 2005 13:18:46 -0700
OK, pushing it some more..
If every initial datum is associated with a different uncertainty,
you can start with, say,
x: 1+delta,
y: 3+epsilon,
z: 4+mu,
etc., where the Greek letters represent small but unknown quantities.
You can compute some arithmetic function of x, y, z, say x*y*z, and get
an answer that looks like 12 + some_mess.
Dealing with that mess becomes very expensive in time and space.
So there is a tendency to say that we don't really need that some_mess,
just a bound on it, given that |delta|, |epsilon|, |mu|
each lie in a particular interval.
This reduces the size of some_mess, at the risk of loosening the bounds.
In particular, if you compute sqrt(x), you end up with something that
is not as tight as it could be.
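To make the tradeoff concrete, here is a small sketch in plain Python (not Maxima; the helper name and the 1e-3 bound are illustrative assumptions): naive interval arithmetic gives a cheap bound on some_mess for x*y*z, close to, but slightly wider than, the first-order symbolic estimate.

```python
# Illustrative sketch (plain Python, not Maxima).  Bound the "some_mess"
# in x*y*z for x = 1+delta, y = 3+epsilon, z = 4+mu, assuming
# |delta|, |epsilon|, |mu| <= 1e-3, using naive interval arithmetic.

def imul(a, b):
    """Interval product [a0,a1] * [b0,b1]."""
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

u = 1e-3
x, y, z = (1 - u, 1 + u), (3 - u, 3 + u), (4 - u, 4 + u)

w = imul(imul(x, y), z)
half_width = (w[1] - w[0]) / 2
print(w)                      # a small interval around 12

# Expanding symbolically, x*y*z = 12 + 12*delta + 4*epsilon + 3*mu + ...,
# so to first order |some_mess| <= (12 + 4 + 3)*u = 0.019.  The interval
# bound is only slightly wider (it also absorbs the higher-order terms).
print(half_width, (12 + 4 + 3) * u)
```

The interval answer keeps only two numbers instead of the whole symbolic mess, which is the cost/tightness tradeoff described above.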
C Y wrote:
>--- Richard Fateman wrote:
>
>
>>ok, long explanation.
>>Consider the maxima definition
>>
>>f[i,n]:=f[i-1,n]-(f[i-1,n]^2-n)/(2*f[i-1,n]);
>>
>>This is a Newton iteration for the square root of n.
>>Compute the square root of 3 this way:
>>f[0,3]:1;
>>f[9,3]; --> 9 iterations towards sqrt(3).
>>
>>
>
>OK.
>
>
>
>>next.. sqrt(1+d)
>>
>>f[0,1+d]:1 /* initial guess */
>>
>>f[2,1+d]; /*after 2 iterations */
>>
>>(d/2) - (( - d + ((d/2) + 1)^2 - 1)/(2 * ((d/2) + 1))) + 1
>>
>>which you can understand by
>>
>>taylor(%,d,0,3) --> 1 + d/2 - d^2/8 + ...
>>
>>so it is getting what I said it should get.
>>
>>
>
>OK.
>
>
>
>>But look again at the formula. It takes advantage of
>>the fact that each occurrence of "d" is the same "d", and
>>thus d - d/2 is d/2, and other formulas might
>>make use of simplifications like (1-d)*(1+d) = 1-d^2.
>>
>>
>
>But d as an error shouldn't be used with the Newton iteration - that
>does not accurately represent the uncertainty involved. I'm afraid now
>I'm even more confused, because I don't see how you can Newton iterate
>towards an inexact number. There is a d associated with each operation
>in the Newton equation resulting in an inexact answer:
>
>f[i-1,n]^2[d1]
>(f[i-1,n]^2-n)[d2]
>(2*f[i-1,n])[d3]
>((f[i-1,n]^2-n)/(2*f[i-1,n]))[d4]
>(f[i-1,n]-(f[i-1,n]^2-n)/(2*f[i-1,n]))[d5]
>
>but this doesn't impact the final answer as such since the next
>iteration will use the exact return as the next seed. It could equally
>well use either return+dtotal or return-dtotal, since either one will
>also iterate toward the same final answer.
>
Except that in your calculation, [d5] will be large, and in the next
iteration it will be even larger. After a few iterations d5 will be
far larger than the value you are computing. And if you are computing
long enough (say, using Mathematica's software floats), you end up
with a number that might be very nearly right, but with an error bound
that claims "no significant digits".
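You can watch this bound blowup numerically with a sketch like the following (Python, not Maxima; naive interval arithmetic stands in for the per-operation [d1]..[d5] accounting, and the 1e-12 starting uncertainty is an assumed value). Because each occurrence of f in the Newton formula is treated as an independent interval, the width roughly doubles per step near the root, even though the midpoint converged long ago.

```python
import math

# Sketch (Python, not Maxima): Newton iteration for sqrt(3) where the
# iterate carries a naively propagated interval error bound.  Each
# occurrence of f in f - (f^2 - n)/(2*f) is treated as an independent
# interval, mimicking the per-operation d1..d5 bookkeeping.

def imul(a, b):
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def idiv(a, b):
    assert b[0] > 0 or b[1] < 0          # denominator must exclude 0
    return imul(a, (1.0/b[1], 1.0/b[0]))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

n = 3.0
f = (1.0 - 1e-12, 1.0 + 1e-12)           # f[0,3] = 1, tiny initial uncertainty
for _ in range(25):
    f = isub(f, idiv(isub(imul(f, f), (n, n)), imul((2.0, 2.0), f)))

mid, width = (f[0] + f[1]) / 2, f[1] - f[0]
# mid is essentially sqrt(3), but width has grown by roughly 2x per
# step, so the bound vastly overstates the actual error.
print(mid, math.sqrt(3), width)
```

The midpoint is correct to nearly machine precision, while the carried bound has inflated by many orders of magnitude, which is exactly the "no significant digits" symptom.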
> It does, however, matter
>when the propagated error of one Newton calculation (d5) is larger than
>the difference between your previous value and your current one. At
>that point, Maxima has reached the limit of its ability to accurately
>calculate the sqrt with this method and the current fpprec setting.
>Presumably with more precision in floating point, each individual d
>would be smaller, resulting in a smaller d5 and the possibility of more
>precise Newton calculations.
>
This can solve the problem if you also discard the accumulated
uncertainty when you change fpprec. But then you are doing the error
analysis yourself, not automatically. And this is what one has to do
in Mathematica.
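For example (a Python sketch using the decimal module in place of Maxima's bigfloats and fpprec; the digit counts and iteration count are arbitrary choices): to get a tighter answer you raise the working precision and rerun the iteration from scratch, discarding the old uncertainty rather than carrying it forward.

```python
from decimal import Decimal, getcontext

# Sketch (Python decimal standing in for Maxima bigfloats): rerunning
# the Newton iteration at higher precision, with the old error bound
# discarded, is the manual error analysis described above.

def newton_sqrt(n, digits, iters=40):
    getcontext().prec = digits       # analogous to raising fpprec
    f = Decimal(1)                   # fresh start: no inherited uncertainty
    for _ in range(iters):
        f = f - (f*f - n) / (2*f)
    return f

low  = newton_sqrt(Decimal(3), 20)   # ~20 correct digits
high = newton_sqrt(Decimal(3), 50)   # rerun from scratch: ~50 correct digits
print(low)
print(high)
```

The two runs agree to the precision of the first, but nothing in the code tracks that automatically; the "error analysis" is the programmer's knowledge that Newton's method is self-correcting.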
RJF