Accuracy and error analysis (was Re: [Maxima] primes)



--- Richard Fateman  wrote:
> Allan Adler wrote:
> > Maybe an implementation of interval arithmetic would do the job,
> > with paranoid rounding to compensate for truncation errors. Then
> > evaluate numerical functions on interval inputs and give interval
> > answers.
> 
> This is possible and could be done. For example it is done in
> Mathematica's software floats and also Interval. And in Maple with
> range arithmetic.
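The "paranoid rounding" idea from the quoted suggestion can be sketched in a few lines of Python: widen every computed bound outward by one ulp so that floating-point truncation error is always contained. This is an illustrative toy (the helper names are mine), not how Mathematica's significance arithmetic or Maple's range arithmetic actually work.

```python
import math

# Toy interval arithmetic with "paranoid" outward rounding: every result
# is widened by one ulp in each direction so truncation error is contained.

def widen(lo, hi):
    """Round the lower bound down and the upper bound up by one ulp."""
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

def iadd(x, y):
    return widen(x[0] + y[0], x[1] + y[1])

# 0.1 is not exactly representable in binary; the interval brackets the
# true real value, and so does any sum built from it.
tenth = widen(0.1, 0.1)
total = (0.0, 0.0)
for _ in range(10):
    total = iadd(total, tenth)

print(total)
print(total[0] <= 1.0 <= total[1])   # the true sum 1.0 lies inside
```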

I guess I don't understand the issues properly, or I'm thinking about
them in the wrong way.

My thinking:

a)  For each physical quantity, there exists a measurement uncertainty.
b)  For each conversion between units, error is introduced whenever the
conversion factor is not exact.
c)  When calculations are done using these quantities, there is a
resultant uncertainty which can also (usually) be calculated, and as a
result the answer has a definite number of significant figures, and any
beyond those are meaningless.
d)  It would be desirable to have Maxima automatically handle
significant figure issues (which are a Major Pain) whenever possible.
e)  The problem (aside from implementing the mechanisms to propagate
unit errors) is that Maxima's own numerical calculations, as opposed to
symbolic ones, also introduce uncertainty.  My guess is that this would
not constitute a significant contribution to the overall uncertainty
for most ordinary calculations, but when dealing with very large or
very small numbers and uncertainties that assumption will, sooner or
later, break down.  A possible partial workaround is to examine the
numbers involved in the calculation and raise fpprec and any other
relevant settings accordingly, but without intelligent handling of
Maxima's internal calculations even this is no guarantee.  (Which means
it would work most of the time but would not catch corner cases.)
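Point (c) can be made concrete with the standard first-order propagation rule, under the usual assumption of independent (uncorrelated) uncertainties: for a product, relative errors add in quadrature. The function name and the example quantities below are made up for illustration.

```python
import math

# First-order propagation of independent uncertainties: for q = x * y,
# the relative errors add in quadrature.  A hand-rolled sketch.

def mul_with_error(x, dx, y, dy):
    q = x * y
    dq = abs(q) * math.sqrt((dx / x) ** 2 + (dy / y) ** 2)
    return q, dq

# e.g. a length of 12.3 +/- 0.1 cm times a width of 4.56 +/- 0.05 cm
area, darea = mul_with_error(12.3, 0.1, 4.56, 0.05)

# The uncertainty, not the raw digits, fixes the significant figures.
print(f"{area:.1f} +/- {darea:.1f} cm^2")
```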

It might be worthwhile to implement this anyway, since a) it will be
useful in cases that are mathematically trivial but practically
valuable, and b) if designed correctly, it can improve as the
underlying system's ability to support it improves.

> It is good for some things but not for important things.
> A convergent Newton iteration, viewed crudely via intervals,
> never converges, but diverges. The necessity for human intervention
> crops up pretty often, unless you are happy with very pessimistic
> results.  I think Bayesian analysis might be slightly less
> pessimistic at the cost of sometimes being wrong.

Which raises the question of how the human can know to intervene when
the computer cannot spot it.  Just a large set of heuristics in the
human brain checking for "reasonableness"?  I would (again in
ignorance) think that a proper definition of error propagation would
recognize the decrease of error when a Newton method is used, just as
it would also recognize that the same measurement taken multiple times
results in an average with an error less than any of the individual
measurements.  It may be that such a proper definition would be
excessively difficult to formulate, if it is possible at all.
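Fateman's Newton example is easy to reproduce. In the naive scheme below, x appears several times in the update formula, so plain interval arithmetic overestimates (the classic dependency problem) and the enclosure around sqrt(2) widens at every step, even though point Newton converges quadratically. A sketch with a hand-rolled interval type; a proper interval Newton method is designed precisely to avoid this.

```python
# Hand-rolled interval operations; outward rounding is omitted because
# the widening being demonstrated dwarfs one-ulp rounding error.

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

def idiv(x, y):
    assert y[0] > 0 or y[1] < 0, "divisor interval must not contain zero"
    return imul(x, (1.0 / y[1], 1.0 / y[0]))

# Newton for f(x) = x^2 - 2, applied naively to an interval that already
# brackets sqrt(2): x appears three times in the update, so the
# dependency problem makes the enclosure wider at every step.
x = (1.40, 1.42)
widths = []
for step in range(4):
    fx = isub(imul(x, x), (2.0, 2.0))   # f(x) = x^2 - 2
    dfx = imul((2.0, 2.0), x)           # f'(x) = 2x
    x = isub(x, idiv(fx, dfx))          # naive Newton update
    widths.append(x[1] - x[0])
    print(step, widths[-1])             # width grows instead of shrinking
```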

> People who have written about automated error analysis (and its
> difficulty) include W. Kahan and Bruce Char.
> RJF

Hmm.  I'll have to give the ones I can find a read.  I have always
thought automated error analysis and significant figure evaluation
would be a big deal if it could be done, since so much of the validity
of scientific analysis depends on having meaningful data to look at. 
Such routines could also enable a researcher to enter hypothetical data
(including errors) and determine what accuracy they would need to
measure to (or how many repeated measurements) in order to obtain
sufficiently precise data for their purpose.
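That last use case has a textbook special case: n independent repeats of the same measurement give a mean with standard error sigma/sqrt(n), so the required number of repeats for a target uncertainty follows directly. A sketch with hypothetical numbers:

```python
import math

# Under the usual assumption of independent measurements with a common
# standard deviation sigma, the mean has standard error sigma / sqrt(n),
# so a target uncertainty needs n = ceil((sigma / target)^2) repeats.

def repeats_needed(sigma, target):
    return math.ceil((sigma / target) ** 2)

# e.g. an instrument good to 0.5 units, with a 0.1-unit goal
print(repeats_needed(0.5, 0.1))   # -> 25
```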

Has anybody read this paper?  Sounds like the Mathematica guys are
thinking about this too.

Precise numerical computation
Mark Sofroniou and Giulia Spaletta
Journal of Logic and Algebraic Programming
Volume 64, Issue 1, July 2005, Pages 113-134
Abstract
Arithmetic systems such as those based on IEEE standards currently make
no attempt to track the propagation of errors. A formal error analysis,
however, can be complicated and is often confined to the realm of
experts in numerical analysis. In recent years, there has been a
resurgence of interest in automated methods for accurately monitoring
the error propagation. In this article, a floating-point system based
on significance arithmetic will be described. Details of the
implementation in Mathematica will be given along with examples that
illustrate the design goals and differences over conventional
fixed-precision floating-point systems. 

It's on ScienceDirect but I can't get to the PDF at the moment.

CY
