> -----Original Message-----
> From: maxima-bounces at math.utexas.edu
> [mailto:maxima-bounces at math.utexas.edu] On Behalf Of C Y
> Sent: Wednesday, April 11, 2007 1:16 PM
> To: fateman at cs.berkeley.edu; belanger at truman.edu;
> maxima at math.utexas.edu
> Subject: Re: [Maxima] strange behaviour with simple decimals
>
>
> --- Richard Fateman <fateman at cs.berkeley.edu> wrote:
>
> > 4. Maybe hack the program to do something whose deficiencies are
> > more subtle, like Mathematica's attempt to implement "significance"
> > arithmetic, a system which nearly defies explanation because, among
> > other things, 0. is used to mean "I don't know if this is
> > non-zero."
>
> Are you objecting to the notation or the concept?
Both, though frankly I don't have a better notation. The concept is
fundamentally flawed when applied to long chains of sequential operations:
it assumes that every calculation gets fuzzier and fuzzier, when some of
them don't. They converge, gaining accuracy at every step.
In most important iterative algorithms, you want to keep track of
convergence, condition number, etc.
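To make that concrete, here is a toy Newton iteration in Python (a sketch
of my own, not taken from any CAS): the error roughly squares at each
step, which is the opposite of monotone fuzzing.

    # Newton's method for sqrt(2): the error roughly SQUARES each step,
    # i.e. the number of correct digits doubles -- the computation
    # converges instead of getting fuzzier.
    import math

    x = 1.0
    for i in range(6):
        x = 0.5 * (x + 2.0 / x)                # Newton step for x^2 - 2
        print(i, x, abs(x - math.sqrt(2.0)))   # error shrinks quadratically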
Significance arithmetic is an expense that (in Wolfram's formulation)
detracts from the ability to compute actual accuracy, because you have to
keep resetting the precision of the numbers you are computing with by
explicit extra operations.
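To see the bookkeeping cost, take a hypothetical pessimistic rule of my
own (not Mathematica's actual algorithm): every operation keeps the worse
operand's digit count and drops half a digit. Run the same Newton
iteration under that rule and the claimed significance only falls, even
though the value is converging:

    # Toy significance arithmetic -- a hypothetical pessimistic rule,
    # NOT Mathematica's actual algorithm.
    class Sig:
        def __init__(self, v, digits):
            self.v, self.digits = v, digits

    def sig_op(a, b, f):
        # result keeps the worse operand's digits, minus half a digit
        return Sig(f(a.v, b.v), min(a.digits, b.digits) - 0.5)

    x = Sig(1.5, 16.0)                     # start with ~16 claimed digits
    for i in range(8):
        t = sig_op(Sig(2.0, 16.0), x, lambda p, q: p / q)  # 2/x
        s = sig_op(x, t, lambda p, q: p + q)               # x + 2/x
        x = sig_op(Sig(0.5, 16.0), s, lambda p, q: p * q)  # 0.5*(x + 2/x)
        print(i, x.v, "claimed digits:", x.digits)
    # The value converges to sqrt(2), but the claimed digits fall from 16
    # to 4; the user must keep raising precision by explicit operations.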
I just found this via Google. I haven't studied it, but the summary on
page one looks right.
http://www.av8n.com/physics/uncertainty.htm#sec-abomination
> I have been meaning
> to read up on Mathematica's significance work but have not
> had the time
> to probe it in depth. It seems reasonable in some situations to want
> the CAS to track the accuracy limits imposed by floating-point
> representation, although I concede that for high-performance numerical
> computation it would be a burden.
Alternatively, you can use interval arithmetic. It is still a burden.
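For comparison, a bare-bones interval sketch (again a toy of my own,
ignoring the directed rounding a real implementation must do): every
number becomes a pair of endpoints, so every operation does at least
twice the work, and the widths can grow fast.

    # Bare-bones interval arithmetic (toy sketch; real code must also
    # round the endpoints outward). Each value is [lo, hi] and is
    # guaranteed to contain the true result.
    def iadd(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(p), max(p))

    x = (0.999999, 1.000001)      # a value known to about six places
    for _ in range(20):
        x = imul(x, x)            # repeated squaring: the width explodes
    print(x, "width:", x[1] - x[0])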
RJF