all that rootfinding



On 1/10/2013 4:54 AM, Henry Baker wrote:
> The problem with slow convergence in the presence of multiple roots persists even when the roots are merely _close_ to one another.  If your rootfinder starts far enough away from two close roots, convergence is just as slow as if they were repeated roots.
The advantage of a computer algebra system here is that roots are only 
close relative to some working precision, and
a CAS can increase that precision.

Thus 3.01 and 3.02 look pretty close, but
3.010000000000000000 and 3.020000000000000000 are, in some
computational sense, pretty far apart.
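To make that concrete, here is a small sketch (hypothetical numbers, using Python's stdlib decimal module as a stand-in for a CAS raising the precision): take two roots separated by 1e-9. Merely forming the quadratic's coefficients in double precision destroys the separation, while 40-digit arithmetic preserves it exactly.

```python
from decimal import Decimal, getcontext

# Hypothetical pair of close roots: a = 3 + 1e-9, b = 3 + 2e-9.
# Their quadratic is x^2 - (a+b) x + a*b, with discriminant
# (a+b)^2 - 4ab = (a-b)^2 = 1e-18.

# In double precision, forming the coefficients loses the separation:
a_f, b_f = 3 + 1e-9, 3 + 2e-9
disc_f = (a_f + b_f) ** 2 - 4 * (a_f * b_f)  # rounding noise; 1e-18 is lost

# At 40 decimal digits the same computation recovers (a - b)^2 exactly:
getcontext().prec = 40
a, b = Decimal("3.000000001"), Decimal("3.000000002")
disc = (a + b) ** 2 - 4 * (a * b)            # exactly 1e-18
```

In double precision the discriminant is computed as a difference of two numbers near 36, so anything below about 1e-15 is swamped by rounding; at 40 digits the subtraction is exact.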


>
> Given p(x), what you really want is a "fuzzy" fgcd(p(x),p'(x)), such that the remainder of the division quotient(p(x),fgcd(p(x),p'(x))) is "small", and you can use this quotient to get close approximations to the non-"repeated" roots.
>
> Once you've found good approximations to the non-"repeated" roots, you can then zoom in on the close roots by dividing out the non-"repeated" roots.
>
> I.e., you look at the rational function p(x)/(x-nr1)/(x-nr2)/(x-nr3)..., where nr1, nr2, nr3 are non-repeated roots.
>
> None of these roots will yet be high precision.  But once you have isolated/separated the roots into regions, you can now "polish" the roots to high precision by Newton iterations.  These Newton iterations will converge quadratically _if you have previously separated the roots_, i.e., you have your approximation to a high enough precision that it is no longer close to any other root.

The traditional computer algebra approach is not to fuzzify anything, 
but to treat all inputs as though they
were exact rationals. It is possible to compute isolating intervals for 
all distinct zeros by using Sturm sequences,
as is done by Maxima's realroots program.  If there is only one root in 
an interval, you can do Newton iteration
to your heart's content.  Sturm sequences are not cheap, though, and so 
have not caught on in the numerical world.

RJF