The problem with slow convergence in the presence of multiple roots persists even when the roots are merely _close_ to one another. If your rootfinder starts far enough away from two close roots, convergence is just as slow as if they were repeated roots.
Given p(x), what you really want is a "fuzzy" fgcd(p(x),p'(x)), such that the remainder on dividing p(x) by fgcd(p(x),p'(x)) is "small"; the quotient quotient(p(x),fgcd(p(x),p'(x))) then gives you close approximations to the non-"repeated" roots.
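To make that concrete, here is a hand-made Maxima sketch. There is no actual fgcd function; the candidate common factor below is simply picked by eye near the cluster, and the polynomial is made up for illustration:

  /* one isolated root at 1, two close roots at 2 and 2.001 */
  p : expand((x - 1)*(x - 2)*(x - 2001/1000))$
  g : x - 4001/2000$                 /* a plausible "fuzzy" gcd of p and p' */
  float(remainder(p, g, x));         /* about -2.5e-7: "small" next to p's coefficients */
  q : quotient(p, g, x)$
  allroots(float(q));                /* roots near 1.0 and near 2.0005 */

The quotient's roots land near the isolated root and near the center of the cluster, which is what you want out of this separation step.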
Once you've found good approximations to the non-"repeated" roots, you can then zoom in on the close roots by dividing out the non-"repeated" roots.
I.e., you look at the rational function p(x)/(x-nr1)/(x-nr2)/(x-nr3)..., where nr1, nr2, nr3 are non-repeated roots.
None of these roots will yet be high precision. But once you have isolated/separated the roots into regions, you can "polish" them to high precision by Newton iterations. These Newton iterations will converge quadratically _if you have previously separated the roots_, i.e., each approximation is already accurate enough that it is closer to its own root than to any other.
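Continuing the same made-up example, a sketch of the deflate-then-polish step. Here newton is the simple routine from Maxima's newton1 package, and the starting guesses are assumed to come from the separation step:

  load(newton1)$
  p  : expand((x - 1)*(x - 2)*(x - 2001/1000))$
  /* divide out the non-repeated root near 1 (exact here only because the
     example was built that way; with an approximate root the division
     leaves a small remainder) */
  pc : quotient(p, x - 1, x)$
  /* pc carries only the close pair; polish each root by Newton, starting
     from guesses already closer to one root than to the other */
  newton(pc, x, 1.9995, 1.0e-13);    /* converges to the root near 2.0   */
  newton(pc, x, 2.0015, 1.0e-13);    /* converges to the root near 2.001 */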
At 08:18 PM 1/9/2013, Richard Fateman wrote:
>Looked at this way, a polynomial p(x) either has exact multiple roots or not. The test is easy. If p(x) and p'(x), its derivative
>wrt x have no common factors then it has no multiple roots. So compute gcd(p,p').
>
>If the gcd is 1, then you know (and you can tell your root-finder program) that there are NO multiple roots. (Oh, if the gcd is not 1, it
>gives you a polynomial whose roots have multiplicity >1; you can divide out by this, etc.)
>
>Now once you have decided there are no multiple roots, you can use a floating point program like allroots or bfallroots to
>find the roots. Any rational or integer coefficients may be converted to floats. I suppose that conversion could produce
>a situation with multiple roots, and that would be distressing. So I'm not sure how to entirely fix up the design.