Bigfloat round-off errors



>  but if you are having problems summing alternating series, you could try
>  adding up the positive terms separately from the negative ones

Really?  I would have thought that that would be the *worst* order.
Isn't it better to combine terms that nearly cancel first, starting
with the smallest?
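The point about order can be seen in a two-line float experiment (Python here purely for illustration; the magnitudes are hypothetical): a small term added to a large intermediate sum is simply swallowed, while cancelling the large terms first preserves it.

```python
# Doubles near 1e16 are spaced 2 apart, so 1e16 + 1.0 rounds back to 1e16
# (round-to-nearest-even) and the 1.0 is lost entirely:
print((1e16 + 1.0) - 1e16)   # small term added to large sum first
print((1e16 - 1e16) + 1.0)   # nearly-cancelling large terms combined first
```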

In a quick experiment with sum((-.999999)^i,i,1,1024), I tried several
different orders:

-- ((a1+a2)+a3)+a4+... (lreduce)
-- (a1+a3+a5+...)+(a2+a4+a6+...)  (sum pos and neg terms separately)
-- ((a1+a2)+(a3+a4))+(a5+a6)+...  (sum adjacent terms first, then lreduce)
-- ((a1+a2)+(a3+a4)) + ((a5+a6)+(a7+a8)) + ...  (tree_reduce)
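For readers without Maxima at hand, the four orders can be sketched in Python (the function names are mine, not Maxima's); `Fraction` gives the exact rational sum of the float terms to compare against:

```python
from fractions import Fraction
from functools import reduce

def lreduce_sum(xs):
    # ((a1+a2)+a3)+a4+... : plain left-to-right accumulation
    return reduce(lambda s, x: s + x, xs, 0.0)

def posneg_sum(xs):
    # sum the positive and the negative terms separately, then combine
    return (lreduce_sum([x for x in xs if x >= 0])
            + lreduce_sum([x for x in xs if x < 0]))

def paired_sum(xs):
    # (a1+a2)+(a3+a4)+... : pair adjacent terms first, then lreduce
    return lreduce_sum([xs[i] + xs[i + 1] for i in range(0, len(xs), 2)])

def tree_sum(xs):
    # balanced binary tree of additions, as in tree_reduce
    while len(xs) > 1:
        xs = ([xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
              + (xs[-1:] if len(xs) % 2 else []))
    return xs[0]

# float terms (-.999999)^1 .. (-.999999)^1024; only the summation order varies
terms = [(-.999999) ** i for i in range(1, 1025)]
# exact rational sum of exactly those float terms, as the reference
exact = float(sum(Fraction(t) for t in terms))

for f in (lreduce_sum, posneg_sum, paired_sum, tree_sum):
    rel = abs(f(terms) - exact) / abs(exact)
    print(f.__name__, rel)
```

The pos/neg order builds two sums near 512 that cancel to about 5*10^-4, so the rounding noise accumulated at magnitude ~512 dominates the tiny result; the other orders keep the intermediate sums small.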

using floating-point arithmetic and compared with the exact
(rationalized float) and bfloat (fpprec=100) results (where the
exponentiation was done in float, not rat/bfloat, so that we are only
evaluating the *summation*).

The relative error of the separate pos/neg sum was about 10^-10, while
the other orders agreed exactly with the exact/bfloat result (rounded
to float) -- more accurate than using the closed form!
Similar results for other N.
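The closed form here is the geometric sum r*(1-r^n)/(1-r). Evaluated entirely in floats it loses accuracy because r^1024 is about 0.9990, so 1 - r**n cancels a few leading digits and magnifies the rounding error in pow. A sketch, using the same r and n (the exact error depends on the platform's pow):

```python
from fractions import Fraction

r = -.999999                 # the float ratio actually summed
n = 1024
rq = Fraction(r)             # exact rational value of that float

# exact value of the closed form r*(1 - r^n)/(1 - r), via rationals
exact = float(rq * (1 - rq ** n) / (1 - rq))

# the same closed form evaluated entirely in floats: the subtraction
# 1 - r**n cancels ~3 leading digits of r**n
closed = r * (1 - r ** n) / (1 - r)

print(abs(closed - exact) / abs(exact))
```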

Naturally, other series will have other characteristics.

           -s


pl(l):=lreduce("+",l,0)$

/* Sum the float terms a^1..a^n (the exponentiation stays in float;
   conv then converts each term) in four orders:
     1. plain left-to-right lreduce,
     2. positive and negative terms summed separately, then combined,
     3. adjacent pos/neg terms paired first -- the elementwise "+" on
        the two lists needs n even so they match in length,
     4. balanced tree_reduce. */
test1(a,n,conv):=
 block([alllist, pluslist, minuslist],
  alllist: makelist(apply(conv,[a^i]),i,1,n),
  pluslist: sublist(alllist,lambda([q],q>=0)),
  minuslist: sublist(alllist,lambda([q],q<0)),
  [pl(alllist),
   pl(pluslist)+pl(minuslist),
   pl(pluslist+minuslist),
   tree_reduce("+",alllist,0)]);

Call using e.g. test1(-.999999,1000,'float), or pass 'bfloat or
'rationalize as the converter.