On Wed, Apr 11, 2007 at 11:13:00AM -0500, Jay Belanger wrote:
>
> Interesting. I realize that calculators typically only give
> approximations, but when does it give the wrong answer when you'd
> expect the right one? (I have no doubt such cases exist; I just don't
> know one offhand.)
You may be interested in this supplement to a common calculus
textbook:
http://www.stewartcalculus.com/data/default/upfiles/LiesCalcAndCompTold.pdf
In general, pocket calculators use decimal arithmetic because the
people using them expect them to behave that way. Computers use
binary arithmetic because binary hardware is simpler and more
efficient to build. Pocket scientific calculators seem to carry
about 10 decimal digits of precision.
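To see the difference concretely: a decimal machine stores 0.1
exactly, while a binary machine can only store the nearest fraction
of the form m/2^n. Here is a quick illustration in Python (purely
illustrative; any language using IEEE 754 doubles behaves the same):

    >>> from decimal import Decimal
    >>> Decimal(0.1)   # the exact value the 64-bit double actually stores
    Decimal('0.1000000000000000055511151231257827021181583404541015625')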
When computers PRINT numbers, they convert the binary value to a
decimal representation, and that conversion requires additional
rounding above and beyond the computational rounding.
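For instance (Python again, just as a sketch):

    >>> x = 0.1 + 0.2
    >>> x                  # default display already rounds the stored bits
    0.30000000000000004
    >>> "%.2f" % x         # asking for 2 digits rounds a second time
    '0.30'
    >>> "%.20f" % x        # more digits of the same stored value
    '0.30000000000000004441'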
Whenever you have a result that would come out as a terminating
decimal of fewer than 16 or so digits, the computer will calculate
something slightly different, off by a factor on the order of
(1 +- eps), where eps is a small number around 2^-53 for 64-bit
floats. If this perturbation alters your number in such a way that
it rounds differently when displayed, you may see an apparently
large error in the display, even though the difference between the
two numbers, taken to high precision, is still quite small.
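You can measure that eps directly; for example, in Python
(illustrative, not specific to Python):

    >>> x = 0.1 * 3        # mathematically 0.3, but not in binary
    >>> x == 0.3           # the two doubles differ by one ulp
    False
    >>> abs(x - 0.3) / 0.3 # relative error: a few times 2**-53
    1.8503717077085942e-16
    >>> 2.0 ** -53
    1.1102230246251565e-16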
As an example of the display effect, in grade-school rounding:

    1.105             rounds to 1.11
    1.104999999999353 rounds to 1.10

If you display only two digits past the decimal point, it looks like
the relative difference is 0.01/1.1 ~ 10^-2, but the internal error
is much smaller, on the order of 10^-12.
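You can watch exactly this happen with real 64-bit floats: the
double nearest 1.105 sits roughly 2*10^-17 below it (a smaller error
than in the illustration above, but the displayed digit still flips):

    >>> "%.2f" % 1.105     # you might expect '1.11'
    '1.10'
    >>> "%.20f" % 1.105    # the stored value is just below 1.105
    '1.10499999999999998224'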
This is so frequently confusing that there should be a special week
in high school algebra class devoted to these issues. The earlier
people learn this, the better.
--
Daniel Lakeland
dlakelan at street-artists.org
http://www.street-artists.org/~dlakelan