Subject: floating-point number - hardware or lisp dependent?
From: Raymond Toy
Date: Thu, 04 Apr 2013 13:45:24 -0700
>>>>> "Jorge" == Jorge Calvo <Jorge.Calvo at avemaria.edu> writes:
Jorge> Hello there:
Jorge> A recent comment by Volker van Nek <volkervannek at gmail.com> on the "scientific notation - truncation" thread (Message-ID: <CAKFbC62j2QbOcUJfrt22zhjAsSVx90jBmtOqPzVzhhLSi4adCw at mail.gmail.com>) reminded me of a curious issue that I discovered a few months ago while teaching a course in "scientific programming." The students in the class run Maxima on a variety of platforms (mostly Mac and PC), and as a result there are, from time to time, slight differences in the answers they get. For example, suppose that we enter:
Jorge> (%i1) 12345678.9;
Jorge> (%i2) 123456789.12345671;
Jorge> (%i3) 5e-324;
Jorge> (%i4) 4e-324;
Jorge> (%i5) 3e-324;
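Before getting to the reasons, one detail worth noting: on a machine with IEEE-754 doubles, the last three inputs all denote the same number, the smallest positive subnormal, 2^-1074, roughly 4.94e-324. A minimal sketch of how to check this from inside Maxima (rationalize is documented to replace a float by the exact rational it stores; the printed form of the result depends on the underlying lisp):

/* All three literals round to the same double, so all three */
/* calls should return the same rational, 1/2^1074.          */
rationalize(5e-324);
rationalize(4e-324);
rationalize(3e-324);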
There are several stages at which the answers can diverge. The first
stage is reading: the decimal text you type is converted to the
internal binary floating-point representation, and that conversion is
not exact for the numbers you give. The second is the representation
itself: depending on the machine and the lisp, floating-point numbers
may be held in 80-bit extended precision or in 64-bit double
precision. The final stage is printing: the internal binary number
must be converted back to a decimal string, and different lisps make
different choices about how many digits to produce. Any one of these
stages, for a given lisp and machine, can cause different results.
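To make the first and last stages concrete, here is a minimal Maxima
sketch; rationalize and fpprintprec are both documented, but the exact
digits you see will depend on your lisp and machine:

/* Stage 1, reading: the exact rational stored for the literal has */
/* a power-of-two denominator, so it is close to, but not equal    */
/* to, 123456789/10.                                               */
rationalize(12345678.9);

/* Stage 3, printing: fpprintprec fixes the number of digits       */
/* printed, which can mask differences between lisps that print    */
/* the shortest re-readable form and lisps that print a fixed      */
/* digit count.                                                    */
fpprintprec: 16;
12345678.9;
fpprintprec: 0;  /* restore the default (full precision) */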
Ray