floating-point number - hardware or lisp dependent?



Hello there:

A recent comment by Volker van Nek <volkervannek at gmail.com> on the "scientific notation - truncation" thread (Message-ID: <CAKFbC62j2QbOcUJfrt22zhjAsSVx90jBmtOqPzVzhhLSi4adCw at mail.gmail.com>) reminded me of a curious issue that I discovered a few months ago while teaching a course in "scientific programming."  The students in the class run Maxima on a variety of platforms (mostly Mac and PC), and as a result, from time to time there are slight differences in the answers they get.  For example, suppose that we enter:

(%i1) 12345678.9;
(%i2) 123456789.12345671;
(%i3) 5e-324;
(%i4) 4e-324;
(%i5) 3e-324;

Running on a Mac, I get:

(%o1) 1.2345678900000001*10^7
(%o2) 1.2345678912345673*10^8
(%o3) 4.94065645841247*10^-324
(%o4) 0.0
(%o5) 0.0

But on a Windows machine, I get:

(%o1) 1.23456789*10^7
(%o2) 1.2345678912345672*10^8
(%o3) 4.9406564584124654*10^-324
(%o4) 4.9406564584124654*10^-324
(%o5) 0.0

I initially assumed the difference lay in the hardware implementation of floating point, but Volker's comment makes me wonder whether it is really the underlying Lisps that differ.  Can anyone shed some light on this?  And is there a way for a newbie like me to check which Lisp is lying under their Maxima?
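As a point of comparison (not an answer — just what a correctly rounded reader does with these inputs), here is a short Python sketch; CPython's string-to-float conversion is correctly rounded per IEEE 754, and the bit patterns show that 5e-324 is the smallest positive subnormal double, 2^-1074:

```python
import struct

def bits(x: float) -> str:
    # Raw 64-bit IEEE 754 pattern of a double, as big-endian hex.
    return struct.pack('>d', x).hex()

# 5e-324 parses to the smallest positive subnormal double, 2**-1074.
print(bits(5e-324), 5e-324 == 2**-1074)   # 0000000000000001 True

# With correct rounding, 4e-324 and 3e-324 also land on 2**-1074
# (both lie closer to 2**-1074 than to 0), while 2e-324 underflows to 0.0.
print(float('4e-324') == 2**-1074)  # True
print(float('3e-324') == 2**-1074)  # True
print(float('2e-324'))              # 0.0

# The two printed forms of 12345678.9 above denote the same double;
# only the number of digits the printer emits differs.
print(float('1.23456789e7') == float('1.2345678900000001e7'))  # True
```

So the doubles themselves are fixed by the IEEE 754 format; what can vary between platforms is each implementation's decimal-to-binary reader and binary-to-decimal printer.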

Thanks for your assistance,

Jorge
--
Dr. Jorge Alberto Calvo
Associate Professor of Mathematics
Department of Mathematics and Physics
Ave Maria University

Phone: (239) 280-1608
Email: jorge.calvo at avemaria.edu
Web: http://sites.google.com/site/jorgealbertocalvo