Re: printing floats, bigfloats, fortran floats

On 11/2/05, Steve Haflich  wrote:
>
> From: Stavros Macrakis 
>
> In MacLisp, the Lisp that Macsyma [sic] was written in, the default input
> base was typically 8, not 10. A final decimal point forced the decimal
> integer interpretation. Common Lisp's input base is 10, but it allows a
> final decimal point for MacLisp compatibility.
>
> This isn't quite correct. The input base in CL is initially decimal


Sorry, I should have said that CL's *default* input base is 10.
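The MacLisp convention under discussion (an octal default input base, with a trailing decimal point forcing base 10) can be sketched in a few lines of Python. The `read_integer` helper here is purely illustrative, not an actual MacLisp or Maxima reader routine:

```python
def read_integer(token: str, input_base: int = 8) -> int:
    """Parse an integer token the way the old MacLisp reader did:
    a trailing '.' forces decimal, otherwise use the current input base."""
    if token.endswith("."):
        return int(token[:-1], 10)   # "17." -> 17 (decimal)
    return int(token, input_base)    # "17"  -> 15 when the input base is 8

print(read_integer("17"))    # 15: interpreted in the octal input base
print(read_integer("17."))   # 17: the trailing point forces decimal
```

This is exactly the ambiguity that makes the proposed syntax change risky: the same digits denote different numbers depending on the trailing point.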

> I don't know if Maxima allows the default input radix to be changed,
> but regardless, the proposed change to input syntax is not backward
> compatible. How can you be sure in some ancient but still running
> Maxima program source that there isn't an integer written with a
> trailing point?


We can't be sure. We can check the Share files, but that's all. On the other
hand, we may well need to define an input syntax for arbitrary base, so that
non-decimal numbers can be entered unambiguously.
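One candidate for such an unambiguous syntax is something like Common Lisp's #nR notation, where the radix is written out explicitly (e.g. #16rFF for hexadecimal 255). The `base "r" digits` form and the `read_radix` helper below are assumptions for illustration, not a settled Maxima syntax:

```python
def read_radix(token: str) -> int:
    """Parse a 'BASErDIGITS' token, e.g. '16rFF' -> 255."""
    base_part, digits = token.split("r", 1)
    base = int(base_part, 10)   # the radix itself is always written in decimal
    return int(digits, base)

print(read_radix("16rFF"))   # 255
print(read_radix("2r1010"))  # 10
print(read_radix("8r17"))    # 15
```

Because the base is spelled out in every such token, no global input-radix setting can change its meaning.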

I think in this case (and many others!) it is more important to conform to
new users' reasonable expectations based on other relevant environments
(Fortran, Java, C, Mathematica, Maple, ...) than it is to conform to a
standard set 35+ years ago.

> I think the problem is with the printing. Format controls are
> wonderful for producing human-readable syntax, but many controls such
> as ~g do not necessarily preserve print/read consistency.


This is a well-known problem which was solved, if I remember correctly, in a
CACM paper in the early 1970's, as well as by JonL in the MacLisp system.
It is a particularly important problem for systems like Maxima which use
textual forms (not binary) as the primary form of file storage. If our
current read/print routines aren't well behaved, we should improve them!
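The print/read consistency problem can be demonstrated outside Lisp as well. In the Python sketch below, a fixed-precision ~g-style format loses information, while a shortest-round-trip printer (Python's repr, which guarantees read-back equality) does not; the analogy to Lisp's ~g is mine, not a claim about any particular implementation:

```python
x = 1.0 / 3.0

lossy = "%g" % x    # six significant digits, in the spirit of a default ~g
exact = repr(x)     # shortest string that reads back to exactly the same float

print(lossy, float(lossy) == x)   # 0.333333 False -- the round trip fails
print(exact, float(exact) == x)   # the round trip succeeds
```

A system that stores its state as text needs the second behavior: every float it prints must read back to the identical float.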

-s