no more stupid errors



On 5/25/07, Thomas Widlar <twidlar at yahoo.com> wrote:
>
>  Yes, about 2 hours, with  numericalio.lisp modified 5/23/2007  4.14pm
>

a) Are you sure that the revised version of the code is being loaded and
executed properly?  Try defining it as  $read_list1 and calling that to be
sure....  But even the old version shouldn't be this slow.

b) I hope you're not trying to display the result (by having a ";" at the
end of your command) -- that can easily take large amounts of time in any
implementation.  I don't know wxMaxima; does it *always* display the
result?  If so, try ( q: read_list(...), length(q) ), which will read in the
list and then return just its length, not the whole list.
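Concretely, the idiom looks like this (the file name data.txt here is made up for illustration):

```maxima
/* reads the whole file, then displays only the length -- one number
   instead of 158,000 of them */
( q: read_list("data.txt"), length(q) );

/* alternatively, a trailing $ suppresses display of the result entirely */
q: read_list("data.txt")$
```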

I forget -- did you say that you didn't have this problem on other versions
of Maxima besides wxMaxima?  If so, that would be strange, since the
computational core is the same.

Here are some speed comparisons on my machine (1 GHz, 500 MB, Windows 2000,
GCL) using an input file very similar to yours.  It consists of 158,000
floating-point numbers which look like  1.000000001428572, arranged five to
a line, a total of 3 MB.

    580 sec     old read_list
     30         new read_list
     32         read as a Maxima list using load; file is q:[ ... ]$
      0.7       read as a Lisp list using load; file is (setq q '(...))
      0.5       just read file to EOF using Lisp read-line
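For reference, that last baseline is essentially nothing more than this (a
sketch in Common Lisp; the file name is made up):

```lisp
;; Scan the file to EOF line by line, doing no parsing at all.
;; This measures pure I/O overhead, independent of any reader.
(with-open-file (s "data.txt")
  (loop for line = (read-line s nil)   ; nil => return NIL at EOF
        while line
        count line))
```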

Clearly there is a large overhead for using the Maxima parser rather than
the Lisp reader; that is not too surprising, since the Maxima parser is
written in Lisp, while the Lisp reader is presumably written in C, and has
access to low-level data structures.  Still, a factor of almost 50 is a bit
disappointing....

But that still doesn't explain how your system takes 240 times as long for
read_list (about 2 hours versus 30 seconds here).  It does explain why your
code is faster than read_list: yours uses parsetoken, which calls the Lisp
reader, while read_list uses the Maxima reader, since it must handle bfloats
as well as floats and integers.  This could be improved in various ways if
reading in large numeric files is a common requirement.
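The bfloat issue is easy to see at the Lisp prompt (a sketch; the exact
tokens are just examples):

```lisp
;; An ordinary float token is handled by the Lisp reader in one call:
(read-from-string "1.000000001428572")

;; But a Maxima bfloat token such as "1.5b0" is not valid Common Lisp
;; number syntax -- the Lisp reader returns a symbol, not a number --
;; which is why read_list has to go through the Maxima reader instead.
(read-from-string "1.5b0")
```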

              -s

PS parsetoken is actually buggy precisely because it uses the Lisp reader,
but I will fix that....