I would agree that the distribution of bignums is long-tailed, so that the vast majority of bignums live near the boundary between fixnums & bignums. For this reason, it is particularly important to make short bignums as fast as possible.
On the other hand, when you start crunching bignums in earnest, the quadratic factor (e.g., in schoolbook multiplication) shows up pretty quickly: the time spent crunching the larger bignums starts to dominate the entire computation.
GNU MPxxx has spent many years getting these medium-to-large bignums to run very fast, so it is impractical for Maxima to reproduce this optimization effort.
Thus, for the highest performance, it would make sense to use fixnums (60+ bits where available) for small numbers, 'normal' Lisp bignums for medium sizes, and GNU bignums for anything larger.
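To give a flavor of the tiering idea, here is a minimal Python sketch (Python integers are already arbitrary precision, so this is purely illustrative): hardware-sized multiplies below a fixnum-like 64-bit cutoff, with Karatsuba standing in for the subquadratic algorithms a library like GNU MP provides. The cutoff and the routine are my assumptions, not Maxima's or GMP's actual code.

```python
def karatsuba(x, y):
    """Tiered multiply sketch for non-negative integers.

    Small ("fixnum-sized") operands fall through to the hardware
    multiply; larger ones recurse via Karatsuba, which does three
    half-size multiplies instead of four, beating the quadratic
    schoolbook algorithm asymptotically.
    """
    if x.bit_length() <= 64 or y.bit_length() <= 64:
        return x * y  # fixnum tier: let the hardware do it

    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask

    a = karatsuba(x_hi, y_hi)                        # high halves
    b = karatsuba(x_lo, y_lo)                        # low halves
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b  # cross terms

    return (a << (2 * half)) + (c << half) + b
```

A real implementation would dispatch among several algorithms by operand size (GMP uses schoolbook, Karatsuba, Toom-Cook, and FFT tiers); the point here is only that the dispatch itself is cheap and the crossover thresholds do the work.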
I've been hacking some computational geometry algorithms with exact rational arithmetic, where the numbers are routinely several hundred bits long. I'm starting to get into the regime where GNU MPxxx might be useful.
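To show why the bits pile up in exact-rational geometry, here is a hypothetical Python sketch (fractions.Fraction standing in for Lisp rationals, not my actual code): an exact orientation predicate, plus a midpoint iteration whose coordinate denominators grow by about one bit per step, quickly reaching the several-hundred-bit regime.

```python
from fractions import Fraction

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p); exact with rationals."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def midpoint(a, b):
    # Each midpoint doubles the denominators, i.e. adds about one bit
    # per step, so coordinates reach hundreds of bits after a few
    # hundred constructions.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

p = (Fraction(1, 3), Fraction(0))
q = (Fraction(1), Fraction(1))
for _ in range(300):
    p = midpoint(p, q)
```

Derived constructions like line intersections are worse: their coordinates combine products of the inputs, so bit lengths add rather than increment, which is exactly where subquadratic multiplication starts to pay off.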
At 12:44 PM 10/20/2013, Stavros Macrakis wrote:
>3) There doesn't seem to be any compelling reason to change. (If it ain't broke, don't fix it.)