"Floating-point operations are not exact, but they can be modeled as:
fl( x op y ) = (1+D) (x op y)
where |D| <= u, u is the unit round off or machine precision"
I found this on the Internet; it is about running error analysis for numerical algorithms. I just want to know the value of u for big floats as a function of fpprec, as opposed to ordinary machine precision (for IEEE double precision with round-to-nearest, u = 2^-53, about 1.1e-16), since Maxima big floats run in a virtual machine rather than in hardware. Does anyone know how roundoff is handled with big floats?
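In case it helps anyone answer, here is a rough sketch of how I tried to measure it empirically. I am assuming the usual "1 + eps = 1" test carries over to big floats, and that big float arithmetic rounds to nearest (in which case u should be about eps/2); the ?fpprec line is only my guess, from skimming the source, at where the binary precision in bits is kept:

    /* Halve eps until 1b0 + eps/2 rounds to exactly 1b0; the last
       eps for which the sum still differs from 1b0 is the big float
       machine epsilon at the current fpprec. */
    fpprec : 32$
    eps : 1b0$
    while 1b0 + eps/2 # 1b0 do eps : eps/2$
    eps;       /* big float epsilon at this fpprec */
    eps/2;     /* candidate unit roundoff u, if rounding is to nearest */
    ?fpprec;   /* binary precision in bits (Lisp-level variable),
                  if I am reading the source correctly */

This should come out on the order of 10^(-fpprec) if u scales with fpprec the way I expect, but it does not tell me how the rounding is actually implemented, hence the question.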
Thanks,
Rich