Thanks for your answers.
I want to investigate the effect of using a fixed-point
approximation TD, with selectable precision, for the function Td.
Td describes the dependence of the dew point on the temperature
and relative humidity of the air.
Such an approximation might make it possible to compute the dew
point on a small integer-only microcontroller without floating
point arithmetic.
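To make this concrete, one common closed form for Td is the Magnus
approximation; the coefficients below are one typical choice and only
an assumption, not necessarily the ones I will end up using. In
Maxima, a minimal sketch looks like this:

    /* Magnus-type dew point approximation; a sketch, the exact
       coefficients a and b are an assumption */
    a : 17.62$
    b : 243.12$
    gam(T, RH) := log(RH) + a*T/(b + T)$
    Td(T, RH) := b*gam(T, RH)/(a - gam(T, RH))$

    Td(20.0, 0.5);   /* roughly 9.3 degrees C */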
Temperature and humidity come from the sensors as integers in known
ranges (mostly 0..(2^n)-1) and are converted to values between
-30 and 50, and between 0.05 and 1, respectively, so I have a known
number of possible input values.
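The kind of linear conversion I mean is sketched below; the
resolution n = 10 bits and the exact scaling are only placeholders
for illustration:

    /* raw sensor value 0..2^n-1 mapped linearly onto the physical
       ranges; n and the scaling are assumptions */
    n : 10$
    rawmax : 2^n - 1$
    T_of(raw)  := -30 + 80*raw/rawmax$     /* -30 .. 50 degrees C */
    RH_of(raw) := 0.05 + 0.95*raw/rawmax$  /* 0.05 .. 1           */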
My idea was to determine the absolute maximum and minimum error for
any sensor resolution, as a function of the precision of the
computation and of the precision of an approximation of log, which
will probably be stored in a table.
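As a minimal model of what "precision of computation" and "precision
of the log table" could mean (again only a sketch with made-up names):
every intermediate result is rounded to p fractional bits, and
log(RH) is read from a table with 2^tbits entries over 0.05..1.

    /* fixed-point model: p fractional bits, table-based log;
       a and b are the Magnus coefficients from the sketch above */
    a : 17.62$  b : 243.12$
    fix(x, p) := round(x*2^p)/2^p$
    logtab(tbits) := makelist(float(log(0.05 + 0.95*i/(2^tbits - 1))),
                              i, 0, 2^tbits - 1)$
    log_lookup(rh, tab, tbits) :=
      tab[round((rh - 0.05)/0.95*(2^tbits - 1)) + 1]$
    Td_fix(T, RH, p, tab, tbits) :=
      block([g],
        g : fix(log_lookup(RH, tab, tbits) + fix(a*T/(b + T), p), p),
        fix(b*g/(a - g), p))$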
In the meantime, I have coded the min/max computation as a Maxima
function. It runs quite slowly and gives me the minimum and maximum
error values for a known set of input values, but that is not the
general solution I would like to find.
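The slow exhaustive scan I mean is roughly of the following shape (a
sketch using the hypothetical definitions above, not my actual code):

    /* brute-force error scan over all raw sensor values; slow,
       and only the special case, not the general solution I want */
    scan_error(n, p, tbits) :=
      block([tab : logtab(tbits), rawmax : 2^n - 1,
             emin : 1e300, emax : -1e300, e],
        for i : 0 thru rawmax do
          for j : 0 thru rawmax do
            (e : float(Td_fix(T_of(i), RH_of(j), p, tab, tbits)
                       - Td(T_of(i), RH_of(j))),
             emin : min(emin, e),
             emax : max(emax, e)),
        [emin, emax])$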
Albrecht