ev does do the trick. Thanks.
Some comments:
(1) While it is good that the interpol routines are available, they seem
to be much too slow for my purposes.
My application is actually to finding the intersection of a pair
of parametrized curves in the plane where one of the curves is not
too far from a line, but the other curve oscillates a lot (so there
are hundreds of points of intersection). This has to be done for
many such pairs of curves, so speed is a factor. I know there are
many multidimensional routines available and that this is a classical
interpolation problem. It could probably be done easily in Matlab
or Octave, but then higher precision is not an option.
But my curves are obtained using routines in maxima. I
could rewrite the routines, (say in C or Fortran), but I was hoping
for some convenient way to do this in maxima, perhaps with some lisp
routines. I looked at Kevin Broughan's Senac routines, but the ones
I tried did not work. Rather than hacking those (since I don't need
most of them), it would probably be better for me to write my own.
It may be that the best way to proceed is to write the
routines in lisp and call them up or use some foreign function
interface to C or Fortran.
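For what it's worth, once the two curves are sampled, the intersection
step itself is simple: treat each sampled curve as a polyline and test
segment pairs. Here is a minimal Python sketch of that idea (all names
are my own, not from any package):

```python
def seg_intersect(p1, p2, p3, p4):
    # Solve p1 + t*(p2-p1) = p3 + u*(p4-p3) for t, u in [0, 1].
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    dx1, dy1 = x2 - x1, y2 - y1
    dx2, dy2 = x4 - x3, y4 - y3
    den = dx1 * dy2 - dy1 * dx2
    if den == 0:
        return None                # parallel or degenerate segments
    t = ((x3 - x1) * dy2 - (y3 - y1) * dx2) / den
    u = ((x3 - x1) * dy1 - (y3 - y1) * dx1) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * dx1, y1 + t * dy1)
    return None

def polyline_intersections(a, b):
    # Brute force over all segment pairs; fine for a few hundred
    # points per curve.
    hits = []
    for i in range(len(a) - 1):
        for j in range(len(b) - 1):
            p = seg_intersect(a[i], a[i + 1], b[j], b[j + 1])
            if p is not None:
                hits.append(p)
    return hits
```

For hundreds of points per curve the O(nm) pairing is already fast;
the sweep-line methods from computational geometry only start to
matter for much larger inputs.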
(2) Back to the "interpol" routines. It seems to me that they should
be rewritten using search tools. Suppose that the data are given
(using latex notation) as points $$ \{ (x_i, y_i) \}. $$
One should first define the affine functions, say $g_i$, on the
intervals $I_i = [x_i, x_{i+1}]$.
If $g$ is the final interpolation function
(a) $$ g(x) = \sum_i charfun(I_i)(x) \, g_i(x), $$
then to evaluate g(x), one should first
(b) search to find which interval $I_j$ contains $x$,
then
(c) evaluate $g_j(x)$.
Ditto for other one-dimensional interpolation functions.
Computing the sum in (a) wastes a lot of time and computing
resources. I am not sure which search tools are best for the
one-dimensional search routine. Do you have suggestions?
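The (b)/(c) scheme above amounts to a binary search over the sorted
breakpoints followed by a single affine evaluation. A minimal Python
sketch of the idea (names are my own, not Maxima's):

```python
from bisect import bisect_right

def make_interp(xs, ys):
    # xs must be sorted; precompute the affine pieces g_i once.
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1)]

    def g(x):
        # (b) binary search for the interval I_j containing x ...
        j = bisect_right(xs, x) - 1
        j = max(0, min(j, len(slopes) - 1))  # clamp: extrapolate at the ends
        # (c) ... then evaluate only g_j(x).
        return ys[j] + slopes[j] * (x - xs[j])

    return g
```

Each call then costs O(log n) for the interval lookup instead of the
O(n) sum of characteristic functions in (a).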
-sen
> On 12/29/06, sen1 at math.msu.edu <sen1 at math.msu.edu> wrote:
>
>> I have a linear interpolation function f made from around 500 data
>> points in the plane
>
> I'm curious about the application -- is it function approximation,
> statistics, or what?
>
>> Using linearinterpol gives a function with many summands (as a sum of
>> characteristic functions of affine maps). I have two questions.
>>
>> 1. In actually evaluating the function f at some point x there seems
>> to be a lot of wasted time since a lot of zeroes are being summed.
>> Is there a better way, short of rewriting the routine?
>
> I'm pretty sure determining which polygon a point falls
> into is a well-known problem in computational geometry.
> Presumably it is a simple matter of programming to go from
> characteristic functions to polygons ...
>
>> 2. Even using the routine, the answer shows up as a sum of many
>> numbers times characteristic functions. How do I see the actual
>> sum as a floating point number?
>
> Trying the example shown by ? linearinterpol, I get
>
> - ((9 x - 39) charfun2(x, minf, 3)
> + (30 - 6 x) charfun2(x, 7, inf)
> + (30 x - 222) charfun2(x, 6, 7)
> + (18 - 10 x) charfun2(x, 3, 6))/6
>
> I think ev(<charfun mess>, x = 3.5) for example should cause that
> to yield a number.
>
> One way to simplify away some zero terms is to use assume, e.g.
>
> assume (x > 5.5);
> load (boolsimp);
> ev (<charfun mess>);
> => - ((18 - 10 x) charfun(x < 6)
> + charfun(6 <= x and x < 7) (30 x - 222)
> + charfun(7 <= x) (30 - 6 x))/6
>
> (As it stands, Maxima dislikes some unevaluated Boolean
> expressions, hence boolsimp.)
>
> Hope this helps
> Robert
>
--
---------------------------------------------------------------------------
| Sheldon E. Newhouse | e-mail: sen1 at math.msu.edu |
| Mathematics Department | |
| Michigan State University | telephone: 517-355-9684 |
| E. Lansing, MI 48824-1027 USA | FAX: 517-432-1562 |
---------------------------------------------------------------------------