Hi all,
I've been using Maxima for about a month in a calculus class. I'm writing
some notes to help the students and now I'm having some trouble with
the behavior of Taylor series.
If I define a function and its Taylor polynomial, I have no problem
plotting both or calculating the difference:
(%i3) taylor(log(1+x),x,0,5);
(%o3) x-x^2/2+x^3/3-x^4/4+x^5/5+...
(%i5) define(g(x),taylor(f(x),x,0,5));
(%o5) g(x):=x-x^2/2+x^3/3-x^4/4+x^5/5+...
(%i6) g(2);
(%o6) 76/15
(%i8) f(2)-g(2);
(%o8) (15*log(3)-76)/15
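(The input lines where f is defined are not shown above; judging from
%o3 and %o8 it is presumably the following, which I'll assume below.)

f(x) := log(1+x);   /* assumed definition of f, consistent with f(2) = log(3) in %o8 */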
but
plot2d(f(x)-g(x),[x,0.1,5])
plots the zero function. I can obtain the behavior I expect using
plot2d(f(x)-trunc(g(x)),[x,0.1,5])
Is this the right thing to do? I don't understand when it's mandatory
to use trunc with taylor and when it is optional. I mean, what's the
difference between taylor(...) and trunc(taylor(...))?
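In case it helps show where I'm confused, these are the extra checks I
have been trying (based on taylorp and ratdisrep, which I found in the
manual, so I may well be misreading what they do):

taylorp(g(x));              /* predicate: is g(x) still a Taylor-series object? */
taylorp(trunc(g(x)));       /* does trunc change that?                          */
plot2d(f(x) - ratdisrep(g(x)), [x, 0.1, 5]);   /* ratdisrep turns the series into an ordinary expression */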
If anyone is interested, the notes (in Spanish) are available as a
work in progress at
http://www.ugr.es/~alaminos/docencia_2/calculo_telecomunicaciones/practicas_de_ordenador/
Of course any comments are welcome.
Thank you,
Jerónimo.