Taylor's Theorem
For an $(N+1)$-times differentiable function $f$, and for $x$ and $b$ in an open interval on the real line, there exists $\xi$ between $x$ and $b$ such that

$$f(x) = \sum_{n=0}^{N} \frac{f^{(n)}(b)}{n!}(x-b)^{n} + R_{N+1}(x),$$

where the remainder is given by

$$R_{N+1}(x) = \frac{f^{(N+1)}(\xi)}{(N+1)!}(x-b)^{N+1}.$$
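As a rough numerical illustration (a sketch of ours, not part of the statement), the Python snippet below compares the true remainder for $f(x) = \sin x$ with $b = 0$ and $N = 3$ against the bound $|x-b|^{N+1}/(N+1)!$ implied by the remainder formula, since $|f^{(N+1)}| \le 1$ for the sine; the helper name taylor_sin is illustrative.

    import math

    # Degree-N Taylor polynomial of sin about b; the derivatives of sin
    # cycle through sin, cos, -sin, -cos.
    def taylor_sin(x, b, N):
        derivs = [math.sin(b), math.cos(b), -math.sin(b), -math.cos(b)]
        return sum(derivs[n % 4] / math.factorial(n) * (x - b) ** n
                   for n in range(N + 1))

    b, N = 0.0, 3
    for x in (0.5, 1.0, 2.0):
        actual = math.sin(x) - taylor_sin(x, b, N)             # true remainder R_{N+1}
        bound = abs(x - b) ** (N + 1) / math.factorial(N + 1)  # since |f^(4)| <= 1
        print(f"x={x}: remainder={actual:+.2e}, bound={bound:.2e}")

The computed remainder always falls within the bound, as the theorem guarantees.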
Applications of Taylor's theorem
The practical use of Taylor's theorem is to provide a ready alternative representation of a function by expanding it about a given point. The number of terms available in the Taylor series expansion reflects the number of continuous derivatives that the function has at the point about which it is being expanded.
Taylor's theorem forms the foundation of a number of numerical computation schemes, including the approximation of smooth functions, the derivation of finite-difference methods, and the formulation of optimization algorithms.
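As one concrete example (ours, not the original text's), expanding $f(x+h)$ and $f(x-h)$ about $x$ and subtracting cancels the even-order terms, so the central difference $(f(x+h) - f(x-h))/(2h)$ approximates $f'(x)$ with an $O(h^2)$ error. A minimal Python sketch, taking $f = \sin$ as a test function:

    import math

    # Central difference derived from the two Taylor expansions about x.
    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)

    x = 1.0
    exact = math.cos(x)  # true derivative of sin
    for h in (0.1, 0.05, 0.025):
        err = abs(central_diff(math.sin, x, h) - exact)
        print(f"h={h:<6} error={err:.3e}  error/h^2={err / h**2:.4f}")
    # error/h^2 stays roughly constant, matching the O(h^2) Taylor prediction.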
The complex form of the Taylor series for a complex-valued function of a complex variable converges if and only if the function is analytic within a neighborhood of the point about which the function is being expanded.
Taylor series as an infinite series in 1D
A real-valued function $f(x)$ can be expressed in terms of the value of the function and its derivatives at any point $b$. In one variable this is

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(b)}{n!}(x-b)^{n},$$

where $!$ denotes the factorial (e.g., $3! = 3 \cdot 2 \cdot 1 = 6$).
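A short sketch of the partial sums of this series, taking $f = \exp$ about an assumed point $b = 0.5$ (so $f^{(n)}(b) = e^{b}$ for every $n$); the name taylor_exp is ours:

    import math

    # Partial sum of the Taylor series of exp about b: each term is
    # exp(b) * (x - b)^n / n! because every derivative of exp is exp.
    def taylor_exp(x, b, N):
        return sum(math.exp(b) * (x - b) ** n / math.factorial(n)
                   for n in range(N + 1))

    x, b = 2.0, 0.5
    for N in (2, 5, 10, 15):
        approx = taylor_exp(x, b, N)
        print(f"N={N:<3} approx={approx:.10f} error={abs(approx - math.exp(x)):.2e}")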
By the ratio test of convergence, the Taylor series converges for values of $x$ where the limiting ratio of the $(n+1)$-th and the $n$-th terms is less than 1. That is:

$$\lim_{n \to \infty} \left| \frac{f^{(n+1)}(b)\,(x-b)^{n+1}/(n+1)!}{f^{(n)}(b)\,(x-b)^{n}/n!} \right| = \lim_{n \to \infty} \left| \frac{f^{(n+1)}(b)}{f^{(n)}(b)} \cdot \frac{x-b}{n+1} \right| < 1.$$
The Maclaurin series is the special case where $b = 0$.
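To see the criterion in action (our example), consider the Maclaurin series of $1/(1-x)$, whose $n$-th term is $x^{n}$: the ratio of successive terms is $|x|$, so the series converges only for $|x| < 1$.

    # Partial sums of the Maclaurin (b = 0) series of 1/(1 - x), i.e. sum of x^n.
    def geometric_partial_sum(x, N):
        return sum(x ** n for n in range(N + 1))

    for x in (0.5, 0.9, 1.1):
        s = geometric_partial_sum(x, 50)
        status = "converging" if abs(x) < 1 else "diverging"
        print(f"x={x}: partial sum={s:.4f}, 1/(1-x)={1 / (1 - x):.4f}  ({status})")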
Proof of the 1D Taylor Theorem
$N = 1$ Case
For $N = 1$ we consider the expansion

$$f(x) = f(b) + f'(b)(x-b) + R_2.$$

We define a new function $F(t)$ such that

$$F(t) = f(x) - f(t) - f'(t)(x-t).$$

Differentiating with respect to $t$ we obtain

$$F'(t) = -f'(t) - f''(t)(x-t) + f'(t) = -f''(t)(x-t).$$

Now, we construct the function $G(t)$ such that

$$G(t) = F(t) - \left(\frac{x-t}{x-b}\right)^{2} F(b),$$

and note that $G(b) = F(b) - F(b) = 0$ and $G(x) = F(x) = 0$. Because $G$ is not constant and vanishes at both endpoints, by Rolle's theorem $G$ must have an extremum at some value $\xi$ between $x$ and $b$, so

$$G'(\xi) = -f''(\xi)(x-\xi) + \frac{2(x-\xi)}{(x-b)^{2}} F(b) = 0,$$

and, since $F(b) = f(x) - f(b) - f'(b)(x-b) = R_2$, solving for the remainder $R_2$ yields

$$R_2 = \frac{f''(\xi)}{2}(x-b)^{2}.$$
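As an informal sanity check (our construction, not the text's), the promised $\xi$ can be located numerically: for $f = \exp$ with $b = 0$ and $x = 1$, solve $R_2 = f''(\xi)(x-b)^{2}/2$ for $\xi$ by bisection.

    import math

    f = math.exp                          # f = f' = f'' = exp for this test
    b, x = 0.0, 1.0
    R2 = f(x) - (f(b) + f(b) * (x - b))   # remainder after the linear term

    # Bisection on g(xi) = f''(xi) (x - b)^2 / 2 - R2, which is increasing in xi.
    lo, hi = b, x
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) * (x - b) ** 2 / 2 < R2:
            lo = mid
        else:
            hi = mid
    xi = 0.5 * (lo + hi)
    print(f"xi = {xi:.6f} lies in ({b}, {x}); "
          f"f''(xi)(x-b)^2/2 = {f(xi) / 2:.6f} vs R_2 = {R2:.6f}")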
Arbitrary $N$ Case
This same method generalizes to the case of arbitrary $N$. We define a new function $F(t)$ such that

$$F(t) = f(x) - \sum_{n=0}^{N} \frac{f^{(n)}(t)}{n!}(x-t)^{n}.$$

As above, we differentiate $F(t)$ with respect to $t$; the sum telescopes, leaving only the highest-order term, to yield

$$F'(t) = -\frac{f^{(N+1)}(t)}{N!}(x-t)^{N}.$$

As in the $N = 1$ case, we construct the function $G(t)$ such that

$$G(t) = F(t) - \left(\frac{x-t}{x-b}\right)^{N+1} F(b),$$

and $G(b) = G(x) = 0$. Because $G$ is not constant, it must have an extremum at a point $\xi$ somewhere between $x$ and $b$, where by Rolle's theorem

$$G'(\xi) = -\frac{f^{(N+1)}(\xi)}{N!}(x-\xi)^{N} + (N+1)\frac{(x-\xi)^{N}}{(x-b)^{N+1}} F(b) = 0.$$

Noting that $F(b) = R_{N+1}$ and solving for it completes the proof:

$$R_{N+1} = \frac{f^{(N+1)}(\xi)}{(N+1)!}(x-b)^{N+1}.$$
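The same numeric check extends to arbitrary $N$ (again our sketch, under the stated assumptions): for $f = \exp$, where $f^{(n)} = \exp$ for all $n$, the equation $R_{N+1} = f^{(N+1)}(\xi)(x-b)^{N+1}/(N+1)!$ can be solved for $\xi$ in closed form.

    import math

    b, x = 0.0, 1.0
    for N in (1, 2, 3, 4):
        TN = sum(math.exp(b) * (x - b) ** n / math.factorial(n)
                 for n in range(N + 1))
        R = math.exp(x) - TN
        # exp(xi) (x-b)^(N+1) / (N+1)! = R  =>  xi = log(R (N+1)! / (x-b)^(N+1))
        xi = math.log(R * math.factorial(N + 1) / (x - b) ** (N + 1))
        print(f"N={N}: R={R:.3e}, xi={xi:.4f} (lies in ({b}, {x}))")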
Proof of the infinite series form of Taylor's theorem in 1D
By the fundamental theorem of calculus we note that

$$\int_b^x f'(t)\,dt = f(x) - f(b),$$

implying that

$$f(x) = f(b) + \int_b^x f'(t)\,dt.$$

Our method of constructing the Taylor series will be repeated integration by parts of the remainder term, where we introduce

$$R_1 = \int_b^x f'(t)\,dt.$$

We integrate the integral remainder term by parts, integrating the $dt$ term (choosing $-(x-t)$ as the antiderivative of $1$) and differentiating the $f'(t)$ term, so we have

$$R_1 = \Big[-(x-t)\,f'(t)\Big]_b^x + \int_b^x (x-t)\,f''(t)\,dt = f'(b)(x-b) + \int_b^x (x-t)\,f''(t)\,dt.$$

This process may be applied repeatedly to yield the familiar form of the Taylor expansion. Suppose this has been done $N$ times; then the remainder term is

$$R_{N+1} = \int_b^x \frac{(x-t)^{N}}{N!}\,f^{(N+1)}(t)\,dt.$$
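As a hedged numerical check (the quadrature choice is ours), the sketch below compares the direct remainder $f(x) - T_N(x)$ with a midpoint-rule evaluation of this integral for $f = \sin$, $b = 0$, $x = 1$, $N = 3$:

    import math

    # n-th derivative of sin: the derivatives cycle sin, cos, -sin, -cos.
    def nth_deriv_sin(n, t):
        return [math.sin, math.cos,
                lambda s: -math.sin(s), lambda s: -math.cos(s)][n % 4](t)

    b, x, N = 0.0, 1.0, 3
    TN = sum(nth_deriv_sin(n, b) * (x - b) ** n / math.factorial(n)
             for n in range(N + 1))

    M = 10000                 # midpoint rule with M subintervals
    h = (x - b) / M
    mids = (b + (i + 0.5) * h for i in range(M))
    integral = sum(nth_deriv_sin(N + 1, t) * (x - t) ** N for t in mids) \
               * h / math.factorial(N)

    print(f"direct remainder   = {math.sin(x) - TN:.10f}")
    print(f"integral remainder = {integral:.10f}")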
Taylor's series of an analytic function
Given a function $f(z)$ analytic inside some region $R$ of the complex plane, we may write

$$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(b)}{n!}(z-b)^{n},$$

where the series converges in a disc, centered at the value $b$, contained within $R$.
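To illustrate (our example, not the text's): $f(z) = 1/(1-z)$ is analytic everywhere except $z = 1$, so its Taylor series about $b = 0$ converges exactly in the disc $|z| < 1$, the largest disc about $b$ that avoids the singularity.

    # Partial sums of the series for 1/(1 - z) about b = 0, i.e. sum of z^n.
    def partial_sum(z, N):
        return sum(z ** n for n in range(N + 1))

    for z in (0.5j, 0.6 + 0.6j, 1.5j):
        status = "inside disc" if abs(z) < 1 else "outside disc"
        err = abs(partial_sum(z, 100) - 1 / (1 - z))
        print(f"z={z} ({status}): |error|={err:.3e}")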