## Divided differences

In mathematics, divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]

Divided differences is a recursive division process. The method can be used to calculate the coefficients in the interpolation polynomial in the Newton form.

## Definition

Given k+1 data points

${\displaystyle (x_{0},y_{0}),\ldots ,(x_{k},y_{k})}$

The forward divided differences are defined as:

${\displaystyle [y_{\nu }]:=y_{\nu },\qquad \nu \in \{0,\ldots ,k\}}$
${\displaystyle [y_{\nu },\ldots ,y_{\nu +j}]:={\frac {[y_{\nu +1},\ldots ,y_{\nu +j}]-[y_{\nu },\ldots ,y_{\nu +j-1}]}{x_{\nu +j}-x_{\nu }}},\qquad \nu \in \{0,\ldots ,k-j\},\ j\in \{1,\ldots ,k\}.}$

The backward divided differences are defined as:

${\displaystyle [y_{\nu }]:=y_{\nu },\qquad \nu \in \{0,\ldots ,k\}}$
${\displaystyle [y_{\nu },\ldots ,y_{\nu -j}]:={\frac {[y_{\nu },\ldots ,y_{\nu -j+1}]-[y_{\nu -1},\ldots ,y_{\nu -j}]}{x_{\nu }-x_{\nu -j}}},\qquad \nu \in \{j,\ldots ,k\},\ j\in \{1,\ldots ,k\}.}$
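The forward recursion above can be transcribed directly into code. The following is a minimal Python sketch (not part of the original article; the function name is ours). The exponential-time recursion mirrors the definition rather than being efficient:

```python
def divided_difference(xs, ys):
    """Forward divided difference [y_0, ..., y_k] by the recursive
    definition above; exponential time, for illustration only."""
    if len(xs) == 1:
        return ys[0]
    # [y_1, ..., y_k] minus [y_0, ..., y_{k-1}], divided by x_k - x_0
    return (divided_difference(xs[1:], ys[1:])
            - divided_difference(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

# two points (1, 1) and (3, 9): [y_0, y_1] = (9 - 1) / (3 - 1) = 4
print(divided_difference([1.0, 3.0], [1.0, 9.0]))  # 4.0
```

An efficient O(k²) variant fills the triangular table shown in the example section instead of recursing.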

## Notation

If the data points are given as a function ƒ,

${\displaystyle (x_{0},f(x_{0})),\ldots ,(x_{k},f(x_{k}))}$

one sometimes writes

${\displaystyle f[x_{\nu }]:=f(x_{\nu }),\qquad \nu \in \{0,\ldots ,k\}}$
${\displaystyle f[x_{\nu },\ldots ,x_{\nu +j}]:={\frac {f[x_{\nu +1},\ldots ,x_{\nu +j}]-f[x_{\nu },\ldots ,x_{\nu +j-1}]}{x_{\nu +j}-x_{\nu }}},\qquad \nu \in \{0,\ldots ,k-j\},\ j\in \{1,\ldots ,k\}.}$

Several notations for the divided difference of the function ƒ on the nodes ${\displaystyle x_{0},\ldots ,x_{n}}$ are used:

${\displaystyle [x_{0},\ldots ,x_{n}]f,}$
${\displaystyle [x_{0},\ldots ,x_{n};f],}$
${\displaystyle D[x_{0},\ldots ,x_{n}]f}$

etc.

## Example

Divided differences for ${\displaystyle \nu =0}$ and the first few values of ${\displaystyle j}$:

{\displaystyle {\begin{aligned}{\mathopen {[}}y_{0}]&=y_{0}\\{\mathopen {[}}y_{0},y_{1}]&={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}\\{\mathopen {[}}y_{0},y_{1},y_{2}]&={\frac {{\mathopen {[}}y_{1},y_{2}]-{\mathopen {[}}y_{0},y_{1}]}{x_{2}-x_{0}}}={\frac {{\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}-{\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}}{x_{2}-x_{0}}}={\frac {y_{2}-y_{1}}{(x_{2}-x_{1})(x_{2}-x_{0})}}-{\frac {y_{1}-y_{0}}{(x_{1}-x_{0})(x_{2}-x_{0})}}\\{\mathopen {[}}y_{0},y_{1},y_{2},y_{3}]&={\frac {{\mathopen {[}}y_{1},y_{2},y_{3}]-{\mathopen {[}}y_{0},y_{1},y_{2}]}{x_{3}-x_{0}}}\end{aligned}}}

To make the recursive process more clear, the divided differences can be put in a tabular form:

${\displaystyle {\begin{matrix}x_{0}&y_{0}=[y_{0}]&&&\\&&[y_{0},y_{1}]&&\\x_{1}&y_{1}=[y_{1}]&&[y_{0},y_{1},y_{2}]&\\&&[y_{1},y_{2}]&&[y_{0},y_{1},y_{2},y_{3}]\\x_{2}&y_{2}=[y_{2}]&&[y_{1},y_{2},y_{3}]&\\&&[y_{2},y_{3}]&&\\x_{3}&y_{3}=[y_{3}]&&&\\\end{matrix}}}$
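The tabular scheme is also the standard way to compute all divided differences in O(k²) operations. A minimal Python sketch (the names are ours, not from the article):

```python
def divided_difference_table(xs, ys):
    """table[j][nu] holds [y_nu, ..., y_{nu+j}], i.e. the j-th column
    of the triangular scheme above."""
    table = [list(ys)]                       # column j = 0: [y_nu] = y_nu
    for j in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[nu + 1] - prev[nu]) / (xs[nu + j] - xs[nu])
                      for nu in range(len(xs) - j)])
    return table

# y = x^3 on the nodes 0, 1, 2, 3; the top diagonal [y_0], [y_0,y_1], ...
# gives the coefficients of the Newton form: 0, 1, 3, 1
table = divided_difference_table([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0])
print([col[0] for col in table])  # [0.0, 1.0, 3.0, 1.0]
```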

## Properties

• Linearity:
${\displaystyle (f+g)[x_{0},\dots ,x_{n}]=f[x_{0},\dots ,x_{n}]+g[x_{0},\dots ,x_{n}]}$
${\displaystyle (\lambda \cdot f)[x_{0},\dots ,x_{n}]=\lambda \cdot f[x_{0},\dots ,x_{n}]}$
• Leibniz rule:
${\displaystyle (f\cdot g)[x_{0},\dots ,x_{n}]=f[x_{0}]\cdot g[x_{0},\dots ,x_{n}]+f[x_{0},x_{1}]\cdot g[x_{1},\dots ,x_{n}]+\dots +f[x_{0},\dots ,x_{n}]\cdot g[x_{n}]}$
• Divided differences are symmetric: if ${\displaystyle \sigma :\{0,\dots ,n\}\to \{0,\dots ,n\}}$ is a permutation, then
${\displaystyle f[x_{0},\dots ,x_{n}]=f[x_{\sigma (0)},\dots ,x_{\sigma (n)}]}$
• Mean value theorem for divided differences: if ${\displaystyle f}$ is ${\displaystyle n}$ times differentiable, then
${\displaystyle f[x_{0},\dots ,x_{n}]={\frac {f^{(n)}(\xi )}{n!}}}$ where ${\displaystyle \xi }$ is in the open interval determined by the smallest and largest of the ${\displaystyle x_{k}}$'s.
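The symmetry property is easy to check numerically. A small Python sketch (the helper `dd` is our transcription of the recursive definition from the article):

```python
from itertools import permutations

def dd(xs, ys):
    """Recursive forward divided difference, as defined above."""
    if len(xs) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

xs, ys = [0.0, 1.0, 4.0], [2.0, -1.0, 3.0]
# evaluate [y_0, y_1, y_2] under every permutation of the nodes;
# rounding collapses floating-point noise
vals = {round(dd([xs[i] for i in p], [ys[i] for i in p]), 10)
        for p in permutations(range(3))}
print(len(vals))  # 1 -- all six orderings give the same value
```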

### Matrix form

The divided difference scheme can be put into an upper triangular matrix. Let ${\displaystyle T_{f}(x_{0},\dots ,x_{n})={\begin{pmatrix}f[x_{0}]&f[x_{0},x_{1}]&f[x_{0},x_{1},x_{2}]&\ldots &f[x_{0},\dots ,x_{n}]\\0&f[x_{1}]&f[x_{1},x_{2}]&\ldots &f[x_{1},\dots ,x_{n}]\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\ldots &f[x_{n}]\end{pmatrix}}}$.

Then the following holds:

• ${\displaystyle T_{f+g}x=T_{f}x+T_{g}x}$
• ${\displaystyle T_{f\cdot g}x=T_{f}x\cdot T_{g}x}$
This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. In summary, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring.
• Since ${\displaystyle T_{f}x}$ is a triangular matrix, its eigenvalues are ${\displaystyle f(x_{0}),\dots ,f(x_{n})}$.
• Let ${\displaystyle \delta _{\xi }}$ be a Kronecker delta-like function, that is
${\displaystyle \delta _{\xi }(t)={\begin{cases}1&:t=\xi ,\\0&:{\mbox{else}}.\end{cases}}}$
Since ${\displaystyle f\cdot \delta _{\xi }=f(\xi )\cdot \delta _{\xi }}$, the function ${\displaystyle \delta _{\xi }}$ is an eigenfunction of pointwise function multiplication. In this sense, ${\displaystyle T_{\delta _{x_{i}}}x}$ is an "eigenmatrix" of ${\displaystyle T_{f}x}$: ${\displaystyle T_{f}x\cdot T_{\delta _{x_{i}}}x=f(x_{i})\cdot T_{\delta _{x_{i}}}x}$. Since all columns of ${\displaystyle T_{\delta _{x_{i}}}x}$ are multiples of each other, the matrix rank of ${\displaystyle T_{\delta _{x_{i}}}x}$ is 1. Hence the matrix of all eigenvectors of ${\displaystyle T_{f}x}$ can be composed from the ${\displaystyle i}$-th column of each ${\displaystyle T_{\delta _{x_{i}}}x}$. Denote this matrix of eigenvectors by ${\displaystyle Ux}$. For example:
${\displaystyle U(x_{0},x_{1},x_{2},x_{3})={\begin{pmatrix}1&{\frac {1}{(x_{1}-x_{0})}}&{\frac {1}{(x_{2}-x_{0})\cdot (x_{2}-x_{1})}}&{\frac {1}{(x_{3}-x_{0})\cdot (x_{3}-x_{1})\cdot (x_{3}-x_{2})}}\\0&1&{\frac {1}{(x_{2}-x_{1})}}&{\frac {1}{(x_{3}-x_{1})\cdot (x_{3}-x_{2})}}\\0&0&1&{\frac {1}{(x_{3}-x_{2})}}\\0&0&0&1\end{pmatrix}}}$
The diagonalization of ${\displaystyle T_{f}x}$ can be written as
${\displaystyle Ux\cdot \operatorname {diag} (f(x_{0}),\dots ,f(x_{n}))=T_{f}x\cdot Ux}$.
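The matrix identities above can be verified directly. The following Python sketch (plain lists; the helper names are ours) builds ${\displaystyle T_{f}x}$ from the recursive definition and checks the product rule, i.e. the Leibniz rule in matrix form:

```python
def T(f, xs):
    """Upper triangular matrix T_f(x_0, ..., x_n) of divided differences:
    entry (i, i+j) is f[x_i, ..., x_{i+j}]."""
    n = len(xs)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = f(xs[i])
    for j in range(1, n):
        for i in range(n - j):
            M[i][i + j] = (M[i + 1][i + j] - M[i][i + j - 1]) / (xs[i + j] - xs[i])
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

xs = [0.0, 1.0, 3.0]
f, g = (lambda x: x * x), (lambda x: x + 2)
lhs = T(lambda x: f(x) * g(x), xs)   # T_{f·g}
rhs = matmul(T(f, xs), T(g, xs))     # T_f · T_g
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
          for i in range(3) for j in range(3)))  # True
```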

## Alternative definitions

### Expanded form

{\displaystyle {\begin{aligned}f[x_{0}]&=f(x_{0})\\f[x_{0},x_{1}]&={\frac {f(x_{0})}{(x_{0}-x_{1})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})}}\\f[x_{0},x_{1},x_{2}]&={\frac {f(x_{0})}{(x_{0}-x_{1})\cdot (x_{0}-x_{2})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})\cdot (x_{1}-x_{2})}}+{\frac {f(x_{2})}{(x_{2}-x_{0})\cdot (x_{2}-x_{1})}}\\f[x_{0},x_{1},x_{2},x_{3}]&={\frac {f(x_{0})}{(x_{0}-x_{1})\cdot (x_{0}-x_{2})\cdot (x_{0}-x_{3})}}+{\frac {f(x_{1})}{(x_{1}-x_{0})\cdot (x_{1}-x_{2})\cdot (x_{1}-x_{3})}}+{\frac {f(x_{2})}{(x_{2}-x_{0})\cdot (x_{2}-x_{1})\cdot (x_{2}-x_{3})}}+\\&\quad \quad {\frac {f(x_{3})}{(x_{3}-x_{0})\cdot (x_{3}-x_{1})\cdot (x_{3}-x_{2})}}\\f[x_{0},\dots ,x_{n}]&=\sum _{j=0}^{n}{\frac {f(x_{j})}{\prod _{k\in \{0,\dots ,n\}\setminus \{j\}}(x_{j}-x_{k})}}\end{aligned}}}

With the help of a polynomial function ${\displaystyle q}$ with ${\displaystyle q(\xi )=(\xi -x_{0})\cdots (\xi -x_{n})}$ this can be written as

${\displaystyle f[x_{0},\dots ,x_{n}]=\sum _{j=0}^{n}{\frac {f(x_{j})}{q'(x_{j})}}.}$
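For pairwise distinct nodes the expanded form can be evaluated directly. A hedged Python sketch (the function name is ours), checked against a known value — the divided difference of ${\displaystyle x^{3}}$ on three nodes equals ${\displaystyle x_{0}+x_{1}+x_{2}}$:

```python
from math import prod

def dd_expanded(f, xs):
    """Expanded form: sum over j of f(x_j) / prod_{k != j} (x_j - x_k).
    Valid for pairwise distinct nodes only."""
    return sum(f(xj) / prod(xj - xk for k, xk in enumerate(xs) if k != j)
               for j, xj in enumerate(xs))

# divided difference of x^3 on three distinct nodes equals x_0 + x_1 + x_2
xs = [0.0, 0.5, 2.0]
print(abs(dd_expanded(lambda x: x ** 3, xs) - sum(xs)) < 1e-12)  # True
```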

Alternatively, we can allow counting backwards from the start of the sequence by defining ${\displaystyle x_{k}=x_{k+n+1}=x_{k-(n+1)}}$ whenever ${\displaystyle k<0}$ or ${\displaystyle n<k}$. This definition allows ${\displaystyle x_{-1}}$ to be interpreted as ${\displaystyle x_{n}}$, ${\displaystyle x_{-2}}$ to be interpreted as ${\displaystyle x_{n-1}}$, ${\displaystyle x_{-n}}$ to be interpreted as ${\displaystyle x_{0}}$, etc. The expanded form of the divided difference thus becomes

${\displaystyle f[x_{0},\dots ,x_{n}]=\sum _{j=0}^{n}{\frac {f(x_{j})}{\prod \limits _{k=j-n}^{j-1}(x_{j}-x_{k})}}+\sum _{j=0}^{n}{\frac {f(x_{j})}{\prod \limits _{k=j+1}^{j+n}(x_{j}-x_{k})}}}$

Yet another characterization utilizes limits:

${\displaystyle f[x_{0},\dots ,x_{n}]=\sum _{j=0}^{n}\lim _{x\rightarrow x_{j}}\left[{\frac {f(x_{j})(x-x_{j})}{\prod \limits _{k=0}^{n}(x-x_{k})}}\right]}$

#### Partial fractions

You can represent partial fractions using the expanded form of divided differences. (This does not simplify computation, but is interesting in itself.) If ${\displaystyle p}$ and ${\displaystyle q}$ are polynomial functions, where ${\displaystyle \mathrm {deg} \ p<\mathrm {deg} \ q}$ and ${\displaystyle q}$ is given in terms of linear factors by ${\displaystyle q(\xi )=(\xi -x_{1})\cdot \dots \cdot (\xi -x_{n})}$, then it follows from partial fraction decomposition that

${\displaystyle {\frac {p(\xi )}{q(\xi )}}=\left(t\to {\frac {p(t)}{\xi -t}}\right)[x_{1},\dots ,x_{n}].}$

If limits of the divided differences are accepted, then this connection does also hold, if some of the ${\displaystyle x_{j}}$ coincide.

If ${\displaystyle f}$ is a polynomial function of arbitrary degree which is decomposed as ${\displaystyle f(x)=p(x)+q(x)\cdot d(x)}$ by polynomial division of ${\displaystyle f}$ by ${\displaystyle q}$, then

${\displaystyle {\frac {p(\xi )}{q(\xi )}}=\left(t\to {\frac {f(t)}{\xi -t}}\right)[x_{1},\dots ,x_{n}].}$
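The partial-fraction identity can be spot-checked numerically. A small Python sketch under the stated assumptions (distinct linear factors, ${\displaystyle \deg p<\deg q}$; the helper `dd` is ours):

```python
def dd(g, xs):
    """Recursive divided difference of a callable g on the nodes xs."""
    if len(xs) == 1:
        return g(xs[0])
    return (dd(g, xs[1:]) - dd(g, xs[:-1])) / (xs[-1] - xs[0])

# q(t) = (t - 1)(t - 2), p(t) = t, evaluated at xi = 5
xi = 5.0
p = lambda t: t
lhs = p(xi) / ((xi - 1.0) * (xi - 2.0))          # p(xi)/q(xi) = 5/12
rhs = dd(lambda t: p(t) / (xi - t), [1.0, 2.0])  # (t -> p(t)/(xi - t))[x_1, x_2]
print(abs(lhs - rhs) < 1e-12)  # True
```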

### Peano form

The divided differences can be expressed as

${\displaystyle f[x_{0},\ldots ,x_{n}]={\frac {1}{n!}}\int _{x_{0}}^{x_{n}}f^{(n)}(t)B_{n-1}(t)\,dt}$

where ${\displaystyle B_{n-1}}$ is a B-spline of degree ${\displaystyle n-1}$ for the data points ${\displaystyle x_{0},\dots ,x_{n}}$ and ${\displaystyle f^{(n)}}$ is the ${\displaystyle n}$-th derivative of the function ${\displaystyle f}$.

This is called the Peano form of the divided differences and ${\displaystyle B_{n-1}}$ is called the Peano kernel for the divided differences, both named after Giuseppe Peano.

### Taylor form

#### First order

If the nodes cluster together, the numerical computation of the divided differences becomes inaccurate: one divides two quantities that are both close to zero, and the numerator carries a high relative error because it is a difference of nearly equal values. However, we know that difference quotients approximate the derivative and vice versa:

${\displaystyle {\frac {f(y)-f(x)}{y-x}}\approx f'(x)}$ for ${\displaystyle x\approx y}$

This approximation can be turned into an identity whenever Taylor's theorem applies.

${\displaystyle f(y)=f(x)+f'(x)\cdot (y-x)+f''(x)\cdot {\frac {(y-x)^{2}}{2!}}+f'''(x)\cdot {\frac {(y-x)^{3}}{3!}}+\dots }$
${\displaystyle \Rightarrow {\frac {f(y)-f(x)}{y-x}}=f'(x)+f''(x)\cdot {\frac {y-x}{2!}}+f'''(x)\cdot {\frac {(y-x)^{2}}{3!}}+\dots }$

You can eliminate the odd powers of ${\displaystyle y-x}$ by expanding the Taylor series at the center between ${\displaystyle x}$ and ${\displaystyle y}$:

${\displaystyle x=m-h,y=m+h}$, that is ${\displaystyle m={\frac {x+y}{2}},h={\frac {y-x}{2}}}$
${\displaystyle f(m+h)=f(m)+f'(m)\cdot h+f''(m)\cdot {\frac {h^{2}}{2!}}+f'''(m)\cdot {\frac {h^{3}}{3!}}+\dots }$
${\displaystyle f(m-h)=f(m)-f'(m)\cdot h+f''(m)\cdot {\frac {h^{2}}{2!}}-f'''(m)\cdot {\frac {h^{3}}{3!}}+\dots }$
${\displaystyle {\frac {f(y)-f(x)}{y-x}}={\frac {f(m+h)-f(m-h)}{2\cdot h}}=f'(m)+f'''(m)\cdot {\frac {h^{2}}{3!}}+\dots }$
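The cancellation of the odd powers is what makes the symmetric quotient second-order accurate. A quick numerical illustration in Python (the choice of ${\displaystyle f=\exp }$, ${\displaystyle m=0}$ and ${\displaystyle h}$ is ours):

```python
from math import exp

# f = exp, so f'(0) = 1; compare one-sided and symmetric difference quotients
m, h = 0.0, 1e-3
forward = (exp(m + h) - exp(m)) / h              # error ~ f''(m) * h / 2
central = (exp(m + h) - exp(m - h)) / (2 * h)    # error ~ f'''(m) * h^2 / 6
print(abs(forward - 1.0))  # about 5e-4
print(abs(central - 1.0))  # about 1.7e-7
```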

#### Higher order

The Taylor series or any other representation with function series can in principle be used to approximate divided differences. Taylor series are infinite sums of power functions. The mapping from a function ${\displaystyle f}$ to a divided difference ${\displaystyle f[x_{0},\dots ,x_{n}]}$ is a linear functional. We can as well apply this functional to the function summands.

Express power notation with an ordinary function: ${\displaystyle p_{n}(x)=x^{n}.}$

Regular Taylor series is a weighted sum of power functions: ${\displaystyle f=f(0)\cdot p_{0}+f'(0)\cdot p_{1}+{\frac {f''(0)}{2!}}\cdot p_{2}+{\frac {f'''(0)}{3!}}\cdot p_{3}+\dots }$

Taylor series for divided differences: ${\displaystyle f[x_{0},\dots ,x_{n}]=f(0)\cdot p_{0}[x_{0},\dots ,x_{n}]+f'(0)\cdot p_{1}[x_{0},\dots ,x_{n}]+{\frac {f''(0)}{2!}}\cdot p_{2}[x_{0},\dots ,x_{n}]+{\frac {f'''(0)}{3!}}\cdot p_{3}[x_{0},\dots ,x_{n}]+\dots }$

We know that the first ${\displaystyle n}$ terms vanish, because we have a higher difference order than polynomial order, and in the following term the divided difference is one:

${\displaystyle {\begin{array}{llcl}\forall j<n:&p_{j}[x_{0},\dots ,x_{n}]&=&0\\&p_{n}[x_{0},\dots ,x_{n}]&=&1\end{array}}}$

It follows that the Taylor series for the divided difference essentially starts with ${\displaystyle {\frac {f^{(n)}(0)}{n!}}}$, which is also a simple approximation of the divided difference, according to the mean value theorem for divided differences.

If we had to compute the divided differences for the power functions in the usual way, we would encounter the same numerical problems as when computing the divided difference of ${\displaystyle f}$. Fortunately, there is a simpler way. It holds that

${\displaystyle t^{n}=(1-x_{0}\cdot t)\cdot \dots \cdot (1-x_{n}\cdot t)\cdot (p_{0}[x_{0},\dots ,x_{n}]+p_{1}[x_{0},\dots ,x_{n}]\cdot t+p_{2}[x_{0},\dots ,x_{n}]\cdot t^{2}+\dots ).}$

Consequently, we can compute the divided differences of ${\displaystyle p_{n}}$ by a division of formal power series. See how this reduces to the successive computation of powers when we compute ${\displaystyle p_{n}[h]}$ for several ${\displaystyle n}$.
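This division of formal power series is cheap to carry out: multiplying a truncated series by ${\displaystyle 1/(1-x\cdot t)}$ is a single running recurrence. A Python sketch (the function name is ours) that reads ${\displaystyle p_{m}[x_{0},\dots ,x_{n}]}$ off the series ${\displaystyle t^{n}/((1-x_{0}t)\cdots (1-x_{n}t))}$:

```python
def dd_powers(xs, terms):
    """Coefficients p_m[x_0, ..., x_n] for m = 0, ..., terms-1, obtained by
    expanding the formal power series t^n / ((1 - x_0 t) ... (1 - x_n t))."""
    n = len(xs) - 1
    series = [1.0] + [0.0] * (terms - 1)
    for x in xs:
        # multiply by 1/(1 - x*t): c_m <- c_m + x * c_{m-1}, ascending in m
        for m in range(1, terms):
            series[m] += x * series[m - 1]
    return [0.0] * n + series[:terms - n]   # the factor t^n shifts the series

# nodes 0, 1, 3: p_m vanishes for m < 2, p_2 = 1, p_3 = x_0 + x_1 + x_2 = 4
print(dd_powers([0.0, 1.0, 3.0], 4))  # [0.0, 0.0, 1.0, 4.0]
```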

If you need to compute a whole divided difference scheme with respect to a Taylor series, see the section on polynomials and power series below.

## Polynomials and power series

Divided differences of polynomials are particularly interesting, because they can benefit from the Leibniz rule. The matrix ${\displaystyle J}$ with

${\displaystyle J={\begin{pmatrix}x_{0}&1&0&0&\cdots &0\\0&x_{1}&1&0&\cdots &0\\0&0&x_{2}&1&&0\\\vdots &\vdots &&\ddots &\ddots &\\0&0&0&0&&x_{n}\end{pmatrix}}}$

contains the divided difference scheme for the identity function with respect to the nodes ${\displaystyle x_{0},\dots ,x_{n}}$, thus ${\displaystyle J^{n}}$ contains the divided differences for the power function with exponent ${\displaystyle n}$. Consequently, you can obtain the divided differences for a polynomial function ${\displaystyle \varphi (p)}$ with respect to the polynomial ${\displaystyle p}$ by applying ${\displaystyle p}$ (more precisely: its corresponding matrix polynomial function ${\displaystyle \varphi _{\mathrm {M} }(p)}$) to the matrix ${\displaystyle J}$.

${\displaystyle \varphi (p)(\xi )=a_{0}+a_{1}\cdot \xi +\dots +a_{n}\cdot \xi ^{n}}$
${\displaystyle \varphi _{\mathrm {M} }(p)(J)=a_{0}+a_{1}\cdot J+\dots +a_{n}\cdot J^{n}}$
${\displaystyle ={\begin{pmatrix}\varphi (p)[x_{0}]&\varphi (p)[x_{0},x_{1}]&\varphi (p)[x_{0},x_{1},x_{2}]&\ldots &\varphi (p)[x_{0},\dots ,x_{n}]\\0&\varphi (p)[x_{1}]&\varphi (p)[x_{1},x_{2}]&\ldots &\varphi (p)[x_{1},\dots ,x_{n}]\\\vdots &\ddots &\ddots &\ddots &\vdots \\0&\ldots &0&0&\varphi (p)[x_{n}]\end{pmatrix}}}$

This is known as Opitz' formula.[2][3]

Now consider increasing the degree of ${\displaystyle p}$ to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let ${\displaystyle f}$ be a function which corresponds to a power series. You can compute the divided difference scheme by applying the corresponding matrix series to ${\displaystyle J}$. If the nodes ${\displaystyle x_{0},\dots ,x_{n}}$ are all equal, then ${\displaystyle J}$ is a Jordan block, and the computation boils down to generalizing a scalar function to a matrix function using Jordan decomposition.
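Opitz' formula can be checked in a few lines of Python. This sketch (helper names ours) applies a polynomial to ${\displaystyle J}$ by Horner-free accumulation of powers and reads the divided differences off the first row:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def opitz(coeffs, xs):
    """p(J) for p(xi) = a_0 + a_1*xi + ...; by Opitz' formula its first
    row holds p[x_0], p[x_0, x_1], ..., p[x_0, ..., x_n]."""
    n = len(xs)
    J = [[xs[i] if j == i else (1.0 if j == i + 1 else 0.0)
          for j in range(n)] for i in range(n)]
    P = [[coeffs[0] if i == j else 0.0 for j in range(n)] for i in range(n)]
    Jk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # J^0
    for a in coeffs[1:]:
        Jk = matmul(Jk, J)                  # next power of J
        P = [[P[i][j] + a * Jk[i][j] for j in range(n)] for i in range(n)]
    return P

# p(xi) = xi^2 on the nodes 0, 1, 3: p[x_0] = 0, p[x_0,x_1] = 1, p[x_0,x_1,x_2] = 1
print(opitz([0.0, 0.0, 1.0], [0.0, 1.0, 3.0])[0])  # [0.0, 1.0, 1.0]
```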

## Forward differences

When the data points are equidistantly distributed, we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Note that the "divided" portion of the forward divided difference must still be computed in order to recover the forward divided difference from the forward difference.

### Definition

Given n data points

${\displaystyle (x_{0},y_{0}),\ldots ,(x_{n-1},y_{n-1})}$

with

${\displaystyle x_{\nu }=x_{0}+\nu h,\ h>0,\ \nu =0,\ldots ,n-1}$

the divided differences can be calculated via forward differences defined as

${\displaystyle \Delta ^{(0)}y_{i}:=y_{i}}$
${\displaystyle \Delta ^{(k)}y_{i}:=\Delta ^{(k-1)}y_{i+1}-\Delta ^{(k-1)}y_{i},\ k\geq 1.}$

The relationship between divided differences and forward differences is[4]

${\displaystyle f[x_{0},x_{1},\ldots ,x_{k}]={\frac {1}{k!h^{k}}}\Delta ^{(k)}f(x_{0}).}$
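The relationship above gives a cheap way to recover divided differences on an equidistant grid: compute the forward-difference table, then divide by ${\displaystyle k!h^{k}}$. A Python sketch (names ours):

```python
from math import factorial

def leading_forward_diffs(ys):
    """The top entries Delta^(k) y_0 for k = 0, ..., len(ys) - 1."""
    out, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        out.append(col[0])
    return out

# f(x) = x^3 on the equidistant nodes 0, h, 2h, 3h with h = 0.5
h = 0.5
ys = [(i * h) ** 3 for i in range(4)]
dds = [d / (factorial(k) * h ** k)
       for k, d in enumerate(leading_forward_diffs(ys))]
print(dds)  # [0.0, 0.25, 1.5, 1.0] -- the divided differences f[x_0, ..., x_k]
```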

### Example

${\displaystyle {\begin{matrix}y_{0}&&&\\&\Delta y_{0}&&\\y_{1}&&\Delta ^{2}y_{0}&\\&\Delta y_{1}&&\Delta ^{3}y_{0}\\y_{2}&&\Delta ^{2}y_{1}&\\&\Delta y_{2}&&\\y_{3}&&&\\\end{matrix}}}$