08/10/2007, 08:39 PM

Introduction

Definition: An analytic function with a parabolic rationally neutral fixed-point is a function f(x) that is complex-analytic in x and satisfies \( f(x_0) = x_0 \) and \( f'(x_0) = 1 \). D. Geisler calls the iteration of this kind of function parabolic iteration.

Notation: We use \( f(x) = x + \sum_{k=2}^{\infty} f_k (x - x_0)^k \) since this ensures that \( f(x_0) = x_0,f'(x_0) = 1 \).

Examples of analytic functions with a parabolic rationally neutral fixed-point include: the sine function sin(x) \( (x_0=0) \), the decremented exponential \( e^x - 1 \) \( (x_0=0) \), the second hyper-power \( x^x \) \( (x_0=1) \), and so on. This means each of these functions can be iterated with parabolic iteration.

Iterate-Derivative Matrices

Definition: An iterate-derivative matrix (IDM) of f(x) about \( x=x_0 \) is a matrix \( \text{IDM}[f](x_0) \) where:

\( IDM_{jk}[f](x_0) = (f^{[j]})^{(k)}(x_0)/k! \) and \( (0 \le j,k \le \infty) \). For approximations, \( \text{IDM}[f](x_0)_n \) implies \( (0 \le j,k \le n) \). If f(x) is an unnamed expression, then: \( \text{IDM}[f] = \text{IDM}_x[f(x)] \). If no value is given, then zero is assumed: \( \text{IDM}[f] = \text{IDM}[f](0) \).

As you can see, the name stems from the iterate \( f^{[j]} \) and the derivative \( (f^{[j]})^{(k)} \) in the definition. For an example of an IDM for this kind of function, using \( f_k = f^{(k)}(x_0)/k! \):

\(

\text{IDM}[f](x_0)_4 =

\left(

\begin{array}{ccccc}

x_0 & 1 & 0 & 0 & 0 \\

x_0 & 1 & f_2 & f_3 & f_4 \\

x_0 & 1 & 2f_2 & 2f_2^2 + 2f_3 & f_2^3 + 5f_2f_3 + 2f_4 \\

x_0 & 1 & 3f_2 & 6f_2^2 + 3f_3 & 9f_2^3 + 15f_2f_3 + 3f_4 \\

x_0 & 1 & 4f_2 & 12f_2^2 + 4f_3 & 30f_2^3 + 30f_2f_3 + 4f_4

\end{array}

\right)

\)
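As a concrete check, these rows can be generated numerically by composing truncated power series. The sketch below is my own illustration (not part of the original text), using the sample function \( f(x) = x + x^2 \) about \( x_0 = 0 \), i.e. \( f_2 = 1 \) and \( f_3 = f_4 = 0 \):

```python
# Truncated power series are lists c with f(x) = sum(c[k] * x**k),
# kept to order N - 1. Here N = 5 matches IDM[f](0)_4 above.
N = 5

def compose(f, g):
    """Coefficients of f(g(x)) truncated to order N - 1; assumes g[0] == 0."""
    result = [0.0] * N
    power = [1.0] + [0.0] * (N - 1)            # g(x)**0 = 1
    for fk in f:
        result = [r + fk * p for r, p in zip(result, power)]
        # next power of g, truncated at order N - 1
        power = [sum(power[i] * g[k - i] for i in range(k + 1)) for k in range(N)]
    return result

f = [0.0, 1.0, 1.0, 0.0, 0.0]                  # f(x) = x + x^2, so f_2 = 1

# Row j of the IDM holds the series coefficients of the iterate f^[j].
idm = [[0.0, 1.0, 0.0, 0.0, 0.0]]              # f^[0](x) = x
for j in range(N - 1):
    idm.append(compose(f, idm[-1]))

for row in idm:
    print(row)                                 # row 2 gives 0, 1, 2, 2, 1
```

With \( f_2 = 1 \) and \( f_3 = f_4 = 0 \), row 2 evaluates to \( (x_0, 1, 2f_2, 2f_2^2 + 2f_3, f_2^3 + 5f_2f_3 + 2f_4) = (0, 1, 2, 2, 1) \), matching the symbolic matrix above.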


Using Lagrange interpolation, each column can be expressed as a finite polynomial in the row index j:

- Column 0: \( x_0 \)

- Column 1: 1

- Column 2: \( jf_2 \)

- Column 3: \( 2\left({j \atop 2}\right)f_2^2 + jf_3 \)

- Column 4: \( \left(6\left({j \atop 3}\right) + \left({j \atop 2}\right)\right)f_2^3 + 5\left({j \atop 2}\right)f_2f_3 + jf_4 \)

One way of doing parabolic iteration is simply to use the IDM and Lagrange interpolation. Since column k is a polynomial in j of degree \( k-1 \), only k points are needed for the Lagrange interpolating polynomial. Putting it all together (using \( f_k = f^{(k)}(x_0)/k! \)), we get:

\( f^{[t]}(x) = x_0 + (x - x_0) + t f_2 (x - x_0)^2 + \left( 2\left({t \atop 2}\right)f_2^2 + t f_3 \right)(x - x_0)^3 + \cdots \)

which expresses the continuous iteration of an analytic function with a parabolic rationally neutral fixed-point. For this formula to apply to continuous rather than just discrete iteration, the binomial coefficients must be evaluated with the gamma function or first expanded as polynomials in t. Notice that we obtained this formula using the IDM and Lagrange interpolation. We will compare this with other formulas derived later.
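The method just described can be sketched in a few lines of code. This is my own illustration (not from the references), again using the sample function \( f(x) = x + x^2 \), i.e. \( f_2 = 1 \), \( f_3 = f_4 = 0 \), about \( x_0 = 0 \):

```python
# IDM + Lagrange interpolation for f(x) = x + x^2 about x_0 = 0.
N = 5  # truncation order: coefficients of x^0 .. x^4

def compose(f, g):
    """Series of f(g(x)) truncated to order N - 1; assumes g[0] == 0."""
    result = [0.0] * N
    power = [1.0] + [0.0] * (N - 1)
    for fk in f:
        result = [r + fk * p for r, p in zip(result, power)]
        power = [sum(power[i] * g[k - i] for i in range(k + 1)) for k in range(N)]
    return result

def lagrange(points, t):
    """Evaluate the Lagrange polynomial through the points (x_i, y_i) at t."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

f = [0.0, 1.0, 1.0, 0.0, 0.0]

# Rows of the IDM: series coefficients of f^[0], f^[1], ..., f^[N-1].
rows = [[0.0, 1.0, 0.0, 0.0, 0.0]]
for _ in range(N - 1):
    rows.append(compose(f, rows[-1]))

def iterate_coeffs(t):
    """Coefficients of f^[t]: column k is a degree-(k-1) polynomial in j,
    so the k sample points j = 0 .. k-1 suffice."""
    coeffs = [0.0, 1.0]                       # columns 0 and 1: x_0 and 1
    for k in range(2, N):
        pts = [(j, rows[j][k]) for j in range(k)]
        coeffs.append(lagrange(pts, t))
    return coeffs

# Integer check: t = 4 lies outside every sample range, yet the
# interpolated coefficients match the directly composed f^[4].
print(iterate_coeffs(4.0))
print(iterate_coeffs(0.5))   # coefficients of the half-iterate f^[1/2]
```

A useful self-consistency check: composing the half-iterate with itself recovers the series of f up to the truncation order.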

Power-Derivative Matrices

Definition: A power-derivative matrix (PDM) of f(x) about \( x=0 \) is a matrix \( \text{PDM}[f] \) where \( PDM_{jk}[f] = (f^j)^{(k)}(0)/k! \), and \( (0 \le j,k \le \infty) \). For approximations, \( \text{PDM}[f]_n \) implies \( (0 \le j,k \le n) \). If f(x) is an unnamed expression, then: \( \text{PDM}[f] = \text{PDM}_x[f(x)] \). If \( x_0 \) is given, then: \( \text{PDM}[f](x_0) = \text{PDM}_x[x-x_0]\text{PDM}[f]\text{PDM}_x[x+x_0] \).

The PDM about \( x=x_0 \) is defined such that: \( \text{PDM}[f](x_0)^n = \text{PDM}_x[x-x_0]\text{PDM}[f]^n\text{PDM}_x[x+x_0] \) and is analogous to using a function \( g(x) = f(x + x_0) - x_0 \) such that g(0) = 0 if and only if \( f(x_0) = x_0 \). Notice that \( g^{[2]}(x) = f(f(x + x_0) - x_0 + x_0) - x_0 = f^{[2]}(x + x_0) - x_0 \), supporting said property. The reason why it is analogous is because the PDM is most well-known for turning function composition into matrix multiplication. This is also the reason for the \( 1/k! \) in the definition. This can best be seen with the following identities:

- \( \text{PDM}_x[f(g(x))] = \text{PDM}[f]\text{PDM}[g] \)

- \( \text{PDM}_x[c f(x)] = \text{PDM}_x[cx]\text{PDM}[f] \)

- \( \text{PDM}[f^{[n]}] = \text{PDM}[f]^n \)

As you can see, the PDM turns composition into matrix multiplication, and iteration into matrix powers. This is the key to understanding PDMs. Once this is understood, one can begin thinking in linear algebra instead of function theory, for they become one and the same.
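These identities can be verified numerically on truncated series. When \( f(0) = g(0) = 0 \) the truncated PDMs are triangular and the identities hold exactly at each truncation order. A minimal sketch (my own illustration, with the hypothetical examples \( f(x) = x + x^2 \) and \( g(x) = x + 2x^3 \)):

```python
N = 5  # truncation order: coefficients of x^0 .. x^4

def multiply(a, b):
    """Product of two truncated series (lists of length N)."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

def compose(f, g):
    """Series of f(g(x)); assumes g[0] == 0."""
    result, power = [0.0] * N, [1.0] + [0.0] * (N - 1)
    for fk in f:
        result = [r + fk * p for r, p in zip(result, power)]
        power = multiply(power, g)
    return result

def pdm(f):
    """PDM[f]_{N-1}: row j holds the series coefficients of f(x)**j."""
    rows, power = [], [1.0] + [0.0] * (N - 1)
    for _ in range(N):
        rows.append(power)
        power = multiply(power, f)
    return rows

def matmul(a, b):
    return [[sum(a[j][m] * b[m][k] for m in range(N)) for k in range(N)]
            for j in range(N)]

f = [0.0, 1.0, 1.0, 0.0, 0.0]   # f(x) = x + x^2
g = [0.0, 1.0, 0.0, 2.0, 0.0]   # g(x) = x + 2x^3

# Composition turns into matrix multiplication:
assert pdm(compose(f, g)) == matmul(pdm(f), pdm(g))
# Iteration turns into matrix powers:
assert pdm(compose(f, f)) == matmul(pdm(f), pdm(f))
```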

History: Some variants of power-derivative matrices (PDMs) are also known as Bell matrices (if \( f(0) = 0 \)), Carleman matrices, linearization matrices, or embedding matrices. As mentioned earlier, they turn iteration into evaluating a matrix power. Matrix powers are easy to evaluate for integer exponents, but for non-integer exponents the problem can be posed as a problem of matrix diagonalization. There are three types of matrices that are easy to diagonalize: \( 2\times2 \) matrices, \( 3\times3 \) matrices, and triangular matrices.

History: According to Bennet, the first use of PDMs for continuous iteration was by Koch. Kowalski et al. call the more general form a Carleman matrix, and along with Aldrovandi et al. refer to the special case when \( f(0) = 0 \) as a Bell matrix. Historically, Bell matrices and Carleman matrices have been transposes of each other. Also, Bell matrices usually start with index 1, whereas Carleman matrices start with index 0. Here we use a common starting index of 0 for all PDMs, since it also facilitates the more general case.

For an example of a PDM for this kind of function, using \( f_k = f^{(k)}(x_0)/k! \):

\(

\text{PDM}[f](x_0)_3 =

\left(

\begin{array}{cccc}

1 & 0 & 0 & 0 \\

x_0 & 1 & f_2 & f_3 \\

x_0^2 & 2x_0 & 1 + 2x_0f_2 & 2f_2 + 2x_0f_3 \\

x_0^3 & 3x_0^2 & 3x_0 + 3x_0^2f_2 & 1 + 6x_0f_2 + 3x_0^2f_3

\end{array}

\right)

\)


and an example of the simplified case when the fixed-point is zero \( (x_0=0) \):

\(

\text{PDM}[f]_4 =

\left(

\begin{array}{ccccc}

1 & 0 & 0 & 0 & 0 \\

0 & 1 & f_2 & f_3 & f_4 \\

0 & 0 & 1 & 2f_2 & f_2^2 + 2f_3 \\

0 & 0 & 0 & 1 & 3f_2 \\

0 & 0 & 0 & 0 & 1

\end{array}

\right)

\)


As you can see, the PDM for \( (x_0=0) \) is a triangular matrix, which as mentioned before lends itself easily to evaluating matrix powers. To illustrate, here is the same matrix raised to an arbitrary power:

\(

\text{PDM}[f]_4^t =

\left(

\begin{array}{ccccc}

1 & 0 & 0 & 0 & 0 \\

0 & 1 & tf_2 & 2\left({t \atop 2}\right)f_2^2 + tf_3 & \left(6\left({t \atop 3}\right) + \left({t \atop 2}\right)\right)f_2^3 + 5\left({t \atop 2}\right)f_2f_3 + tf_4 \\

0 & 0 & 1 & 2tf_2 & (3t^2 - 2t)f_2^2 + 2tf_3 \\

0 & 0 & 0 & 1 & 3tf_2 \\

0 & 0 & 0 & 0 & 1

\end{array}

\right)

\)
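Since \( \text{PDM}[f] \) is unit triangular when \( x_0 = 0 \), the truncated matrix minus the identity is nilpotent, so both the matrix logarithm and exponential series terminate and \( \text{PDM}[f]^t = \exp(t \log \text{PDM}[f]) \) can be evaluated exactly. A minimal sketch (my own illustration, not from the sources cited), hard-coding the PDM of the sample function \( f(x) = x + x^2 \), i.e. \( f_2 = 1 \), \( f_3 = f_4 = 0 \):

```python
N = 5

def matmul(a, b):
    return [[sum(a[j][m] * b[m][k] for m in range(N)) for k in range(N)]
            for j in range(N)]

def madd(a, b, s=1.0):
    """Entrywise a + s*b."""
    return [[a[j][k] + s * b[j][k] for k in range(N)] for j in range(N)]

def scale(a, s):
    return [[s * a[j][k] for k in range(N)] for j in range(N)]

def identity():
    return [[float(j == k) for k in range(N)] for j in range(N)]

def mpower_t(m, t):
    """m**t = exp(t*log(m)) for a unit-triangular m: both series terminate,
    since m - I is nilpotent after truncation to N x N."""
    nil = madd(m, identity(), -1.0)              # m - I
    log_m = [[0.0] * N for _ in range(N)]
    term = identity()
    for k in range(1, N):
        term = matmul(term, nil)                 # (m - I)^k
        log_m = madd(log_m, term, (-1.0) ** (k + 1) / k)
    b = scale(log_m, t)                          # t * log(m)
    result, term = identity(), identity()
    for k in range(1, N):
        term = scale(matmul(term, b), 1.0 / k)   # b^k / k!
        result = madd(result, term)
    return result

# PDM[f]_4 for f(x) = x + x^2: row j holds the coefficients of (x + x^2)^j.
M = [[1.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 0.0, 1.0, 3.0],
     [0.0, 0.0, 0.0, 0.0, 1.0]]

H = mpower_t(M, 0.5)          # PDM of the half-iterate
# Row 1 of H holds the series coefficients of f^[1/2]:
print(H[1])                   # ≈ [0.0, 1.0, 0.5, -0.25, 0.25]
```

Row 1 reproduces the closed forms in the matrix above with \( t = 1/2 \): \( tf_2 = 0.5 \), \( 2\left({t \atop 2}\right)f_2^2 + tf_3 = -0.25 \), and so on.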


Another way of finding the continuous iteration of this special kind of function is using PDMs and matrix powers. Since this matrix represents increasing powers of a function and its derivatives, we only need the first power of the function to find its coefficients, which corresponds to the first row (not the zeroth). Putting it all together (using \( f_k = f^{(k)}(0)/k! \)), we get:

\( f^{[t]}(x) = x + t f_2 x^2 + \left( 2\left({t \atop 2}\right)f_2^2 + t f_3 \right)x^3 + \cdots \)

just like with the IDM method, only with \( x_0=0 \). With PDMs, however, there is an alternative to evaluating matrix powers. By peeling off a single multiplication (\( M^t = M^{t-1}M \), instead of \( t-1 \) multiplications), we can turn the matrix power into a recurrence equation:

\( PDM^t_{1k}[f] = \sum_{j=0}^{\infty}PDM^{t-1}_{1j}[f]PDM_{jk}[f] \)

which after solving gives:

\( f^{[t]}(x) = x + \sum_{k=2}^{\infty}x^k PDM^t_{1k}[f] \)

as required.

Double-Binomial Expansions

The first to devote a full-length paper to this method was S. C. Woon. The expansion is based on the binomial theorem, and as such, is easy to understand. Woon originally described his method in terms of the iteration of operators, but here we describe it in terms of functions:

\(

\begin{array}{rl}

f^{[t]}(x)

& = (w + (f - w))^{[t]}(x) = w^t(1 + (f/w - 1))^{[t]}(x) \\

& = w^t\left(\sum_{n=0}^{\infty} \left({t \atop n}\right) \left[\frac{1}{w}f - 1\right]^{[n]}\right)(x) \\

& = w^t\left(\sum_{n=0}^{\infty} \left({t \atop n}\right)

\sum_{m=0}^{n} \left({n \atop m}\right)(-1)^{n-m}\left(\frac{1}{w}f\right)^{[m]}\right)(x) \\

& = w^t \sum_{n=0}^{\infty} \left({t \atop n}\right)

\sum_{m=0}^{n} \left({n \atop m}\right)(-1)^{n-m} w^{-m} f^{[m]}(x)

\end{array}

\)

where \( f^{[0]}(x) = x \), w is a parameter, and of course: \( \left({t \atop n}\right) = \frac{1}{n!} \prod_{k=0}^{n-1} (t - k) \). Woon's expansion works for many analytic functions in general, but its convergence, which depends on the choice of w, is not well understood. For functions with parabolic fixed-points, another expansion can be used. If \( f(0) = 0 \) and \( f'(0) = 1 \), then:

\(

f^{[t]}(x) = \sum_{n=1}^{\infty} x^n \sum_{k=0}^{n-1} (-1)^{n-k-1}

\left({t \atop k}\right) \left({t - k - 1 \atop n - k - 1}\right) (f^{[k]})^{(n)}(0)/n!

\)

as noted by Trappmann. This expansion expresses the series coefficients of non-integer iterates in terms of the series coefficients of integer iterates.
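As a sanity check of Woon's expansion, the following sketch (my own; the linear map \( f(x) = x/2 \) and the choice \( w = 0.6 \) are illustrative assumptions) compares the truncated double sum against the known closed form \( f^{[t]}(x) = 2^{-t}x \):

```python
def binom(t, n):
    """Generalized binomial coefficient C(t, n) for real t."""
    result = 1.0
    for k in range(n):
        result *= (t - k) / (k + 1)
    return result

def woon(f, t, x, w, terms):
    """Truncated double-binomial expansion:
    w^t * sum_n C(t,n) sum_m C(n,m) (-1)^(n-m) w^(-m) f^[m](x)."""
    iterates = [x]                      # precompute integer iterates f^[m](x)
    for _ in range(terms):
        iterates.append(f(iterates[-1]))
    total = 0.0
    for n in range(terms + 1):
        inner = sum(binom(n, m) * (-1.0) ** (n - m) * w ** (-m) * iterates[m]
                    for m in range(n + 1))
        total += binom(t, n) * inner
    return w ** t * total

f = lambda x: x / 2                     # f^[t](x) = x / 2^t, known exactly
approx = woon(f, 0.5, 1.0, w=0.6, terms=30)
print(approx, 2 ** -0.5)                # the two values agree closely
```

For this linear f the inner sum collapses to \( (f/w - 1)^n x = (-1/6)^n x \), so the outer series converges rapidly; for nonlinear f the convergence is, as noted above, not well understood.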

This brings us full circle, since the last factor in the above expression is exactly the definition of an IDM entry. These three methods are closely interconnected, and as such, are all tools of parabolic iteration.
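To see the interconnection concretely, the following sketch (my own illustration, again with the sample function \( f(x) = x + x^2 \)) evaluates the double-binomial expansion, including the sign factor \( (-1)^{n-k-1} \), from the IDM entries of integer iterates, and checks it against direct composition:

```python
N = 5  # truncation order: coefficients of x^0 .. x^4

def binom(t, n):
    """Generalized binomial coefficient C(t, n) for real t."""
    result = 1.0
    for k in range(n):
        result *= (t - k) / (k + 1)
    return result

def compose(f, g):
    """Series of f(g(x)) truncated to order N - 1; assumes g[0] == 0."""
    result = [0.0] * N
    power = [1.0] + [0.0] * (N - 1)
    for fk in f:
        result = [r + fk * p for r, p in zip(result, power)]
        power = [sum(power[i] * g[k - i] for i in range(k + 1)) for k in range(N)]
    return result

f = [0.0, 1.0, 1.0, 0.0, 0.0]           # f(x) = x + x^2

# rows[k][n] = (f^[k])^(n)(0)/n!, i.e. the IDM entries of integer iterates.
rows = [[0.0, 1.0, 0.0, 0.0, 0.0]]
for _ in range(N - 1):
    rows.append(compose(f, rows[-1]))

def iterate_coeffs(t):
    """Series coefficients of f^[t] from the double-binomial expansion:
    coeff of x^n = sum_k (-1)^(n-k-1) C(t,k) C(t-k-1, n-k-1) rows[k][n]."""
    out = [0.0] * N
    for n in range(1, N):
        out[n] = sum((-1.0) ** (n - k - 1) * binom(t, k)
                     * binom(t - k - 1, n - k - 1) * rows[k][n]
                     for k in range(n))
    return out

print(iterate_coeffs(3.0))   # matches rows[3], the third iterate, up to rounding
print(iterate_coeffs(0.5))   # coefficients of the half-iterate f^[1/2]
```

At integer t the expansion reproduces the directly composed iterates, and the half-iterate it produces self-composes back to f, in agreement with the IDM and PDM methods above.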

References

R. Aldrovandi, Special Matrices of Mathematical Physics, World Scientific, 2001.

R. Aldrovandi and L.P. Freitas, Continuous Iteration of Dynamical Maps, Journal of Math. Phys. 39, 5324, 1998.

E. T. Bell, The Iterated Exponential Integers, The Annals of Mathematics, 2nd Ser., Vol. 39, No. 3 (Jul., 1938), pp. 539-557.

T. Carleman, Acta Mathematica 59, 63, 1932.

D. Geisler, http://Tetration.org, (accessed: 2006).

H. v. Koch, Bil. t. Sv. Vet. Ak. Hand. 1, Math. 25, mém. no. 5, pp. 1-24, (1900).

H. Trappmann, Arborescent Numbers: Higher Arithmetic Operations and Division Trees, http://blafoo.de/tree-aoc/main1176.pdf, (accessed: 2006).

H. Trappmann and I. Dahl, Tetration extended to real exponents, sci.math.research, http://mathforum.org/kb/message.jspa?mes...0&tstart=0, (2006).

P. Gralewicz and K. Kowalski, Continuous time evolution from iterated maps and Carleman linearization, http://arxiv.org/abs/math-ph/0002044.

H. S. Wilf, Generatingfunctionology, Academic Press, (1990).

S.C. Woon, Analytic Continuation of Operators -- Operators acting complex s-times, http://arxiv.org/abs/hep-th/9707206.