Ivars Wrote:Just wondering - sometimes jumping forward and looking back gives new perspective.
Since many of these are simply making the substitution \( x \to f(x) \), then yes, these are true; the question you are asking, though, is whether it gives new insights... I don't know. A nice part about treating numbers as functions in this way is the flexibility that power series give you. Power series allow you to put things together and take them apart in ways that \( f(x) \) does not allow. But I think most insights on this level will end up being related to some other well-known theorem. For example, S. C. Woon's expansion (here and here) depends only on applying the binomial theorem, twice.
I think that the scope of iteration theory is limited only by our understanding of power series, and the more we understand power series, the more things we can solve with continuous iteration.
As an example of an understanding of power series that would extend the scope of iteration theory, take this. Many people (like GFR) on this forum have noticed that we have entirely restricted ourselves to functions of one variable, and this is because almost all useful results in iteration theory are in terms of a function of one variable. But what if we chose not to make this restriction?
We would find that the function would have to be an endofunction to be iterated, meaning a function of the form \( f : A \to A \), and that the elements of the set A could be vectors if we really wanted them to be. And if you ask: "What is a vector derivative?" I would say the Jacobian matrix! And if you ask: "How do you square a vector?" I would say tensor multiplication! Using these concepts, the idea of a power series can be generalized to vector functions quite easily. The hard part, though, is trying to apply regular iteration to these functions, and generalizing the fixed points to eigenvectors or to the vector power series.
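To make the fixed-point/eigenvector idea concrete, here is a minimal sketch (my own illustration, not from the post): near a fixed point at the origin, a vector map is approximated by its Jacobian, and fractional iteration of the linearized map reduces to fractional matrix powers via the eigendecomposition. The matrix `A` below is an assumed example Jacobian.

```python
import numpy as np

# Assumed example: Jacobian of some map F at its fixed point 0.
# To first order, F(x) ~ A x, so a half-iterate of F corresponds
# to the matrix square root of A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Diagonalize A = P D P^{-1}; then A^{1/2} = P D^{1/2} P^{-1}.
# The eigenvalues play the role that the fixed-point multiplier
# plays in one-variable regular iteration.
eigvals, P = np.linalg.eig(A)
A_half = P @ np.diag(np.sqrt(eigvals)) @ np.linalg.inv(P)

# Composing the half-iterate with itself recovers the full map.
print(np.allclose(A_half @ A_half, A))  # True
```

The same recipe gives any real iterate \( A^t \) by raising the eigenvalues to the power \( t \), which is exactly the eigenvector generalization of regular iteration mentioned above.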
Using the Jacobian matrix and tensor multiplication, a vector function could be written as:
\(
\begin{tabular}{rl}
\mathbf{F}(\mathbf{x})
& = \mathbf{F}(\mathbf{0}) \\
& + (J_x\mathbf{F}(\mathbf{0}))\cdot\mathbf{x} \\
& + \frac{1}{2!}((J_xJ_x\mathbf{F}(\mathbf{0}))\cdot\mathbf{x})\cdot\mathbf{x} \\
& + \frac{1}{3!}
(((J_xJ_xJ_x\mathbf{F}(\mathbf{0}))\cdot\mathbf{x})\cdot\mathbf{x})\cdot\mathbf{x} \\
& + \cdots
\end{tabular}
\)
where
\(
\mathbf{F}(\mathbf{x}) =
\left[
\begin{tabular}{c}
f_1(x_1, x_2, x_3) \\
f_2(x_1, x_2, x_3) \\
f_3(x_1, x_2, x_3)
\end{tabular}
\right]
\)
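The first few terms of this series can be checked numerically. Below is a sketch (my own example, not from the post) using a concrete quadratic \( \mathbf{F} : \mathbb{R}^2 \to \mathbb{R}^2 \), so the series terminates after the second-order term: the constant term is \( \mathbf{F}(\mathbf{0}) \), the linear term contracts the Jacobian with \( \mathbf{x} \), and the quadratic term contracts the rank-3 tensor of second derivatives (the "\( J_xJ_x\mathbf{F}(\mathbf{0}) \)" above) with \( \mathbf{x} \) twice.

```python
import numpy as np

# Assumed example map: F(x1, x2) = (1 + 2*x1 + x1*x2, x2 + x1^2).
def F(x):
    x1, x2 = x
    return np.array([1 + 2*x1 + x1*x2, x2 + x1**2])

F0 = F(np.zeros(2))                     # constant term F(0) = [1, 0]
J = np.array([[2.0, 0.0],               # Jacobian at 0
              [0.0, 1.0]])
H = np.zeros((2, 2, 2))                 # second-derivative tensor at 0
H[0, 0, 1] = H[0, 1, 0] = 1.0           # d^2 f1 / dx1 dx2 = 1
H[1, 0, 0] = 2.0                        # d^2 f2 / dx1^2 = 2

x = np.array([0.5, -1.0])
# F(0) + J.x + (1/2!) (H.x).x  -- the double contraction via einsum
series = F0 + J @ x + 0.5 * np.einsum('ijk,j,k->i', H, x, x)
print(np.allclose(series, F(x)))        # True: the series is exact here
```

Each extra term would contract one more derivative index with one more copy of \( \mathbf{x} \), exactly as in the \( \frac{1}{3!} \) term of the series above.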
Andrew Robbins

