Well, here's one. You know how a Taylor series is like an infinite-dimensional vector representing a function in the basis \( x^k \), and a Fourier series is like an infinite-dimensional vector representing a function in the basis \( e^{ikx} \)? Well, you could think of it not in terms of your function specifically, but in terms of these basis representations. This is also a more general approach, rather than a specific-function-motivated one. In fact, one could even extend Bell and Carleman matrices to such basis systems.
For Taylor Series:
\(
\begin{aligned}
f(x) &= \sum_{k=0}^{\infty} f_k x^k \\
f(x)^j &= \sum_{k=0}^{\infty} M[f]_{jk} x^k
\end{aligned}
\)
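As a concrete illustration of the Taylor case, here is a minimal numerical sketch (my own illustration, not from the original post; the function name `carleman` is hypothetical) that builds a truncated Carleman matrix from Taylor coefficients and checks that composition becomes matrix multiplication:

```python
import numpy as np

def carleman(coeffs, N):
    """Truncated N x N Carleman matrix of f: row j holds the first N
    Taylor coefficients of f(x)**j, so f(x)**j = sum_k M[j, k] * x**k.
    `coeffs` are the Taylor coefficients of f, lowest degree first."""
    M = np.zeros((N, N))
    p = np.zeros(N)
    p[0] = 1.0                     # f(x)**0 = 1
    for j in range(N):
        M[j, :] = p
        p = np.polynomial.polynomial.polymul(p, coeffs)[:N]  # next power of f
        p = np.pad(p, (0, N - len(p)))                       # truncate/pad to N
    return M

# When f(0) = 0 and g(0) = 0 the matrices are triangular and the
# truncated product is exact: M[f o g] = M[f] @ M[g].
M_f  = carleman([0.0, 1.0, 1.0], 6)   # f(x) = x + x^2
M_g  = carleman([0.0, 2.0], 6)        # g(x) = 2x
M_fg = carleman([0.0, 2.0, 4.0], 6)   # f(g(x)) = 2x + 4x^2
print(np.allclose(M_f @ M_g, M_fg))   # True: composition -> matrix product
```

The index convention matches the equations above, with row \( j \) expanding \( f(x)^j \), so composing \( f \circ g \) multiplies the matrices in the order \( M[f]\,M[g] \).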
For Fourier Series:
\(
\begin{aligned}
f(x) &= \sum_{k=0}^{\infty} g_k \exp(ikx) \\
\exp(ijf(x)) &= \sum_{k=0}^{\infty} \mathrm{FourierM}[f]_{jk} \exp(ikx)
\end{aligned}
\)
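A numerical sketch of one row of such a Fourier-side matrix (my own hypothetical example, not from the post): taking \( f(x) = x + \varepsilon\sin(x) \), so that \( \exp(ijf(x)) \) is \( 2\pi \)-periodic even though \( f \) itself is not, the rows can be read off with the FFT.

```python
import numpy as np

# Hypothetical sketch: one row of a Fourier-side Carleman-like matrix.
# f(x) = x + eps*sin(x); f(x) - x is 2π-periodic, so exp(i*j*f(x)) is too,
# and its Fourier coefficients form row j of the matrix.
N, eps = 256, 0.3
x = 2 * np.pi * np.arange(N) / N
f = x + eps * np.sin(x)

def fourier_row(j):
    """Coefficients c[k] with exp(i*j*f(x)) = sum_k c[k] * exp(i*k*x),
    indices taken mod N as usual for the FFT."""
    return np.fft.fft(np.exp(1j * j * f)) / N

row0 = fourier_row(0)   # exp(0) = 1: only the k = 0 coefficient survives
row1 = fourier_row(1)
# |exp(i*j*f)| = 1, so each row has unit l2 norm (Parseval)
print(np.isclose(np.sum(np.abs(row1)**2), 1.0))   # True
```

By the Jacobi–Anger expansion, the entries of `row1` here are Bessel-function values \( J_{k-1}(\varepsilon) \), which shows how quickly these rows can become non-elementary even for a simple \( f \).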
For Tetration Series:
\(
\begin{aligned}
f(x) &= \sum_{k=0}^{\infty} \mathrm{TetraM}[f]_{1k}\,({}^{k}x) \\
{}^{j}(f(x)) &= \sum_{k=0}^{\infty} \mathrm{TetraM}[f]_{jk}\,({}^{k}x)
\end{aligned}
\)
For Iterated Exponential Series:
\(
\begin{aligned}
f(x) &= c_0 + \sum_{k=0}^{\infty} \mathrm{IterExpM}[f]_{0k}\,\exp_b^{k}(x) \\
\exp_b^{j}(f(x)) &= c_1 + \sum_{k=0}^{\infty} \mathrm{IterExpM}[f]_{jk}\,\exp_b^{k}(x)
\end{aligned}
\)
All of these matrices convert function composition into matrix multiplication, or equivalently, function iteration into matrix powers. Although I haven't investigated what these alternate Carleman matrices look like, I'm sure they would be much more complicated than their Taylor counterparts, since there is no nice way of inverting them. For Taylor series, inverting, or "finding coefficients", is just a matter of taking derivatives; for Fourier series, "finding coefficients" is just a matter of taking a Cauchy integral. For the last two series, though, I can't think of any way of finding coefficients aside from doing a change of basis between these four basis systems.
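To make the "finding coefficients" remark concrete: for an analytic function the derivative form and the integral form coincide, since the Taylor coefficients are also given by the Cauchy integral \( f_k = \frac{1}{2\pi i}\oint f(z)\,z^{-k-1}\,dz \), which an FFT on a small circle evaluates accurately. A minimal sketch (my own illustration; `taylor_coeffs` is a hypothetical helper):

```python
import numpy as np

def taylor_coeffs(f, n, r=0.5, N=64):
    """Hypothetical helper: recover f_0 .. f_{n-1} of an analytic f via the
    Cauchy integral f_k = (1/2πi) ∮ f(z) / z**(k+1) dz, evaluated with the
    FFT on the circle |z| = r."""
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    c = np.fft.fft(f(z)) / N           # c[k] ≈ f_k * r**k, tiny aliasing error
    return c[:n] / r ** np.arange(n)

# exp(z) has Taylor coefficients 1/k!
print(np.allclose(taylor_coeffs(np.exp, 5).real, [1, 1, 1/2, 1/6, 1/24]))  # True
```

Nothing analogous is available for the tetration or iterated-exponential bases, which is exactly the change-of-basis difficulty described above.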
Andrew Robbins

