Abel Functional Equation
#1
Introduction

In this post we will talk about the Abel Functional Equation, and to some extent, the Schroeder Functional Equation and the Boettcher Functional Equation:

\( A(f(x)) = A(x) + 1 \)
\( S(f(x)) = c S(x) \)
\( B(f(x)) = B(x)^c \)
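For instance (an illustrative worked example): take \( f(x) = x^2 \) on \( x > 1 \); then each equation admits a closed-form solution:

\(
\begin{array}{lll}
A(x) = \log_2 \log x, & A(x^2) = \log_2(2\log x) = A(x) + 1 & \\
S(x) = \log x, & S(x^2) = 2\log x = 2\,S(x) & (c = 2) \\
B(x) = x, & B(x^2) = x^2 = B(x)^2 & (c = 2)
\end{array}
\)

and indeed \( S(x) = 2^{A(x)} \) and \( B(x) = e^{S(x)} \).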

Upon inspection we find that each is the exponential of the previous function. In other words: \( S(x) = c^{A(x)} \) and \( B(x) = b^{S(x)} \). So in theory solving any of these functional equations should produce the same results. In practice, however, different methods are used to solve each of these functional equations. As noted elsewhere in this forum, all of these work by the general principle of converting unknown iterates to known iterates, since the functions \( x \mapsto x+1 \), \( x \mapsto cx \) and \( x \mapsto x^c \) all have obvious continuous iterates. The general form of each of these functional equations (FE) is:

\( F \circ G = H \circ F \)

where G and H are known and F is the function being solved for. One thing to note about Schroeder's functional equation is that it is equivalent to using PDM/Bell/Carleman matrices and using matrix diagonalization to evaluate non-integer matrix powers. This can clearly be seen by taking the PDM of the equation:

\( \text{Pdm}[S]\text{Pdm}[f] = \text{Pdm}_x[cx]\text{Pdm}[S] \)
\( \text{Pdm}[f] = \text{Pdm}^{-1}[S]\text{Pdm}_x[cx]\text{Pdm}[S] \)
\( \text{Pdm}[f] = \mathbf{U}^{-1} \text{Pdm}_x[cx] \mathbf{U} \)

and since \( \text{Pdm}_x[cx] \) is a diagonal matrix, this is the diagonalization of \( \text{Pdm}[f] \). The first place I have found this relationship (between diagonalization and Schroeder's FE) is in the work of Aldrovandi on Bell (PDM) matrices. That is all we are going to say about Schroeder's FE; the rest of this post is about Abel's FE.
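As a minimal numerical sketch of this equivalence (assuming a function with a hyperbolic fixed point at 0; the example f(x) = x/2 + x^2/4 and the truncation size are arbitrary choices), one can build a truncated PDM, diagonalize it, and read a half-iterate off a fractional matrix power:

Code:
import numpy as np

N = 8                                   # truncation size (arbitrary)
f = np.zeros(N)
f[1], f[2] = 0.5, 0.25                  # f(x) = x/2 + x^2/4, so f(0) = 0, f'(0) = 1/2

# Truncated Carleman/Bell (PDM) matrix: M[j, k] = coefficient of x^k in f(x)^j
M = np.zeros((N, N))
p = np.zeros(N)
p[0] = 1.0                              # f(x)^0 = 1
for j in range(N):
    M[j, :] = p
    p = np.convolve(p, f)[:N]           # next power f(x)^(j+1), truncated to N terms

# Diagonalize M = V D V^(-1); a fractional power of D gives the PDM of a fractional iterate
w, V = np.linalg.eig(M)
def pdm_power(t):
    return (V @ np.diag(w.astype(complex) ** t) @ np.linalg.inv(V)).real

M_half = pdm_power(0.5)
print(np.allclose(M_half @ M_half, M))  # the half-iterate composed with itself recovers f
print(M_half[1, :4])                    # leading series coefficients of f^(1/2)(x)

The row with index 1 of a PDM holds the series coefficients of the function itself, which is why the half-iterate can be read off that row.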

Approximate Solutions to the Abel FE

If we start with the assumption that the Abel function has a series expansion, then we can define its coefficients as follows, along with an analogous expansion of f(x):

\( A(x) = \sum_{k=0}^{\infty} A_k (x-x_0)^k \)
\( f(x) = \sum_{k=0}^{\infty} f_k (x-x_0)^k \)

and when we form a sequence of the derivatives of the Abel Functional Equation, we get an infinite set of equations in an infinite number of variables:
\(
\begin{array}{rl}
A(f(x_0)) & = A(x_0) + 1 \\
A'(f(x_0))f'(x_0) & = A'(x_0) \\
A''(f(x_0))f'(x_0)^2 + A'(f(x_0))f''(x_0) & = A''(x_0) \\
\cdots & = \cdots
\end{array}
\)
The first thing to notice is that it is impossible to solve for A(x) when \( x_0 \) is a fixed point of f(x), since the first equation would then read \( A(x_0) = A(x_0) + 1 \). Keeping this in mind, we can continue to express these equations in terms of the coefficients in order to derive a more manageable form:
\(
\begin{array}{rl}
A(f(x)) - A(x) & = 1 \\
\frac{\partial^n}{\partial x^n}\left[ A(f(x)) - A(x) \right]_{x=x_0} & = \delta_{n0} \\
\frac{\partial^n}{\partial x^n}\left[ \sum_{k=0}^{\infty} A_k (f(x)-x_0)^k - \sum_{k=0}^{\infty} A_k (x-x_0)^k \right]_{x=x_0} & = \delta_{n0} \\
\sum_{k=0}^{\infty} A_k \frac{\partial^n}{\partial x^n}\left[ (f(x)-x_0)^k - (x-x_0)^k \right]_{x=x_0} & = \delta_{n0}
\end{array}
\)

At this point, it should be obvious that this is a matrix equation: \( \mathbf{a}\mathbf{R} = \mathbf{b} \) where the matrix elements of R are:
\( R_{k(n+1)}[f](x_0) = \frac{\partial^n}{\partial x^n}\left[ (f(x)-x_0)^k - (x-x_0)^k \right]_{x=x_0} \)
and a is a vector of A(x) coefficients, and b is the vector (1, 0, 0, ...). However, even in this form the matrix does not determine a unique solution, since \( A_0 \) plays no part in any equation; thus the point \( (x_0, A_0) \) must be given in order for the Abel function to be uniquely defined. Once this point is given, we can use, for example, equations (n=0..4) and coefficients (k=1..5), and have a 5-by-5 system of equations. Initially the definition of R seems very similar to a PDM, except that it involves a subtraction. After the shift that allows assigning \( A_0 \), there is even less resemblance between R and a PDM, but that does not mean we cannot derive a meaningful relationship:
\(
\mathbf{R}[f](x_0) = \mathbf{J}(\text{Pdm}[f](x_0) - \mathbf{I})
\)
where \( J_{jk} = k! \delta_{(j+1)k} \). This matrix is the core of my solution to the Abel equation, and of the super-logarithm. Now for some notation; since we will be using this matrix later, we define two notations:
  • \( \mathbf{R}[f](x_0)_n \) means the truncated matrix of R with elements \( R_{kj} \) where \( 1 \le j,k \le n \).
  • \( \mathbf{R}[f](x_0)_{nm} \) means the Cramer's rule matrix of R where \( R_{kj}[f](x_0)_{nm} = R_{kj}[f](x_0)(1 - \delta_{mk}) + \delta_{j1}\delta_{mk} \).

These definitions allow us to express the solution to a single coefficient of the infinite matrix equation (if the limit exists) through the standard Cramer's Rule as:

\(
A_k = \lim_{n \rightarrow \infty} \frac{\mathbf{R}[f](x_0)_{nk}}{\mathbf{R}[f](x_0)_{n0}}
\)
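As an illustration, here is a minimal sketch of this procedure in code (solving the truncated system with a linear solver, which amounts to the same Cramer's-rule quotient; f(x) = e^x at \( x_0 = 0 \) is used as a concrete instance, anticipating the next section, and the cutoff m is an arbitrary choice):

Code:
import numpy as np
from math import factorial

# For f(x) = e^x at x0 = 0:  (f(x) - x0)^k = e^(kx), so the n-th derivative at 0
# of (f(x) - x0)^k - (x - x0)^k is  k^n - n! [n == k].
m = 12                                  # truncation size (arbitrary)
R = np.array([[float(k) ** n - (factorial(n) if n == k else 0.0)
               for k in range(1, m + 1)]   # unknowns A_1 .. A_m
              for n in range(m)])          # equations n = 0 .. m-1
b = np.zeros(m)
b[0] = 1.0                              # right-hand side (1, 0, 0, ...)
A = np.linalg.solve(R, b)               # approximate coefficients A_1 .. A_m
print(A[:4])                            # A_0 remains the free constant discussed above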

Possibly Exact Solutions to the Abel FE for \( e^x \)

In this section we will consider the special case of the Abel function for exp(x), in other words, the natural super-logarithm. For this special case to be approachable, we will need a theorem concerning the conversion between two kinds of series. Throughout, we will use the term Puiseux series for series of the form:

\(
f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} \log(x)^k (f \circ \exp)^{(k)}(0)
\)

Using this definition of a Puiseux series, we will derive a relationship between common Taylor series (about x=1) and Puiseux series (about log(x)=0). We start with the generating function that defines the Stirling numbers of the second kind:
\(
e^{\left( e^x - 1 \right)}
= \sum^{\infty}_{k=0} \frac{\left( e^x - 1 \right)^k}{k!}
= \sum^{\infty}_{k=0} \sum^{\infty}_{n=0} \frac{x^n}{n!} \left\{{n \atop k}\right\}
\)

Theorem:
For any function analytic about x=1:
\(
f(e^x) = \sum^{\infty}_{n=0} \frac{x^n}{n!} \sum^{n}_{k=0} \left\{{n \atop k}\right\} f^{(k)}(1)
\)
where \( \left\{{n \atop k}\right\} \) are Stirling numbers of the second kind.
Proof:
The double generating function of Stirling numbers of the second kind is well known [Generatingfunctionology].
The Taylor series expansion of f(x) about (x=1) is just:
\( f(x) = \sum^{\infty}_{k=0} f^{(k)}(1) \frac{\left( x - 1 \right)^k}{k!} \)
Expanding \( f(e^x) \) accordingly gives:
\( f(e^x) = \sum^{\infty}_{k=0} f^{(k)}(1) \frac{(e^x-1)^k}{k!} \)
and \( \frac{\left( e^x - 1 \right)^k}{k!} \) is the generating function for \( \left\{{n \atop k}\right\} \), so:
\( f(e^x) = \sum^{\infty}_{k=0} f^{(k)}(1) \sum^{\infty}_{n=0} \frac{x^n}{n!} \left\{{n \atop k}\right\} \)
by switching the order of summation, the theorem is obtained.

Corollary:
Given a Taylor series of f(x) about (x=1), the associated Puiseux series is:
\(
\begin{array}{|c|}
\hline \\
f(x) = \sum^{\infty}_{n=0} \frac{\log(x)^n}{n!} \sum^{n}_{k=0} \left\{{n \atop k}\right\} f^{(k)}(1) \\
\hline
\end{array}
\)
Proof: Trivial, substitute \( x \rightarrow \log(x) \) in the theorem.

Corollary:
Given a Puiseux series of f(x):
\( f(x) = \sum^{\infty}_{n=0} \frac{\log(x)^n}{n!} (f \circ \exp)^{(n)}(0) \)
the associated Taylor series is:
\(
\begin{array}{|c|}
\hline \\
f(x) = \sum^{\infty}_{n=0} \frac{(x - 1)^n}{n!} \sum^{n}_{k=0} \left[{n \atop k}\right] (f \circ \exp)^{(k)}(0) \\
\hline
\end{array}
\)
Proof: Trivial; the Stirling numbers of the first kind \( \left[{n \atop k}\right] \) are (up to sign) the matrix inverse of the Stirling numbers of the second kind \( \left\{{n \atop k}\right\} \).
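As a quick sanity check of the theorem (using f(x) = x^2, for which f(1) = 1, f'(1) = 2, f''(1) = 2 and all higher derivatives vanish, so that \( f(e^x) = e^{2x} \) and n! times the n-th Taylor coefficient must equal \( 2^n \)):

Code:
from sympy.functions.combinatorial.numbers import stirling

derivs = [1, 2, 2]                      # f^(k)(1) for f(x) = x^2, k = 0, 1, 2
for n in range(6):
    lhs = 2 ** n                        # n! times the x^n coefficient of e^(2x)
    rhs = sum(stirling(n, k) * derivs[k] for k in range(min(n, 2) + 1))
    print(n, lhs, rhs, lhs == rhs)      # prints True for every n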

What this allows us to do is look at the Abel FE in a new light. Using the Taylor-Puiseux conversion detailed above, we can rewrite the Abel FE of exp(x) as:
\(
\begin{array}{rl}
A(e^x) & = A(x) + 1 \\
\sum^{\infty}_{n=0} \frac{x^n}{n!} \sum^{n}_{k=0} \left\{{n \atop k}\right\} A^{(k)}(1) & = \sum^{\infty}_{n=0} \frac{(x - 1)^n}{n!} A^{(n)}(1) + 1 \\
\sum^{\infty}_{n=0} \frac{(x-1)^n}{n!} \sum_{k=n}^{\infty} \left({k \atop n}\right) \sum_{j=0}^k \left\{{k \atop j}\right\} A^{(j)}(1) & = \sum^{\infty}_{n=0} \frac{(x - 1)^n}{n!} A^{(n)}(1) + 1
\end{array}
\)
which after replacing the derivatives with \( A_k \) notation (with \( x_0=1 \)), leads us to the recurrence equation:
\(
A_n = \sum_{k=n}^{\infty} \left({k \atop n}\right) \sum_{j=0}^k \left\{{k \atop j}\right\} A_j
\)
Whether this is useful or not remains to be seen. Since this is original research, I don't yet know about the convergence of this form, but from initial investigations I have found it to be quite unpredictable, and not very convergent at all. Perhaps the inverse of this equation might have better convergence properties, since this form seems to diverge so badly. I just wanted to put it out there to show that I have been thinking since the last time I updated my website. :)

Andrew Robbins
http://tetration.itgo.com
#2
andydude Wrote: leads us to the recurrence equation:
\(
A_n = \sum_{k=n}^{\infty} \left({k \atop n}\right) \sum_{j=0}^k \left\{{k \atop j}\right\} A_j
\)

This isn't exactly a recurrence relation, as terms with \( j\ge n \) appear on the right-hand side. It's rather the beloved infinite linear equation system, isn't it?
#3
I sincerely apologize; I need to fix two formulas in my post above.

The first is based on a transpose of the formula I actually gave, and I forgot to change the order of matrix multiplication when I took the transpose:

\(
\mathbf{R}[f](x_0) = (\text{Pdm}[f](x_0) - \mathbf{I})\mathbf{J}
\)

Furthermore, I could be mistaken, and this formula may only apply for \( x_0 = 0 \). The second formula is actually the determinant of the Cramer's Rule matrix divided by the determinant of the truncated matrix; I forgot about the determinants when I wrote it above:

\(
A_k = \lim_{n \rightarrow \infty} \frac{\det(\mathbf{R}[f](x_0)_{nk})}{\det(\mathbf{R}[f](x_0)_{n0})}
\)

That's all. Sorry for the mistakes.

Regarding the comment about the last equation: yes, it is basically restating the infinite system of equations in a different form. I was hoping that by changing it from an iteration/power/derivative form to a combinatorial equation, some properties could be found that would be helpful, but so far I have found nothing to be gained by putting it in combinatorial form.

Andrew Robbins
#4
Oh, and the combinatorial equation is off by some factorials;
sorry, I forgot to include them when substituting \( A_k = A^{(k)}(1)/k! \):

\(
A_n = \sum_{k=n}^{\infty} \left({k \atop n}\right) \sum_{j=0}^k \left\{{k \atop j}\right\} \frac{j!}{k!} A_j
\)

Andrew Robbins
#5
andydude Wrote:
\(
A_n = \sum_{k=n}^{\infty} \left({k \atop n}\right) \sum_{j=0}^k \left\{{k \atop j}\right\} \frac{j!}{k!} A_j
\)

It sounds a bit like nit-picking, but if we solve this equation system in the natural way (as a limit of equation systems of increasing size, as you explained with Cramer's rule), do we then get the slog (as developed by you at 0) developed at the point 1?

My point is: infinite equation systems have several solutions. For your slog you used the most natural way of solving the equation system for a development at 0. Now you show that one can impose such an equation system at any development point \( x_0 \). The question is whether the resulting developments all belong to the same analytic function (which I strongly suspect).
#6
bo198214 Wrote: Now you show that one can impose such an equation system at any development point \( x_0 \). The question is whether the resulting developments all belong to the same analytic function (which I strongly suspect).

Yes, and I can show that IF the coefficients themselves converge, THEN the infinite series (based on the exact coefficients, of which we only have approximations) will converge for \( x_0 \le x \le b^{x_0} \). And yes, my initial solution for the super-logarithm \( \mathrm{slog}_b(x) \) was for \( x_0 = 0 \), and the combinatorial equation above is for \( x_0 = 1 \), but both should produce a single point of overlap, namely x=1.

Andrew Robbins
#7
It quite looks as if all coefficients are positive for \( 1<b\le\eta \) (and mixed for \( b>\eta \)). In this case one can surely use the strict increase of the coefficients plus a bound to show that the coefficients converge.

As far as I know, nothing has been published yet about the convergence of the series (which, as I wrote elsewhere, was already considered by Peter Walker, who stated that the problem of convergence is open).
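One can probe this sign pattern numerically with the truncated system from earlier in the thread (a sketch; the bases and the cutoff m are arbitrary choices, and \( \eta = e^{1/e} \approx 1.4447 \)):

Code:
import numpy as np
from math import factorial, log

def slog_coeffs(b, m=12):
    # Truncated Abel system for f(x) = b^x at x0 = 0: the n-th derivative
    # of (b^x)^k - x^k at 0 is  (k log b)^n - n! [n == k]
    R = np.array([[(k * log(b)) ** n - (factorial(n) if n == k else 0.0)
                   for k in range(1, m + 1)] for n in range(m)])
    rhs = np.zeros(m)
    rhs[0] = 1.0
    return np.linalg.solve(R, rhs)

print(np.sign(slog_coeffs(1.4)))        # base below eta: all positive?
print(np.sign(slog_coeffs(3.0)))        # base above eta: mixed signs?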
#8
andydude Wrote:
Introduction

In this post we will talk about the Abel Functional Equation, and to some extent, the Schroeder Functional Equation and the Boettcher Functional Equation:

\( A(f(x)) = A(x) + 1 \)
\( S(f(x)) = c S(x) \)
\( B(f(x)) = B(x)^c \)

Upon inspection we find that each is the exponential of the previous function. In other words: \( S(x) = c^{A(x)} \) and \( B(x) = b^{S(x)} \). So in theory solving any of these functional equations should produce the same results. In practice, however, different methods are used to solve each of these functional equations.

Solving these equations doesn't give the same results.
Schroeder Functional Equation \( S(f(x)) = c S(x) \) gives the most general solution to dynamical systems which have hyperbolic fixed points.

Abel Functional Equation \( A(f(x)) = A(x) + 1 \) results in the solution for parabolic fixed points and is used when the multiplier is a root of unity.

Boettcher Functional Equation \( B(f(x)) = B(x)^c \) works for superattracting fixed points. These different solutions end up being topologically different.
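For concreteness, each regime has a standard model map on which its equation is solvable in closed form (illustrative examples):

\(
\begin{array}{lll}
\text{hyperbolic: } f(x) = 2x, & f'(0) = 2, & S(x) = x \text{ solves } S(f(x)) = 2\,S(x) \\
\text{parabolic: } f(x) = \frac{x}{1-x}, & f'(0) = 1, & A(x) = -\frac{1}{x} \text{ solves } A(f(x)) = A(x) + 1 \\
\text{superattracting: } f(x) = x^2, & f'(0) = 0, & B(x) = x \text{ solves } B(f(x)) = B(x)^2
\end{array}
\)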
Daniel
#9
Daniel Wrote: Solving these equations doesn't give the same results.
Hm, perhaps not the same but in certain cases equivalent results.

There is a bijection between the solutions of the Schröder equation and the Abel equation by
\( S(x)=c^{A(x)} \) and \( A(x)=\log_c(S(x)) \)
As you can see:
\( S(f(x))=c^{A(f(x))}=c^{A(x)+1}=cc^{A(x)}=cS(x) \)
and vice versa.

Of course one has to be a bit careful about negative values of S(x).
#10
Daniel Wrote: Abel Functional Equation \( A(f(x)) = A(x) + 1 \) results in the solution for parabolic fixed points and is used when the multiplier is a root of unity.

You say that the Abel FE is appropriate for parabolic fixed points, and yet the infinite system of equations is unsolvable when the expansion point is a fixed point. How do you use the Abel FE at fixed points when that's exactly where it's unsolvable?

Andrew Robbins