The ultimate sanity check
#21
(07/15/2022, 11:22 AM)Daniel Wrote: Gottfried, do you have any feel for which techniques discussed and developed here give equivalent results, at least for the sin function?

Well, Daniel, I never did this conclusively. But I had some identities based on analysis of the involved series. From time to time I wrote up what came out of it and put it on my webspace, partially with - perhaps - a ridiculously low level of mathematical rigor (and I knew that ... so I didn't make a big fuss about it).

Those analyses ran mostly towards comparing some method with diagonalization, which over the years has seemed to me the most complete formalism for expressing our iterations.
Keywords may be: the interpolation ansatz I found and christened "exponential polynomial interpolation", which turns out to be identical with diagonalization (this includes the ansatz using q-binomials tried by Vladimir Reshetnikov on MO, which we talked about recently); and the finding that Newton-composition on a series of integer iterates is in the end convertible into diagonalization, only the order of approximation is changed, so convergence issues might be matched better or worse by this or that method.

I tried to find compatibility of Andrew's ansatz (= Peter Walker's 2nd method) with diagonalization but couldn't make it match; instead it seemed that evaluations via Andrew's ansatz and via diagonalization run into systematically different approximations. I could not make out the exact point where the difference occurs, nor whether there is perhaps only one missing link, only one missing term in some series. Related to Andrew's ansatz was one of mine using composition of the series-of-integer-iterates (I called this "AIS", from "alternating iteration series", because if the series is alternating it is easier to apply summation procedures to it than when it does not use alternating signs). (I don't have the workouts about Andrew's method and my AIS experiments on the webspace, but I think the main aspects are available here in forum contributions.)

Very basic ideas on diagonalization (which finally turned out to be equivalent to the Schroeder mechanism) are in the small essay "continuous functional iteration", and the most complete description of the set of coefficients in the diagonalization is surely "Eigensystem decomposition".

Just this for the moment; possibly I can say more later/tomorrow -

Gottfried
Gottfried Helms, Kassel
#22
(07/15/2022, 02:20 PM)MphLee Wrote: (...)
Trying to read Gottfried... doesn't the problem lie in going from finite approximations to infinite "divergent summation techniques"? Maybe that is where we lack formal proofs that those semi-group identities hold?

 Hi MphLee -

the problem of finite matrices / power series, where infinite ones are required, is really a point.
There are two aspects:

1) Can we approximate from truncated series? For convergent series this is common practice; for specific types of divergence, for instance the alternating geometric series, this is also common. Especially if we know the "form" of the "general term" (as L. Euler coined it) that the matrix/power series would have if it were infinite.
But with our exponential function it has been proven that fractional iterates produce such badly diverging series that there is much area to explore here... Well, for instance, for the strongly divergent alternating series of factorials Euler found a closed form which sometimes allows one to handle it.
But the coefficients in the fractional powers of the Carleman matrix for \( e^x-1 \) diverge even more... so any summation method which extrapolates a limit from a finite number of terms must come with a very diplomatic exposition of the conjectured values... ;-)    But if, after inserting such an approximated value into the series again, a reasonable value comes out, then we have at least an argument that our approximated value is not completely off the road... and it may be published (with all cautions).

2) The second aspect is much more difficult, and I rarely have a handle on it: if the Carleman matrix is not triangular but square, then it might be absolute nonsense to try to extrapolate from finite sizes to the unknown infinite-size case. It seems, for instance, that Andrew's ansatz and his extrapolation to matrices of infinite size is systematically wrong by some tiny difference (Peter Walker also considered this problem in his article). (I can say more later, I just have to interrupt this.)


Gottfried
Gottfried Helms, Kassel
#23
Ok Gottfried, thank you for your time, I'll go back to reading Walker. I remember when you and Sheldon were discussing the paper via Zoom. I'll get back to that.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#24
I am copying a bit from my (rather notepad-style) article https://www.researchgate.net/profile/Dmi...ration.pdf to give some preliminaries about formal power series calculation, going from addition to multiplication to composition:

\begin{align}
  \left(\sum_{n=0}^\infty f_{n} x^n\right) + \left(\sum_{n=0}^\infty
    g_n x^n\right)&=\sum_{n=0}^\infty (f_{n}+g_{n})x^n & (f+g)_n &=
  f_{n} + g_{n}\\
  \left(\sum_{n_1=0}^\infty f_{n_1} x^{n_1}\right)
  \left(\sum_{n_2=0}^\infty g_{n_2} x^{n_2}\right)&=
  \sum_{n_1,n_2=0}^\infty f_{n_1} g_{n_2} x^{n_1+n_2} &
(fg)_n &= \sum_{n_1+n_2=n} f_{n_1} g_{n_2}
\end{align}
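These two coefficient rules are easy to play with numerically. Here is a minimal sketch in Python (not part of the original derivation; representing a truncated series as a plain coefficient list and the names `ps_add`, `ps_mul` are just ad hoc choices for illustration):

```python
# truncated formal power series represented as coefficient lists f[0..N]

def ps_add(f, g):
    # (f+g)_n = f_n + g_n
    return [a + b for a, b in zip(f, g)]

def ps_mul(f, g, N):
    # Cauchy product: (fg)_n = sum over n1+n2=n of f_{n1} g_{n2}
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

# tiny check with f = 1 + x and g = 1 + 2x + 3x^2, truncated at N = 2
f = [1, 1, 0]
g = [1, 2, 3]
print(ps_add(f, g))      # [2, 3, 3]
print(ps_mul(f, g, 2))   # [1, 3, 5], since (1+x)(1+2x+3x^2) = 1 + 3x + 5x^2 + ...
```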

When taking powers, note that \({f^m}_{n}\) means the \(n\)-th coefficient of the \(m\)-th power of \(f\), while \({f_{n}}^m\) means the \(m\)-th power of the \(n\)-th coefficient of \(f\). Deriving from
the multiplication identity we get 
\begin{align}
  {f^m}_{n} = \sum_{n_1+\dots+n_m=n} f_{n_1}\dotsm f_{n_m} =
  \sum_{\substack{m_1+2m_2+\dots+nm_n = n\\m_0+\dots+m_n=m}}
    \frac{m!}{m_0!\dotsm m_n!}{f_{0}}^{m_0}\dotsm {f_{n}}^{m_n}
\end{align}
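The first expression for \({f^m}_n\) (the sum over \(n_1+\dots+n_m=n\)) is just the Cauchy product applied \(m\) times, so repeated multiplication produces the same coefficients. A small brute-force comparison, again only an illustrative sketch (the `ps_mul` helper is repeated from the sketch above so the snippet runs on its own):

```python
from itertools import product

def ps_mul(f, g, N):
    # Cauchy product, as in the sketch above
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def ps_pow(f, m, N):
    # {f^m}_n for n = 0..N, by multiplying f onto the series 1 exactly m times
    p = [1] + [0] * N
    for _ in range(m):
        p = ps_mul(p, f, N)
    return p

# brute-force check of {f^m}_n = sum over n1+...+nm = n of f_{n1} ... f_{nm}
f, m, n = [1, 2, 3, 4], 3, 3
brute = sum(f[i] * f[j] * f[k]
            for i, j, k in product(range(n + 1), repeat=m)
            if i + j + k == n)
print(ps_pow(f, m, n)[n], brute)   # both print 56
```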
This enables a formula for composition
\begin{align*}
  f(g(x))&=\sum_{m=0}^\infty f_{m} g(x)^m &
  (f\circ g)_{n} &= \sum_{m=0}^\infty f_{m} {g^m}_n
\end{align*}
which may, however, be problematic, as we don't know about the convergence of each coefficient. A minor constraint on \(g\), however, makes the coefficients finite expressions.

If \(g_{0}=0\) then \({g^m}_{n} = 0\) for \(n<m\). Hence
\begin{align}
  (f\circ g)_{n} &= \sum_{m=0}^n f_{m} {g^m}_{n}, \quad g_0=0.
\end{align}
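And the finite composition rule as an illustrative sketch; the assertion `g[0] == 0` encodes the constraint \(g_0=0\), and the helpers are repeated from above so the snippet is self-contained:

```python
def ps_mul(f, g, N):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def ps_pow(f, m, N):
    p = [1] + [0] * N
    for _ in range(m):
        p = ps_mul(p, f, N)
    return p

def ps_compose(f, g, N):
    # (f o g)_n = sum_{m=0}^{n} f_m {g^m}_n, a finite sum because g_0 = 0
    assert g[0] == 0
    return [sum(f[m] * ps_pow(g, m, n)[n] for m in range(n + 1)) for n in range(N + 1)]

# example: f(x) = x + x^2, g(x) = 2x + x^3, so f(g(x)) = 2x + 4x^2 + x^3 + 4x^4 + ...
print(ps_compose([0, 1, 1, 0], [0, 2, 0, 1], 3))   # [0, 2, 4, 1]
```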

So much for the preliminaries about formal powerseries. Now to our real problem:

We want to define an iteration \(f^{\mathbb{R} t}\) (the \(\mathbb{R}\) is not a variable, but refers to the regular iteration) that satisfies:
\[f^{\mathbb{R} s}\circ f^{\mathbb{R} t} = f^{\mathbb{R} s+t}\]

For such an iteration the following must also hold:
\[f^{\mathbb{R}t}\circ f = f\circ f^{\mathbb{R} t}\]

This condition already determines the power series coefficients up to the first coefficient \({f^{\mathbb{R} t}}_1\): let \(g = f^{\mathbb{R} t}\); then we have the equation system (from the power series composition above and \(g_0 = f_0 = 0\))

\begin{align}
\sum_{m=1}^n f_{m} {g^m}_{n} &= \sum_{m=1}^n g_{m} {f^m}_{n}\\
\end{align}
and we pull out the parts that contain \(g_n\) and make a recursive formula out of it:
\begin{align}
f_1 g_n + f_n {g^n}_n + \sum_{m=2}^{n-1} f_{m} {g^m}_{n} &= g_1 f_n + g_n {f^n}_n+\sum_{m=2}^{n-1} g_{m} {f^m}_{n}\\
f_1 g_n + f_n {g_1}^n + \sum_{m=2}^{n-1} f_{m} {g^m}_{n} &= g_1 f_n + g_n {f_1}^n+\sum_{m=2}^{n-1} g_{m} {f^m}_{n}\\
g_n &= \frac{1}{{f_1}^n - f_1} \left(
    f_n {g_1}^n - g_1 f_n + \sum_{m=2}^{n-1} \left(f_m {g^m}_n - g_m
    {f^m}_n\right)\right)
\end{align}

This formula surely computes the same as what Mathematica calculates. (But always note these are purely formal power series; nothing is said here about their convergence. On the other hand they are in fact convergent, but that is proven elsewhere.)
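For readers who want to reproduce this without Mathematica, here is a sketch of the recursion in Python (illustrative only; it reuses the `ps_mul`/`ps_pow` helpers from the preliminaries above and assumes \(f_0=0\) and \(0<f_1\neq 1\), so the denominators \(f_1^n-f_1\) do not vanish):

```python
def ps_mul(f, g, N):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def ps_pow(f, m, N):
    p = [1] + [0] * N
    for _ in range(m):
        p = ps_mul(p, f, N)
    return p

def regular_iterate(f, t, N):
    # coefficients of f^{R t} up to degree N, assuming f[0] = 0 and 0 < f[1] != 1
    f1 = f[1]
    g = [0.0] * (N + 1)
    g[1] = f1 ** t
    for n in range(2, N + 1):
        s = f[n] * g[1] ** n - g[1] * f[n]
        for m in range(2, n):
            s += f[m] * ps_pow(g, m, n)[n] - g[m] * ps_pow(f, m, n)[n]
        g[n] = s / (f1 ** n - f1)
    return g

# sanity check: for t = 1 the recursion must reproduce f itself
f = [0, 2, 1, 0, 0]                    # f(x) = 2x + x^2
print(regular_iterate(f, 1.0, 4))      # [0.0, 2.0, 1.0, 0.0, 0.0]
```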

So up to now we see that the formal power series \(g\) is determined by the law \(g\circ f = f\circ g\) and its first coefficient \(g_1\).
So we can say that \(f^{\mathbb{R} s+t}\) is determined by this. But so is \(f^{\mathbb{R} s}\circ f^{\mathbb{R} t}\):
\[
(f^{\mathbb{R}s} \circ f^{\mathbb{R}t}) \circ f = f^{\mathbb{R}s} \circ f \circ f^{\mathbb{R} t} = f \circ (f^{\mathbb{R}s} \circ f^{\mathbb{R}t})
\]
As we set \({f^{\mathbb{R}t}}_1 = f_1^t\) and know that \((f\circ g)_1 = f_1g_1\), the first coefficients are equal:
\[{f^{\mathbb{R}s+t}}_1= f_1^{s+t} = f_1^sf_1^t = (f^{\mathbb{R}s}\circ f^{\mathbb{R}t})_1\] and hence the other coefficients are equal too, and we have our desired identity
\[f^{\mathbb{R} s+t } = f^{\mathbb{R} s}\circ f^{\mathbb{R}t}\]
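Since everything above is finite arithmetic on truncated series, the identity can also be observed numerically. A quick check, reusing `ps_compose` and `regular_iterate` from the sketches earlier in this post (the test series and the values of \(s,t\) are arbitrary choices):

```python
N = 6
f = [0, 2, 1, 0, 0, 0, 0]      # f(x) = 2x + x^2, truncated at degree 6
s, t = 0.3, 0.7
lhs = ps_compose(regular_iterate(f, s, N), regular_iterate(f, t, N), N)
rhs = regular_iterate(f, s + t, N)
print(max(abs(a - b) for a, b in zip(lhs, rhs)))   # essentially floating-point round-off
```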
#25
For local iterations, the regular iteration is precisely a Schroder iteration or an Ecalle iteration (excusing the super-attractive case). And this is precisely the breakdown of how Bell polynomials construct iterations. And your method, Daniel, is encapsulated within bo's post, and is the regular iteration method.

Thank you, bo. That was very precise. Beautiful exposition.
#26
I still remember those times when Tetration was in the air and people around the world (including Daniel and me) were trying to figure out what the half-iterate of a function would be.
This can also be solved as formal powerseries:
\begin{align}
g\circ g &= f\\
\sum_{m=1}^n g_m {g^m}_n &= f_n\\
g_1 g_n + g_n g_1^n + \sum_{m=2}^{n-1} g_m {g^m}_n &= f_n \\
g_1 g_1 &= f_1\\
g_n &= \frac{1}{g_1+g_1^n}\left(f_n - \sum_{m=2}^{n-1} g_m {g^m}_n\right),  n>1\\
\end{align}

But now compare that to
\begin{align}
g_n &= \frac{1}{{f_1}^n - f_1} \left(
    f_n {g_1}^n - g_1 f_n + \sum_{m=2}^{n-1} \left(f_m {g^m}_n - g_m{f^m}_n\right)\right), \quad n>1
\end{align}

and show that they are equal! (for \(g_1=\sqrt{f_1}\)) This seems ultra hard. 
Luckily we don't need to do it "by hand": by the previous considerations we know that \(f^{\mathbb{R}\frac{1}{2}}\circ f^{\mathbb{R}\frac{1}{2}}=f\), and as the power series is determined by \(g\circ g=f\) together with \(g_1=\sqrt{f_1}\), we know that both must be equal.
Also it seems nearly impossible to prove \(f^{\mathbb{R}s+t} = f^{\mathbb{R}s}\circ f^{\mathbb{R}t}\) by arithmetical transformations alone.
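While an algebraic proof of the equality looks hopeless, the numerical agreement is easy to observe. A self-contained sketch (helper definitions repeated from the earlier posts; the test series is an arbitrary choice) that builds the half-iterate from the \(g\circ g=f\) recursion and checks it by composing it with itself:

```python
import math

def ps_mul(f, g, N):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def ps_pow(f, m, N):
    p = [1] + [0] * N
    for _ in range(m):
        p = ps_mul(p, f, N)
    return p

def ps_compose(f, g, N):
    assert g[0] == 0
    return [sum(f[m] * ps_pow(g, m, n)[n] for m in range(n + 1)) for n in range(N + 1)]

def half_iterate(f, N):
    # g with g o g = f, g_0 = 0, g_1 = sqrt(f_1), via the recursion above
    g = [0.0] * (N + 1)
    g[1] = math.sqrt(f[1])
    for n in range(2, N + 1):
        s = sum(g[m] * ps_pow(g, m, n)[n] for m in range(2, n))
        g[n] = (f[n] - s) / (g[1] + g[1] ** n)
    return g

N = 6
f = [0, 2, 1, 0, 0, 0, 0]        # f(x) = 2x + x^2
g = half_iterate(f, N)
print(max(abs(a - b) for a, b in zip(ps_compose(g, g, N), f)))   # essentially zero (round-off)
```

The same coefficients should come out of `regular_iterate(f, 0.5, N)` from the sketch in post #24, which is exactly the equality argued above.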
#27
(07/15/2022, 02:37 AM)Daniel Wrote: My second concern is that proofs based on Kneser's paper may well begin satisfying \(f^{a+b}(z)=f^a(f^b(z))\), but I question whether the identity survives the mapping to the unit circle and then into the real line.

Yes, it survives the mapping. I made a sequence of posts explaining the Kneser method; you can find the explanation here. The essential thing is that the region \(L\) that is mapped consists of repeating stripes and hence \(L+c=L\). Hence the mapping \(\gamma=\beta\circ\alpha\) to the upper half plane can be chosen to satisfy \(\gamma(z+c)=\gamma(z)+1\), which in turn makes the "pre-Abel" function \(\psi\) (with \(\psi(\exp(z))=\psi(z)+c\)) into an Abel function \(\Psi=\gamma\circ\psi\).

