The ultimate sanity check

Gottfried (Ultimate Fellow) - 07/15/2022, 03:01 PM (last modified 07/15/2022, 03:03 PM)

(07/15/2022, 11:22 AM) Daniel Wrote: Gottfried, do you have any feel for which techniques discussed and developed here give equivalent results, at least for the sin function?

Well, Daniel, I never did this conclusively. But I had some identities based on analysis of the involved series. I've done what came up sometimes and put it on my webspace, partially with a - perhaps - ridiculously low level of mathematical rigor (and I knew that, so I didn't make a big noise about it). Those analyses ran mostly towards comparing some method with diagonalization, which over the years seemed to me the most complete formalism for expressing our iterations.

Keywords may be:
- The interpolation ansatz I found and christened "exponential polynomial interpolation" turns out to be identical with diagonalization (this includes the ansatz using q-binomials tried by Vladimir Reshetnikov on MO, which we talked about recently).
- I found that Newton composition on a series of integer iterates is in the end convertible into diagonalization; only the order of approximation changes, so convergence issues may be matched better or worse by one method or the other.
- I tried to establish compatibility of Andrew's ansatz (= Peter Walker's 2nd method) with diagonalization but couldn't make it match. Instead, the evaluations via Andrew's ansatz and via diagonalization seem to run into systematically different approximations, but I could not make out the exact point where the difference occurs - or whether there is perhaps only one missing link, one missing term in some series.
Related to that ansatz of Andrew's was one of mine using composition of the series of integer iterates (I called this "AIS", for "alternating iteration series", because when the iteration series is alternating it is easier to apply summation procedures to it than when it does not use alternating signs). (I don't have the workouts about Andrew's method and my AIS experiments on the webspace, but I think the main aspects are available here in forum contributions.) Very basic ideas on diagonalization (which turned out in the end to be equivalent to the Schroeder mechanism) are in the small essay "Continuous functional iteration", and the most complete description of the set of coefficients in the diagonalization is surely "Eigensystem decomposition".

Just for the moment; possibly I can say more later/tomorrow - Gottfried

Gottfried Helms, Kassel

Gottfried - 07/15/2022, 04:13 PM

(07/15/2022, 02:20 PM) MphLee Wrote: (...) Trying to read Gottfried... doesn't the problem lie in going from finite approximations to infinite "divergent summation techniques"? Maybe that is where we lack formal proofs that those semigroup identities hold?

Hi MphLee - the problem of finite matrices / power series where infinite ones are required is really a point. There are two aspects:

1) Can we approximate from truncated series? For convergent series this is common practice; for specific types of divergence, for instance the alternating geometric series, it is common as well - especially if we know the "form" of the "general term" (as L. Euler coined it) that the matrix/power series would have if it were infinite. But with our exponential function it has been proven that fractional iterates produce such badly diverging series that there is much area to explore here... Well, for instance for the strongly divergent alternating series of factorials, Euler found a closed form which sometimes allows one to handle it.
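Euler's closed form mentioned here is the Borel-type integral $\sum_{n\ge 0}(-1)^n n! \to \int_0^\infty \frac{e^{-t}}{1+t}\,dt \approx 0.5963$. A small Python sketch contrasting the wildly oscillating partial sums with that integral value (plain Simpson quadrature, nothing library-specific assumed):

```python
from math import exp, factorial

# Partial sums of the divergent series sum_{n>=0} (-1)^n n!
partial = [sum((-1) ** k * factorial(k) for k in range(n + 1)) for n in range(10)]
print(partial)  # oscillates ever more wildly: 1, 0, 2, -4, 20, -100, 620, ...

def borel_sum(upper=60.0, steps=60_000):
    """Euler's closed form: the Borel sum  integral_0^inf e^(-t)/(1+t) dt,
    evaluated by composite Simpson's rule. The integrand decays like e^(-t),
    so truncating at t = 60 loses less than ~1e-26."""
    h = upper / steps
    f = lambda t: exp(-t) / (1.0 + t)
    s = f(0.0) + f(upper)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

print(borel_sum())  # ~ 0.5963473623...
```

The partial sums never settle, yet the Borel integral assigns the series a single stable value, which is exactly the sense in which Euler could "handle" it.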
But the coefficients in the fractional powers of the Carleman matrix for $e^x-1$ diverge even more... so any summation method which extrapolates a limit from a finite number of terms must come with a very diplomatic exposition of the conjectured values ;-) But if, on inserting such an approximated value into the series again, a reasonable value comes out, then we have at least an argument that our approximated value is not completely off the road... and it may be published (with all cautions).

2) The second aspect is much more difficult, and I rarely have a handle on it: if the Carleman matrix is not triangular but square, then it might be absolute crap to try to extrapolate from finite sizes to the unknown infinite-size case. It seems, for instance, that Andrew's ansatz and his extrapolation to matrices of infinite size are systematically off by some tiny difference (Peter Walker also considered this problem in his article). (I can say more later, I just have to interrupt this.)

Gottfried

MphLee (Long Time Fellow) - 07/15/2022, 04:21 PM

Ok Gottfried, thank you for your time, I'll go back to reading Walker. I remember when you and Sheldon were discussing the paper via Zoom. I'll get back to that.
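The half iterate of $e^x-1$ discussed above can be made concrete: at its parabolic fixed point 0 (where $f_1=1$) the formal half iterate is still computable order by order directly from $g\circ g=f$, even though the full series ultimately diverges. A minimal Python sketch under that setup; the helper names (`mul`, `powers`, `compose`) and the truncation order are my own choices:

```python
from math import factorial

N = 12  # truncation order: series are coefficient lists [c0, c1, ..., c_{N-1}]

def mul(a, b):
    """Cauchy product of two truncated power series."""
    c = [0.0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i + j] += ai * bj
    return c

def powers(s):
    """p[m][n] = n-th coefficient of s(x)^m, for m = 0..N-1."""
    p = [[0.0] * N for _ in range(N)]
    p[0][0] = 1.0
    for m in range(1, N):
        p[m] = mul(p[m - 1], s)
    return p

def compose(a, b):
    """(a o b)_n = sum_{m<=n} a_m {b^m}_n, a finite sum because b0 = 0."""
    bp = powers(b)
    return [sum(a[m] * bp[m][n] for m in range(n + 1)) for n in range(N)]

# f(x) = e^x - 1:  f_n = 1/n! for n >= 1
f = [0.0] + [1.0 / factorial(n) for n in range(1, N)]

# Solve g o g = f order by order; with g1 = 1 (parabolic case) the n-th
# order condition reads  f_n = 2 g_n + sum_{m=2}^{n-1} g_m {g^m}_n .
g = [0.0] * N
g[1] = 1.0
for n in range(2, N):
    gp = powers(g)  # entries used below depend only on g[1..n-1]
    g[n] = (f[n] - sum(g[m] * gp[m][n] for m in range(2, n))) / 2.0

print(g[:6])  # starts x + x^2/4 + x^3/48 + 0*x^4 + ...
assert all(abs(a - b) < 1e-12 for a, b in zip(compose(g, g), f))
```

Each truncated coefficient is exact rational arithmetic in disguise; the divergence only shows up in the growth of the coefficients at much higher orders, which is precisely why any summation of the full series needs the "diplomatic exposition" mentioned above.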
bo198214 (Administrator) - 07/16/2022, 01:41 PM

I am copying a bit from my (rather notepad) article https://www.researchgate.net/profile/Dmi...ration.pdf to have some preliminaries about formal power series calculation, going from addition to multiplication to composition:
\begin{align}
  \left(\sum_{n=0}^\infty f_{n} x^n\right) + \left(\sum_{n=0}^\infty g_n x^n\right) &= \sum_{n=0}^\infty (f_{n}+g_{n})x^n & (f+g)_n &= f_{n} + g_{n}\\
  \left(\sum_{n_1=0}^\infty f_{n_1} x^{n_1}\right)\left(\sum_{n_2=0}^\infty g_{n_2} x^{n_2}\right) &= \sum_{n_1,n_2=0}^\infty f_{n_1} g_{n_2} x^{n_1+n_2} & (fg)_n &= \sum_{n_1+n_2=n} f_{n_1} g_{n_2}
\end{align}
When taking powers, note that ${f^m}_{n}$ means the $n$-th coefficient of the $m$-th power of $f$, while ${f_{n}}^m$ means the $m$-th power of the $n$-th coefficient of $f$. Deriving from the multiplication identity we get
\begin{align}
  {f^m}_{n} = \sum_{n_1+\dots+n_m=n} f_{n_1}\dotsm f_{n_m} = \sum_{\substack{m_1+2m_2+\dots+nm_n = n\\m_0+\dots+m_n=m}} \frac{m!}{m_0!\dotsm m_n!}{f_{0}}^{m_0}\dotsm {f_{n}}^{m_n}
\end{align}
This enables a formula for composition
\begin{align*}
  f(g(x)) &= \sum_{m=0}^\infty f_{m} g(x)^m & (f\circ g)_{n} &= \sum_{m=0}^\infty f_{m} {g^m}_n
\end{align*}
which may however be problematic, as we don't know about the convergence of each coefficient. A minor constraint on $g$ however makes the coefficients finite expressions: if $g_{0}=0$ then ${g^m}_{n} = 0$ for $n < m$, so $(f\circ g)_n = \sum_{m=0}^{n} f_m {g^m}_n$ is a finite sum. Solving $g\circ g = f$ coefficient by coefficient (with $g_0=0$ and $g_1=\sqrt{f_1}$) then gives
\begin{align}
  g_n &= \frac{1}{g_1 + {g_1}^n}\left(f_n - \sum_{m=2}^{n-1} g_m {g^m}_n\right), & n>1
\end{align}
But now compare that to
\begin{align}
  g_n &= \frac{1}{{f_1}^n - f_1}\left(f_n {g_1}^n - g_1 f_n + \sum_{m=2}^{n-1} f_m {g^m}_n - g_m {f^m}_n\right), & n>1
\end{align}
and show that they are equal (for \(g_1=\sqrt{f_1}\))! This seems ultra hard.
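Showing the identity "by hand" is hard, but it can at least be verified numerically to any truncation order: compute $g$ once from the $g\circ g=f$ recursion and once from the second formula, for a test series with $f_0=0$ and $f_1\neq 0,1$. A sketch; the helper names and the test series $f(x)=2x+x^2$ are my own choices:

```python
N = 10  # truncation order

def mul(a, b):
    """Cauchy product of two truncated power series."""
    c = [0.0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i + j] += ai * bj
    return c

def powers(s):
    """p[m][n] = {s^m}_n, the n-th coefficient of the m-th power of s."""
    p = [[0.0] * N for _ in range(N)]
    p[0][0] = 1.0
    for m in range(1, N):
        p[m] = mul(p[m - 1], s)
    return p

f = [0.0, 2.0, 1.0] + [0.0] * (N - 3)  # f(x) = 2x + x^2, so f1 != 0, 1

# Formula 1: solve g o g = f directly, with g1 = sqrt(f1)
g_a = [0.0] * N
g_a[1] = f[1] ** 0.5
for n in range(2, N):
    gp = powers(g_a)  # entries used depend only on g_a[1..n-1]
    g_a[n] = (f[n] - sum(g_a[m] * gp[m][n] for m in range(2, n))) \
             / (g_a[1] + g_a[1] ** n)

# Formula 2: the recursion with denominator f1^n - f1
g_b = [0.0] * N
g_b[1] = f[1] ** 0.5
fp = powers(f)
for n in range(2, N):
    gp = powers(g_b)
    acc = sum(f[m] * gp[m][n] - g_b[m] * fp[m][n] for m in range(2, n))
    g_b[n] = (f[n] * g_b[1] ** n - g_b[1] * f[n] + acc) / (f[1] ** n - f[1])

# Both recursions produce the same coefficients (up to rounding)
assert all(abs(a - b) < 1e-9 for a, b in zip(g_a, g_b))
print(g_a[:4])
```

Of course a numeric check at finite order is no proof, but it pins down that any discrepancy would have to hide beyond the truncation, not in the low-order coefficients.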
Luckily we don't need to do it "by hand": we know by the previous considerations that $f^{\mathcal{R}\frac{1}{2}}\circ f^{\mathcal{R}\frac{1}{2}}=f$, and as the power series is determined by $g\circ g=f$, we know that both must be equal. Also it seems nearly impossible to prove $f^{\mathcal{R}(s+t)} = f^{\mathcal{R}s}\circ f^{\mathcal{R}t}$ by arithmetical transformations.

bo198214 (Administrator) - 07/17/2022, 10:08 AM

(07/15/2022, 02:37 AM) Daniel Wrote: My second concern is that proofs based on Kneser's paper may well begin satisfying $f^{a+b}(z)=f^a(f^b(z))$, but I question whether the identity survives the mapping to the unit circle and then into the real line.

Yes, it survives the mapping. I made a sequence of posts explaining the Kneser method; you would find the explanation here. The essential thing is that the region $L$ that is mapped consists of repeating stripes and hence $L+c=L$. Hence the mapping $\gamma=\beta\circ\alpha$ to the upper half plane can be chosen such that $\gamma(z+c)=\gamma(z)+1$, which in turn makes the "pre-Abel" function $\psi$, satisfying $\psi(\exp(z))=\psi(z)+c$, into an Abel function $\Psi=\gamma\circ\psi$.
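Although a proof by arithmetical transformations seems out of reach, the identity $f^{\mathcal{R}s}\circ f^{\mathcal{R}t}=f^{\mathcal{R}(s+t)}$ can be checked numerically to any truncation order: the regular fractional iterate follows the same recursion as the half iterate, with multiplier ${f_1}^s$ in place of $\sqrt{f_1}$. A sketch under those assumptions (test series $f(x)=2x+x^2$, helper names my own):

```python
N = 10  # truncation order

def mul(a, b):
    """Cauchy product of two truncated power series."""
    c = [0.0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i + j] += ai * bj
    return c

def powers(s):
    """p[m][n] = {s^m}_n."""
    p = [[0.0] * N for _ in range(N)]
    p[0][0] = 1.0
    for m in range(1, N):
        p[m] = mul(p[m - 1], s)
    return p

def compose(a, b):
    """(a o b)_n as a finite sum, valid since b0 = 0."""
    bp = powers(b)
    return [sum(a[m] * bp[m][n] for m in range(n + 1)) for n in range(N)]

def reg_iterate(f, s):
    """Regular fractional iterate f^{R s} at the fixed point 0:
    the series commuting with f whose linear coefficient is f1^s."""
    fp = powers(f)
    h = [0.0] * N
    h[1] = f[1] ** s
    for n in range(2, N):
        hp = powers(h)  # entries used depend only on h[1..n-1]
        acc = sum(f[m] * hp[m][n] - h[m] * fp[m][n] for m in range(2, n))
        h[n] = (f[n] * h[1] ** n - h[1] * f[n] + acc) / (f[1] ** n - f[1])
    return h

f = [0.0, 2.0, 1.0] + [0.0] * (N - 3)  # test series f(x) = 2x + x^2

h_st = compose(reg_iterate(f, 0.3), reg_iterate(f, 0.7))
assert all(abs(x - y) < 1e-9 for x, y in zip(h_st, reg_iterate(f, 1.0)))
assert all(abs(x - y) < 1e-9 for x, y in zip(reg_iterate(f, 1.0), f))
```

The second assertion checks the sanity case $f^{\mathcal{R}1}=f$; the first checks the semigroup law for $s=0.3$, $t=0.7$ up to order $N$.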

