Posts: 281
Threads: 95
Joined: Aug 2007
07/13/2022, 07:23 PM
(This post was last modified: 07/13/2022, 08:31 PM by Daniel.)
(07/13/2022, 11:54 AM)bo198214 Wrote: Oh, Daniel, I really don't know where to start - there are so many implicit assumptions in your question.
Start with my first impression: somehow it reads like, "I want to sanity-check an algorithm that I found (after long research, encountering a lot of interesting mathematical side topics) that can compute the length of the third side of a right triangle." People would answer, "well, there is something called the theorem of Pythagoras, why not use that?!"
If you would only compare numerical results of this theorem with your own algorithm, you would miss a lot! First, a numerical "equality" is no proof. (You could easily be fooled, for example, by the half-iterates of \( \sqrt{2}^x \) developed at 2 and at 4, which coincide to many digits but in the end are essentially different functions - this has already been said several times on the forum.)
Second, an algorithm is not a proof. The theorem of Pythagoras comes with a proof that the algorithm will always yield the right answer.
It is a similar issue with the Schroeder iteration: there we have a proof of convergence for the different ways of computing it. We have a proof that the resulting function is analytic, etc.
You see how much richer that is than just having numerical coincidence? Of course, as a sanity check one should do some numerical comparisons.
And then there is another category, like the Kneser construction, which gives a proof of existence, or the Leau-Fatou construction, which proves existence and uniqueness. But both have no easily accessible means of numerical computation. (So the only sanity check is that many mathematicians looked at it and confirmed it, typically by peer-reviewed publication or by teaching it at university.)
With Tetration we are unfortunately mostly (except for regular iteration) in the situation of having some numerical methods with no proof of convergence, no proof of holomorphy (in case of convergence), and no clue about identity (which algorithms converge to the same function).
So what I want to say: it is essential to understand the well-known theory (e.g. Schröder iteration, Kneser construction, Leau-Fatou construction), if only to have a common basis for understanding each other. Otherwise we praise ourselves in our snail house, never having seen the world in comparison.
Wow, that was a disappointing response Bo. I ask a simple question and I get a bunch of random criticisms of me and my work. Good criticism is wonderful. Uninformed criticism sucks. If you want to know what mathematics you are criticizing, see Bell Polynomials of Iterated Functions. Yiannis supports my work and my paper. I submitted the paper to the Annals of Mathematics. They only publish one out of twenty papers they receive. Their one unresolved issue was that I didn't base my work on germs.
Now back to the original simple question I asked.
Daniel Wrote:My ultimate sanity test is to prove symbolically, using the Taylor series for \( f^n(z) \), that \( f^{a+b}(z)-f^{a}(f^{b}(z))=\mathcal{O}(z^k) \). So, no talk about algorithms or numerical simulations. This identity can be used to determine whether different techniques are equivalent. Also, it is always good to have more than one proof for a problem. Wasn't it quadratic reciprocity that Gauss discovered six different proofs for? Given the comments made, I assume that none of the techniques represented by currently involved members of this forum can satisfy my sanity test. The exception is that I believe Gottfried's matrix methods are consistent with my work. If Wolfram fixes me up with another copy of Mathematica, I will contact Paulsen and ask him for copies of his Mathematica notebooks.
Edit: I just wrote Dr. Paulsen, so maybe I can arrange to get copies of the notebooks.
Daniel
Posts: 1,630
Threads: 106
Joined: Aug 2007
If there is a method to calculate f^a as a so-called iteration semigroup, then there is already a proof that f^a o f^b = f^(a+b), because that's how an iteration semigroup is defined - as it is for the Schroeder/regular iteration. So I really don't understand what sanity check you are after (that's why I said: make yourself familiar with the proofs). So I thought you meant some numerical verification. It seems you don't mean that but something symbolic - whatever that means.
And here I really ask you to be more specific: are we talking about the case b<=eta or b>eta? (And please spare me hearing you consider general functions.) For 1<b<eta we have the usual regular iteration at one of the two fixed points, and everything is proven and explored there (though to our dissatisfaction, because there is not the one true solution). For b>eta we have a bunch of numerical methods (invented on this forum), for none of which does even a proof of convergence exist - how do you want to check anything symbolically there?! And then we have the proven constructions by Kneser and Leau-Fatou, which vice versa have no straightforward numerical implementation. So what do you want to check symbolically there? (Except RTFP - reading the fucking proof.)
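To make "iteration semigroup" concrete for readers following along: a map whose fractional iterates are known in closed form is the Möbius map \(f(x)=x/(1+x)\), with \(f^t(x)=x/(1+tx)\). This is a standard textbook example, not one of the tetration methods under discussion, and the helper name below is ad hoc. A quick exact check in Python:

```python
from fractions import Fraction

def iterate(t, x):
    """t-th iterate of f(x) = x/(1+x): f^t(x) = x/(1+t*x), exact for any t."""
    return x / (1 + t * x)

a, b = Fraction(1, 2), Fraction(1, 3)
x = Fraction(1, 5)

lhs = iterate(a + b, x)          # f^(a+b)(x)
rhs = iterate(a, iterate(b, x))  # f^a(f^b(x))
assert lhs == rhs  # the semigroup law holds exactly, not just asymptotically
```

Here the law \(f^a \circ f^b = f^{a+b}\) holds on the nose, by construction; that is the sense in which a proven iteration semigroup needs no numerical sanity check.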
Posts: 281
Threads: 95
Joined: Aug 2007
(07/13/2022, 08:32 PM)bo198214 Wrote: If there is a method to calculate f^a as a so-called iteration semigroup, then there is already a proof that f^a o f^b = f^(a+b), because that's how an iteration semigroup is defined - as it is for the Schroeder/regular iteration. So I really don't understand what sanity check you are after (that's why I said: make yourself familiar with the proofs). So I thought you meant some numerical verification. It seems you don't mean that but something symbolic - whatever that means.
And here I really ask you to be more specific: are we talking about the case b<=eta or b>eta? (And please spare me hearing you consider general functions.) For 1<b<eta we have the usual regular iteration at one of the two fixed points, and everything is proven and explored there (though to our dissatisfaction, because there is not the one true solution). For b>eta we have a bunch of numerical methods (invented on this forum), for none of which does even a proof of convergence exist - how do you want to check anything symbolically there?! And then we have the proven constructions by Kneser and Leau-Fatou, which vice versa have no straightforward numerical implementation. So what do you want to check symbolically there? (Except RTFP - reading the fucking proof.)
"Read the fucking proof?" Ciao baby, I'm permanently gone.
Daniel
Posts: 1,214
Threads: 126
Joined: Dec 2010
07/13/2022, 09:01 PM
(This post was last modified: 07/13/2022, 11:01 PM by JmsNxn.)
https://math.eretrandre.org/tetrationfor...416-1.html
Assuming this is what you are talking about with the Leau-Fatou petal stuff.
Woah, cool, I've never seen this before. I assumed when you wrote that, you were referring to Ecalle, but how does Ecalle work for \(b > \eta\)? I didn't realize the idea is to perturb from \(b = \eta\) and watch the petals deform. I'll have to look into that. Just being quick: do you have a link to any articles involved? I couldn't find anything on that thread.
If what you are saying is that there exists a holomorphic iteration for \(1 < b < \infty\), without the usual branching/spike at \(b = \eta\), that would be really cool. I've always resigned myself to believing that Tetration naturally has a branching problem at \(b = \eta\); to avoid that would be godlike!
So as not to derail this thread, I'll refrain from pursuing this further here. Thanks though, I'd never heard of this method before!
EDIT:
Aww, damnit, I read further through that post, and I see the statement is still correct that no such analytic continuation exists which is holomorphic at \(b=\eta\). To think otherwise would be detrimental to the foundations of how I've done complex dynamics, so there's at least that. It would've saved me a lot of headaches though. Oh well, back to my current drawing board.
Posts: 1,630
Threads: 106
Joined: Aug 2007
@Daniel
Now I am really baffled. I never expected such a reaction when asking someone to make himself familiar with proofs; in resemblance to RTFM (read the fucking manual), it was meant to indicate that it is a tedious enterprise, but in the end one is rewarded with answers to the problem one has.
@James
Right, the naming was misleading; in my memory it was from Milnor, but in reality it was in an article by Shishikura. I excerpted stuff from this article in our Wiki https://math.eretrandre.org/hyperops_wik...oordinates
The keyword is "perturbed Fatou coordinates". I also found interesting presentations in that direction on the web under the keyword "parabolic implosion".
Posts: 376
Threads: 30
Joined: May 2013
07/14/2022, 02:45 PM
(This post was last modified: 07/14/2022, 02:47 PM by MphLee.)
Excuse me Daniel... let \(f:X\to X\) be a function, I'd call an iteration of \(f\) an action of an abelian group \(A\) on \(X\) s.t. for some \(u\in A\) we have \(f_u=f\). In other words, when we talk about iteration I believe that it goes without saying that we assume the law \[f_{a+b}=f_a\circ f_b\] to hold on the nose and not only asymptotically.
So my question is: why don't you ask your solutions to satisfy \(f_{a+b}=f_a\circ f_b\) directly instead of \(f_{a+b}-(f_a\circ f_b) \in \mathcal O({\rm id}^k)\)?
I'm totally a newbie in this, so I understand like 1% of this thread... maybe, Daniel, if you could be a bit more detailed about your assumptions and "story-tell" it a bit, it might be easier for non-specialists like me...
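The distinction being asked about can be seen concretely on truncations: composing the order-3 truncations \(z - (t/6)z^3\) of \(\sin^t\) reproduces \(z-\frac{a+b}{6}z^3\) but leaves a remainder starting at \(z^5\), i.e. the law holds only up to \(\mathcal O(z^5)\), not on the nose. A small exact-arithmetic sketch in Python (the helper names are mine, purely for illustration):

```python
from fractions import Fraction as F

def padd(p, q):
    """Sum of two coefficient lists, padding to equal length."""
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [x + y for x, y in zip(p, q)]

def pmul(p, q):
    """Exact product of two coefficient lists."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def compose(p, q):
    """p(q(z)) as exact polynomials (no truncation)."""
    r, qpow = [F(0)], [F(1)]
    for c in p:
        r = padd(r, [c * x for x in qpow])
        qpow = pmul(qpow, q)
    return r

def p(t):
    """Order-3 truncation of sin^t: z - (t/6) z^3."""
    return [F(0), F(1), F(0), -t / 6]

a, b = F(1, 2), F(1, 3)
diff = padd(p(a + b), [-c for c in compose(p(a), p(b))])
# the truncated law fails exactly, but holds through degree 4:
assert all(c == 0 for c in diff[:5])
assert diff[5] == F(-1, 72)  # = -(a*b)/12, the first surviving term
```

So the asymptotic statement \(f^{a+b}-(f^a\circ f^b)\in\mathcal O({\rm id}^k)\) is what a degree-\(k\) truncation can certify; the exact law is a statement about the full formal series.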
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 281
Threads: 95
Joined: Aug 2007
07/15/2022, 02:37 AM
(This post was last modified: 07/15/2022, 03:10 AM by Daniel.)
(07/14/2022, 02:45 PM)MphLee Wrote: Excuse me Daniel... let \(f:X\to X\) be a function, I'd call an iteration of \(f\) an action of an abelian group \(A\) on \(X\) s.t. for some \(u\in A\) we have \(f_u=f\). In other words, when we talk about iteration I believe that it goes without saying that we assume the law \[f_{a+b}=f_a\circ f_b\] to hold on the nose and not only asymptotically.
So my question is: why don't you ask your solutions to satisfy \(f_{a+b}=f_a\circ f_b\) directly instead of \(f_{a+b}-(f_a\circ f_b) \in \mathcal O({\rm id}^k)\)?
I'm totally a newbie in this, so I understand like 1% of this thread... maybe, Daniel, if you could be a bit more detailed about your assumptions and "story-tell" it a bit, it might be easier for non-specialists like me...
See Bell Polynomials of Iterated Functions for a complete treatment of the following material.
The following is the Taylor series of \(f^t(z)\) for a function with a non-superattracting fixed point translated to zero.
\(f^t(z)=f'(0)^t z + \left( f''(0) \sum_{k_1=0}^{(t-1)} f'(0)^{2t-k_1-2} \right) z^2 + \left( f'''(0) \sum_{k_1=0}^{(t-1)}f'(0)^{3t-2k_1-3}+3f''(0)^2 \sum_{k_1=0}^{(t-1)} \sum_{k_2=0}^{(t-k_1-2)} f'(0)^{3t-2k_1-k_2-5} \right) z^3+\mathcal{O}(z^4) \)
Computer algebra validates \(f^{a+b}(z)-f^a(f^b(z))=\mathcal{O}(z^k)\) out to \(\mathcal{O}(z^9)\). This sounds meager until you consider that the coefficient for \(z^8\) consists of 660,032 expressions, where each expression can be an 8\(^\text{th}\)-order polynomial.
Setting \(f(z)=\sin(z)\) results in
\(\sin^t(z)=z-\frac{t}{6}z^3 + \left( -\frac{t}{30}+\frac{t^2}{24}\right) z^5 + \left( -\frac{41t}{3780}+\frac{t^2}{45}-\frac{5t^3}{432}\right)z^7 + \left( -\frac{4t}{945}+\frac{67t^2}{5670}-\frac{71t^3}{6480}+\frac{35t^4}{10368}\right)z^9+\mathcal{O}(z^{11})\)
See iterated sin function for more information including a very short Mathematica program that computes the coefficients of the iterated sin function.
Why not prove \(\sin^{a+b}(z)=\sin^a(\sin^b(z))\) instead of \(\sin^{a+b}(z)-\sin^a(\sin^b(z))=\mathcal{O}(z^k)\)? Mathematica has a difficult time comparing two different complex expressions for equality, but with the subtracted version and repeated simplification, terms repeatedly cancel out until we are left with just a big-O expression.
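Assuming the closed forms above are transcribed correctly, the subtraction test can be rerun independently with exact rational arithmetic at sample fractional heights, e.g. \(a=1/2\), \(b=1/3\). A small Python sketch (the helper names are mine, not from Daniel's or Paulsen's notebooks); since the even coefficients vanish, both series are exact through \(z^9\), so the comparison below is exact, not approximate:

```python
from fractions import Fraction as F

N = 9  # compare coefficients through z^9; the series' error term is O(z^11)

def sin_iter_series(t):
    """Coefficients of sin^t(z) through z^N, taken from the closed forms above."""
    c = [F(0)] * (N + 1)
    c[1] = F(1)
    c[3] = -t / 6
    c[5] = -t / 30 + t**2 / 24
    c[7] = F(-41, 3780) * t + t**2 / 45 - F(5, 432) * t**3
    c[9] = (F(-4, 945) * t + F(67, 5670) * t**2
            - F(71, 6480) * t**3 + F(35, 10368) * t**4)
    return c

def compose(p, q):
    """Coefficients of p(q(z)) modulo z^(N+1)."""
    r = [F(0)] * (N + 1)
    qpow = [F(1)] + [F(0)] * N  # holds q(z)^k, truncated
    for c in p:
        r = [ri + c * qi for ri, qi in zip(r, qpow)]
        nxt = [F(0)] * (N + 1)
        for i, x in enumerate(qpow):
            for j, y in enumerate(q):
                if i + j <= N:
                    nxt[i + j] += x * y
        qpow = nxt
    return r

# exact rational check of the semigroup law through z^9
a, b = F(1, 2), F(1, 3)
assert sin_iter_series(a + b) == compose(sin_iter_series(a), sin_iter_series(b))
```

At \(t=1\) the same coefficients reduce to the ordinary Taylor series of \(\sin z\) (e.g. the \(z^9\) coefficient collapses to \(1/362880 = 1/9!\)), which is a useful extra consistency check.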
Why do I care? Because I am guarding against two things. The first is the movement between the use of Schröder's and Abel's functional equations: that topological conjugacy is understood and respected. I recently read Schroeder's Equation and the connection between Schröder's and Abel's equations. My second concern is that proofs based on Kneser's paper may well begin by satisfying \(f^{a+b}(z)=f^a(f^b(z))\), but I question whether the identity survives the mapping to the unit circle and then onto the real line.
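For reference, the standard bridge between the two functional equations runs as follows (valid formally, modulo branch choices of the logarithm, at a fixed point with multiplier \(\lambda = f'(0)\), \(0 < |\lambda| \ne 1\)): if \(\psi\) solves Schröder's equation, then \(\alpha := \log\psi / \log\lambda\) solves Abel's,

\[ \psi(f(z)) = \lambda\,\psi(z) \;\Longrightarrow\; \alpha(f(z)) = \alpha(z) + 1, \]

and the regular \(t\)-th iterate is \(f^t = \psi^{-1}\!\left(\lambda^t \psi\right)\), equivalently \(\alpha^{-1}(\alpha + t)\), from which \(f^{a+b} = f^a \circ f^b\) is immediate in either picture.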
Daniel
Posts: 903
Threads: 130
Joined: Aug 2007
07/15/2022, 10:03 AM
(This post was last modified: 07/15/2022, 10:06 AM by Gottfried.)
Hmm, one more comment, on why the property \( f°^{a+b}(z) = f°^b(f°^a(z)) \) holds for \( f(z) = \sin(z) \).
The Carleman ansatz gives a triangular Carleman matrix, say \( S \), for the sine function. However, \( S \) has a unit diagonal, so diagonalization (which would operationalize the Schröder mechanism) cannot be applied here.
But for any finite size of \( S \) we can determine the matrix logarithm exactly, using the Mercator series for \( \log(1+x) \) applied to the matrix \( S - I \). We then get, say,
\[ L = \text{matlog}(S) \tag 1\]
(Note: we can generalize this to the case of infinite size because the entries of the matrix do not change when we increase the size; this is due to the triangularity and unit diagonal of \( S \).)
Equivalently, we can apply the power series of the exponential function to \( L \), getting exact rational coefficients.
We can thus formulate, for some (integer or fractional) iteration height \( h \),
\[ S^h = \text{matexp}( h \cdot L) \tag 2
\]
In Pari/GP the parameter \( h \) in these operations can be left symbolic, so we get exact rational expressions in \( S^h \); the relevant coefficients \( s_{r,h} \) are in the second column and r-th row of \( S^h \). They are exactly the polynomials in \( h \) shown by Daniel (and are also well known elsewhere).
By this it is obvious that
\[ S^a \cdot S^b = \text{matexp}( a \cdot L) \cdot \text{matexp}( b \cdot L) = \text{matexp}( ( a + b) \cdot L) = S^{a+b} \tag 3 \]
down to the level of the coefficients of the power series to be generated, by the common rule of the exponential \( e^{\lambda a} \cdot e^{\lambda b} = e^{\lambda (a+b)} \).
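This matlog/matexp recipe can be sketched end to end with exact rational arithmetic. The following is a rough Python translation (not Gottfried's Pari/GP code; all names are mine), using a 10x10 truncation and the concrete height \(h = 1/2\) instead of a symbolic \(h\); since \(S - I\) is strictly upper triangular, both the Mercator and the exponential series terminate:

```python
from fractions import Fraction as F

N = 10  # truncate all power series at degree N-1 = 9

def poly_mul(p, q):
    """Product of coefficient lists, truncated to degree N-1."""
    r = [F(0)] * N
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            if i + j < N:
                r[i + j] += x * y
    return r

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def mat_scale(A, c):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[F(int(i == j)) for j in range(N)] for i in range(N)]

# Taylor coefficients of sin(z), exact rationals
sin_c = [F(0)] * N
sin_c[1], sin_c[3], sin_c[5] = F(1), F(-1, 6), F(1, 120)
sin_c[7], sin_c[9] = F(-1, 5040), F(1, 362880)

# Carleman matrix: row i holds the coefficients of sin(z)^i
S = [[F(0)] * N for _ in range(N)]
S[0][0] = F(1)
for i in range(1, N):
    S[i] = poly_mul(S[i - 1], sin_c)

# Mercator series for L = log(S); S - I is nilpotent, so the series terminates
M = mat_add(S, mat_scale(I, F(-1)))
L, P = [[F(0)] * N for _ in range(N)], I
for k in range(1, N):
    P = mat_mul(P, M)
    L = mat_add(L, mat_scale(P, F((-1) ** (k + 1), k)))

def mat_exp(A):
    """exp(A) for nilpotent A via the terminating exponential series."""
    R, P, fact = I, I, 1
    for k in range(1, N):
        P = mat_mul(P, A)
        fact *= k
        R = mat_add(R, mat_scale(P, F(1, fact)))
    return R

# S^h = matexp(h*L); with h = 1/2 the semigroup law gives S^(1/2) * S^(1/2) = S
S_half = mat_exp(mat_scale(L, F(1, 2)))
assert mat_mul(S_half, S_half) == S
# row 1 of S^(1/2) holds the series of the half-iterate of sin
assert S_half[1][3] == F(-1, 12)  # matches Daniel's -t/6 at t = 1/2
```

Row 1 of \(S^{1/2}\) then holds the coefficients of the half-iterate of sin, and they agree with the polynomials in \(t\) given earlier in the thread, evaluated at \(t = 1/2\).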
- - - - - - - -
Whether the so-found power series \[ \sin°^h(z) = s_{1,h} \cdot z + s_{2,h} \cdot z^2 + s_{3,h} \cdot z^3 + ... \tag 4 \] can be evaluated for fractional values of \(h \) and for \( z \) other than zero is another question, and it then requires (strong) procedures of divergent summation (I have applied self-made procedures of this kind and have approximated exemplary values, for instance for \( h = 0.5 \)). Perhaps (or "likely"?) more sophisticated procedures for the evaluation of that power series are around, but I haven't searched for them so far.
See also my answer1 and/or answer2 on MO, where this is described a bit; the complete sequences of answers there contain much more valuable information.
Gottfried
Gottfried Helms, Kassel
Posts: 281
Threads: 95
Joined: Aug 2007
07/15/2022, 11:22 AM
(This post was last modified: 07/15/2022, 12:40 PM by Daniel.)
(07/15/2022, 10:03 AM)Gottfried Wrote: Hmm, one more comment, on why the property \( f°^{a+b}(z) = f°^b(f°^a(z)) \) holds for \( f(z) = \sin(z) \).
The Carleman ansatz gives a triangular Carleman matrix, say \( S \), for the sine function. However, \( S \) has a unit diagonal, so diagonalization (which would operationalize the Schröder mechanism) cannot be applied here.
...
See also my answer1 and/or answer2 on MO, where this is described a bit; the complete sequences of answers there contain much more valuable information.
Gottfried
Thank you for your posting, Gottfried. I will be giving your material a close and repeated reading. I have long felt that our work is similar; you just attack iteration using matrices. I am fascinated by your MO postings on the fractional iteration of the sin function! We derive the same \(\sin^n(z)\)! Very cool!!!
Gottfried, do you have any feel for which of the techniques discussed and developed here give equivalent results, at least for the sin function?
Daniel
Posts: 376
Threads: 30
Joined: May 2013
Hi Daniel. When using math mode inline, DO NOT USE single dollar signs; use slash + round bracket instead.
Let's see if I'm getting this correctly. Expressions for non-integer iterates \(f^t(x)\), with \(t\) not an integer, involve power series... thus an infinite number of terms to be summed.
No formal, abstract proof of convergence is available. No formal, abstract proof of the identity of the power series associated with \(f^af^b\) and the power series for \(f^{a+b}\) is available; thus the comparison of the values of the two infinite expressions is made via direct computation of the \(N\)-truncation of the sums...
So, as an alternative to a direct formal proof of the power-series identity, you use an experimental approach, looking for concordance of the results up to some big O?
I hope I'm on the right track... I need to study a lot more but...
Trying to read Gottfried... doesn't the problem lie in going from finite approximations to infinite "divergent summation techniques"? Maybe that is where we lack formal proofs that those semigroup identities hold?
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)