Posts: 1,631
Threads: 107
Joined: Aug 2007
(08/21/2022, 01:18 AM)JmsNxn Wrote: Quick question, I'm a little confused here.
Is this still guessing the asymptotics of a "half root" at a parabolic fixed point?
Or is it something different? (Sorry, just a tad confused.) We are looking at the growth of the coefficients of the logit, which shows quite similar behaviour to the coefficients of the half root,
but speaks, so to say, about all *iterates*, not only the half root. I didn't test it, but I am quite sure the 3rd root, 4th root, etc. show the same behaviour - so the idea was to make it independent of the concrete iterative root, dependent only on the original function.
The logit is a good candidate for this - because the logit is analytic at the fixed point if and only if all regular iterates are analytic at the fixed point.
So I gave more examples for arbitrarily chosen parabolic functions where the logit shows a similar coefficient growth pattern as the logit coefficients of \(e^x-1\), where the concrete growth behaviour depends on the \(x^2\)-coefficient of the function.
But then I also gave a counterexample of a parabolic function where the logit is analytic (and hence all the regular iterates), i.e. the formal power series converges, i.e. the coefficients don't show that growth behaviour and don't need divergent summation.
And in the last post I just looked at the logit and its relations for this counterexample from a non-power-series point of view, and at how one reconstructs the Abel function from the logit.
(08/21/2022, 01:18 AM)JmsNxn Wrote: If this is happening elsewhere though; maybe Borel summation would be a valuable method of approaching fractional iteration? 
By which we could get similar Euler expressions (Like how Euler analytically defines \(\sum_k (-1)^kk! z^k\)) of half iterates (and arbitrary iterates) using some kind of modified Laplace transform. All we would need is a bound like \(j_k = O(c^kk!)\).
Actually I wonder why Gottfried didn't post any results about the divergent summations he tried.
To get the left-side and the right-side iterates, maybe one needs to apply two different divergent summations.
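As a small illustration of the kind of Borel summation meant here (my own sketch, not part of the original discussion; `euler_borel` is just an illustrative name): Euler's series \(\sum_k (-1)^k k!\, z^k\) has Borel transform \(\sum_k (-1)^k (zt)^k = 1/(1+zt)\), and the Laplace integral of that transform assigns the divergent series a finite value:

```python
# Borel summation of Euler's divergent series  sum_k (-1)^k k! z^k  (z > 0):
# the Borel transform sums to 1/(1 + z*t), and the Laplace integral
#   S(z) = int_0^oo exp(-t) / (1 + z*t) dt
# gives the Borel sum of the series.
import mpmath as mp

mp.mp.dps = 30  # working precision

def euler_borel(z):
    return mp.quad(lambda t: mp.exp(-t) / (1 + z * t), [0, mp.inf])

# At z = 1 this gives the Euler-Gompertz constant 0.596347...
print(euler_borel(1))
```

The same Laplace-integral pattern would apply to the logit or half-iterate coefficients once a bound like \(j_k = O(c^k k!)\) is established, though the analytic continuation of the Borel transform then has to be checked case by case.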
Posts: 1,631
Threads: 107
Joined: Aug 2007
(08/20/2022, 04:19 PM)Gottfried Wrote: update: attached three articles of R.P.Brent on (computability) efficiency of composition of powerseries.
Brent: Complexity of Composition Of Powerseries 1980 (rpb050i.pdf)
Brent/Kung: Fast Algorithms for Manipulating FormalPowerseries 1978 (rpb045.pdf)
Brent/Traub: Complexity Of Composition ... 1991 (abstract) (rpb050a)
(I didn't save the links I downloaded them from, sorry; he likely has/had a personal or university homepage.)
I also don't know at the moment whether he is the Brent known for the superior fast matrix operation modules...
OMG, they specifically consider regular hyperbolic and parabolic iteration! I think Brent is quite a name in the field of numerics. I remember seeing a root-finding algorithm in Sage named after him.
Posts: 904
Threads: 131
Joined: Aug 2007
08/21/2022, 11:50 AM
(This post was last modified: 08/21/2022, 12:19 PM by Gottfried.)
(08/21/2022, 08:54 AM)bo198214 Wrote: Actually I wonder why Gottfried didnt post any results about the divergent summations he tried.
Oops, a little introduction to this is in this exposition.
The previous-to-last table shows Nörlund summation (quick-and-dirty, home-brewn) with 64 coefficients of the half-iterate of exp(x)-1, where the partial sums look good; but increasing the number of coefficients later showed a new onset of divergence in those partial sums.
After that, in the last table, I show this with 256 coefficients and a stronger parameter for the Nörlund sum, and here I seem to have got it.
Why didn't I show more results here?
This implementation of the divergent-summation scheme is somewhat similar to the simple Borel summation shown in K. Knopp's book (chap. 13, German edition), so I was confident that I could, at least in principle, apply that method. (Note that I did not yet have the idea of the limiting function shown here in the current threads; that came only about 4 years later.) On the other hand, I didn't see the need to document my numerical results here: after the successful use of asymptotic series around here, and even more after Sheldon's routines, there seemed to be no more need for competing methods of actual numerical computation with never really satisfying approximations...
Looking back, it is interesting that I didn't have the idea of the 4 subsequences in the coefficients, which would have made my last picture more meaningful; but the pattern there was already so remarkable and unique that I applied this representation to other series as well, trying to discern more precisely what's going on there at all. But, well, after the idea of the limiting function I think I can now omit those pictures/representations...
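For context, the general Nörlund scheme is just a weighted mean of the partial sums. A minimal sketch (illustrative only, on Grandi's series \(1-1+1-\dots\) rather than on the half-iterate coefficients; constant weights reduce it to the Cesàro mean):

```python
# Noerlund mean  N_n = (p_n*s_0 + ... + p_0*s_n) / (p_0 + ... + p_n),
# where s_k are the partial sums of the series and p_k the chosen weights.
from itertools import accumulate

def noerlund(terms, weights):
    s = list(accumulate(terms))                        # partial sums s_0 .. s_n
    n = len(s) - 1
    num = sum(weights[n - k] * s[k] for k in range(n + 1))
    return num / sum(weights[: n + 1])

# Grandi's series 1 - 1 + 1 - ... with constant weights (Cesaro mean):
terms = [(-1) ** k for k in range(2000)]
print(noerlund(terms, [1] * 2000))   # -> 0.5
```

Stronger (faster-growing) weight sequences give stronger summation methods, which is what "a stronger parameter for the Nörlund sum" refers to above.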
Gottfried Helms, Kassel
Posts: 1,631
Threads: 107
Joined: Aug 2007
08/21/2022, 02:14 PM
(This post was last modified: 08/21/2022, 02:15 PM by bo198214.)
(08/21/2022, 11:50 AM)Gottfried Wrote: upps, a little introduction into this is in this exposition .
No pictures??? 😭
Posts: 904
Threads: 131
Joined: Aug 2007
08/21/2022, 03:25 PM
(This post was last modified: 08/21/2022, 03:26 PM by Gottfried.)
(08/21/2022, 02:14 PM)bo198214 Wrote: (08/21/2022, 11:50 AM)Gottfried Wrote: upps, a little introduction into this is in this exposition .
No pictures??? 😭
Hmmm - my web browser (Firefox) correctly shows some tables and one picture, this one: https://go.helms-net.de/math/tetdocs/Coe...age016.gif at the end of the exposition.
Don't know what's going wrong ???
Gottfried Helms, Kassel
Posts: 1,631
Threads: 107
Joined: Aug 2007
(08/21/2022, 03:25 PM)Gottfried Wrote: Hmmm - my webbrowser (firefox) correctly shows some tables and one picture. This : https://go.helms-net.de/math/tetdocs/Coe...age016.gif at the end of the exposition.
Don't know what's going wrong ???
I mean a picture of the half-iterate! I thought all this Borel and whatever summation is for calculating values of the half-iterate?!
Posts: 904
Threads: 131
Joined: Aug 2007
08/21/2022, 06:45 PM
(This post was last modified: 08/21/2022, 06:45 PM by Gottfried.)
(08/21/2022, 04:35 PM)bo198214 Wrote: I mean picture of the half-iterate! I thought all this Borel and whatever summation is to calculate values of the half-iterate?!
Aiihh, I see. Well - the best I could do then was to get approximations to ten or twenty digits of precision (using \( x \approx 1 \)), and even that poor result needed manual fine-tuning of my hot-needle procedure, as the last 256-coefficient listing in my text showed. Just to review my then-choice of parameters, I used 1024 coefficients today (which I didn't have then) to re-check those parameters, and - they were not good enough... phww...
I knew that, with such a need for manual fine-tuning of the parameters, it could not grow into more than a basic "proof of concept" (which was basically what I wanted anyway).
Well, on the other hand, using the functional relation, inserting integer iterates towards the fixpoint (getting as near to zero as wanted) and using the partial sums truncated where convergence seemed to occur ... you know the point: nothing better than what is already around in the forum and what you fellows have happily used from the beginning anyway.
So - no need for pictures, or more pictures, or tables. Just too expensive, and numerically inferior. Hope that explains it ...
Gottfried
Gottfried Helms, Kassel
Posts: 1,214
Threads: 126
Joined: Dec 2010
08/22/2022, 02:51 AM
(This post was last modified: 08/22/2022, 06:10 AM by JmsNxn.)
(08/21/2022, 08:54 AM)bo198214 Wrote: (08/21/2022, 01:18 AM)JmsNxn Wrote: Quick question, I'm a little confused here.
Is this still guessing the asymptotics of a "half root" at a parabolic fixed point?
Or is it something different? (Sorry, just a tad confused.) We are looking at the growth of the coefficients of the logit, which shows quite similar behaviour to the coefficients of the half root,
but speaks, so to say, about all *iterates*, not only the half root. I didn't test it, but I am quite sure the 3rd root, 4th root, etc. show the same behaviour - so the idea was to make it independent of the concrete iterative root, dependent only on the original function.
The logit is a good candidate for this - because the logit is analytic at the fixed point if and only if all regular iterates are analytic at the fixed point.
So I gave more examples for arbitrarily chosen parabolic functions where the logit shows a similar coefficient growth pattern as the logit coefficients of \(e^x-1\), where the concrete growth behaviour depends on the \(x^2\)-coefficient of the function.
But then I also gave a counterexample of a parabolic function where the logit is analytic (and hence all the regular iterates), i.e. the formal power series converges, i.e. the coefficients don't show that growth behaviour and don't need divergent summation.
And in the last post I just looked at the logit and its relations for this counterexample from a non-power-series point of view, and at how one reconstructs the Abel function from the logit.
(08/21/2022, 01:18 AM)JmsNxn Wrote: If this is happening elsewhere though; maybe Borel summation would be a valuable method of approaching fractional iteration? 
By which we could get similar Euler expressions (Like how Euler analytically defines \(\sum_k (-1)^kk! z^k\)) of half iterates (and arbitrary iterates) using some kind of modified Laplace transform. All we would need is a bound like \(j_k = O(c^kk!)\).
Actually I wonder why Gottfried didn't post any results about the divergent summations he tried.
To get the left-side and the right-side iterates, maybe one needs to apply two different divergent summations.
Ok, I see - I was confused.
I'd just like to add that this use of the logit is something MphLee and I wanted to unite with differential equations, viewing the logit as the generator of a semigroup.
Every differential equation:
\[
y' = f(y)\\
\]
induces a semigroup, and every semigroup \(y(t,z)\) induces a differential equation:
\[
\frac{d}{dt} y(t,z) = f(y(t,z))\\
\]
I spent a significant portion of one of my reports discussing taking Fourier transforms of these operations. Every semigroup of this form can be written:
\[
\begin{align}
y(t,z) &= g(t+g^{-1}(z))\\
f(z) &= \frac{d}{dt}\Big{|}_{t=0} y(t,z)\\
\end{align}
\]
We can take Fourier transforms across these objects by writing:
\[
\mathcal{F}\{y(p(t),z)\}(\xi,z) = g( \int_{-\infty}^\infty p(t)e^{2 \pi i t\xi}\,dt + g^{-1}(z))\\
\]
This is compositionally linear, in the sense that:
\[
\begin{align}
\mathcal{F}\{y(p(t),z)\}(\xi,z) &= h_1(\xi,z)\\
\mathcal{F}\{y(q(t),z)\}(\xi,z) &= h_2(\xi,z)\\
h_1(\xi,h_2(\xi,z)) &= h_1 \bullet h_2 \bullet z = \mathcal{F}\{y(p(t) + q(t),z)\}(\xi,z)\\
\end{align}
\]
This operation is invertible in many circumstances. For example, the Fourier transform on the semigroup induced by \(f(z) = z^2\) is given as:
\[
\frac{1}{\frac{1}{z}-\int_{-\infty}^\infty p(t)e^{2 \pi i t\xi}\,dt}\\
\]
Additionally, we have the convenient identity:
\[
\frac{d}{dt} y(p(t),z) = \frac{d}{dt}\frac{1}{\frac{1}{z}-p(t)} = p'(t) f(y(p(t),z)) = p'(t)y(p(t),z)^2\\
\]
So we have a relationship between separable cases and semigroup cases. This is largely an aside to your discussion, but MphLee and I wanted to rigorize much of it, as it was mostly only in theory form; and I had no obvious way of solving or numerically evaluating this Fourier transform in a general scenario.
And this is expressible through "compositional integrals" (which is kind of just a fancy way of doing Euler's method, but in the complex plane and only for holomorphic functions).
I'm very interested in this, especially if we can tie it into Mellin transforms and Borel-sum the logit.
Anyway, I didn't mean to detract from the discussion at hand. This seems very promising though!!! Lol, this makes me think I might be able to find a practical expression for this "semigroup action Fourier transform"--rather than the algebraic nonsense I currently have, lol.
Posts: 1,214
Threads: 126
Joined: Dec 2010
08/27/2022, 01:18 AM
(This post was last modified: 08/27/2022, 03:24 AM by JmsNxn.)
Okay, so I had a very productive day. I sat down and reread Milnor's treatment of Ecalle's construction to see if there's anything to take from it, but unfortunately not--other than some keywords that helped me in my search. I then went searching for Fatou coordinates and Borel summations. I came across a bunch of papers, and many of them turned out to be about tangentially related things, but two papers really shone and are definitely relevant to our discussion here.
To begin, let's assume we have a function:
\[
f(z) = z + z^2 + o(z^2)\\
\]
The value \(2\) can be replaced with \(n > 1\), but that requires more work; and we could add a coefficient \(az^n\) here--but that is unnecessary for this explanation.
Then the attracting petal of \(f\) is centered on the direction \(-1\); i.e. \(f^{\circ n}(z) \to 0\) for \(z \in (-\delta, 0)\). If we take the change of variables:
\[
F(w) = \frac{-1}{f(-1/w)}\\
\]
Then this function is holomorphic for \(-\tau < \arg(w) < \tau\), \(|w| > R\), for large enough \(R\). But additionally, the Abel function \(\alpha\) of this \(F\) satisfies the expansion:
\[
\begin{align}
\alpha(F(w)) &= \alpha(w) + 1\\
\alpha(w) &= w - A \log(w) + \sum_{k=0}^\infty b_k w^{-k}\\
\end{align}
\]
And, wait for it...
\[
b_k = O(c^kk!)\\
\]
For some \(c > 0\). Which means IT'S BOREL SUMMABLE!
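The change of variables can be checked concretely in the simplest case (my own sketch, for \(f(z) = z + z^2\) exactly, with no higher-order terms): \(F(w) = -1/f(-1/w)\) reduces to \(w^2/(w-1) = w + 1 + 1/w + 1/w^2 + \dots\), a unit translation plus corrections that decay at \(\infty\):

```python
import sympy as sp

w = sp.symbols('w', positive=True)

f = lambda z: z + z ** 2              # the parabolic germ
F = sp.simplify(-1 / f(-1 / w))       # change of variables w = -1/z

# F(w) = w^2/(w - 1) = w + 1 + 1/w + 1/w^2 + ...  for large w
assert sp.simplify(F - w ** 2 / (w - 1)) == 0
assert sp.limit(F - w, w, sp.oo) == 1             # constant term: +1
assert sp.limit(w * (F - w - 1), w, sp.oo) == 1   # coefficient of 1/w

print(sp.series(F, w, sp.oo, 3))
```

The "+1" is exactly what makes the Abel equation \(\alpha(F(w)) = \alpha(w) + 1\) a small perturbation of a plain translation in the \(w\)-chart.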
The first reference is a beast of a paper--largely laying out all the formal manifestations of Borel sums--and goes on to talk about germs and a lot of Ecalle's work. You don't have to read this paper, but I thought I'd put it here, as it does a great job of describing the state of affairs of Borel sums.
https://hal.archives-ouvertes.fr/hal-008...nt#cite.LY
The second reference is a holy grail. It explicitly proves the above expansion, and proves that the coefficients \(b_k\) are of "Gevrey class 1"--which is their way of stating the above bound, which in turn is equivalent to saying that the series is Borel summable.
https://arxiv.org/pdf/1108.2801.pdf
I know this doesn't prove anything about the logit per se. But if we use Bo's above transformation rules, we can relate one Borel-summable series to the other (it would just be a couple of differentials, no biggie, and shouldn't upset the expansion). And it would add the appropriate bound to Bo's expansion above.
We'd just be writing:
\[
f^{\circ t}(z) = \alpha^{-1}(t + \alpha(z))\\
\]
And:
\[
\alpha'(w) = 1 - \frac{A}{w} + \sum_{k=2}^\infty c_k w^{-k}
\]
Where \(c_k = -(k-1)\,b_{k-1}\) is again of Gevrey class 1.
So \(\alpha(w)\) has a Borel-summable asymptotic series at \(w = \infty\) (i.e. at \(z = 0\)), and this would translate to a Borel-summable series for the iterates, because \(\alpha^{-1}\) should be regular enough at \(t + \infty\). Actually, a lot of this stuff is covered about 20 pages into the first reference! OH YA!
This won't prove the nice sinusoids that you guys are showing, but it should show we have the Borel-sum bound! OH YA! I expect the sinusoids are offshoots of \(n\) and \(a\), where in the general case:
\[
f(z) = e^{2 \pi i k/q} z + az^{n} + o(z^n)\\
\]
And we get a different Abel function about each attracting petal (rather than there being just one attracting petal). But they are heavily related to orbits that look like \(f^{\circ m}(z) \sim e^{2 \pi i km/q}m^{-1/(n-1)}\), which would partially explain the sinusoidal behaviour. Especially for half iterates.
EDIT:
Since Bo and Tommy and Gottfried are in bed, I thought I'd add some more thoughts.
We are definitely going to see a lot of wave-like properties, and from there you can expect the coefficients to behave like sin's coefficients--i.e. to follow a wave themselves. \(\sin(z)\) is a wave, and \(\frac{d^t}{dz^t} \sin(z) = \sin(z+\frac{t \pi}{2})\) is a wave. This works out pretty similarly for any wave-like function: its fractional derivative is a wave-like function, and therefore so are its coefficients. So I'm less and less surprised by the sinusoidal nature of the coefficients. But I don't know how to explain it properly.
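The \(\sin\) identity above can be checked mechanically (a trivial sketch; the fractional case follows the same additive pattern in the order \(t\)):

```python
import sympy as sp

z, s, t = sp.symbols('z s t')

# d^t/dz^t sin(z) = sin(z + t*pi/2): each derivative shifts the phase by pi/2
D = lambda t, z: sp.sin(z + t * sp.pi / 2)

assert sp.simplify(D(1, z) - sp.cos(z)) == 0   # first derivative is cos
assert sp.simplify(D(2, z) + sp.sin(z)) == 0   # second derivative is -sin
# the orders compose additively: order s applied after order t gives order s+t
assert sp.simplify(D(s, z + t * sp.pi / 2) - D(s + t, z)) == 0
```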
This will also tie in very well with the Mellin-transform version of this result. \(e^z - 1\) was a very nice function with very nice behaviour in the left half plane. The thing is, more generally we have to restrict this to a sector, and so we have to set up the Mellin transforms differently. But in the same breath, that will argue even further for the wave-like nature. Anything that's Mellin transformable is really, in codified form, Fourier transformable. And these things are always waves.
I'd also like to add that I mean waves as a quantum physicist means waves: some \(L^2\)-integrable constructions. Which quite literally just look like waves, lol.
I really want to understand the wave like structure. Especially now, because of this deep dive.
This is the problem that just keeps on giving!
I'll make another post from here which attempts to put all of this together in a firmer manner. Can't tonight, need to think about this more, but I'll branch off from Bo's thread from here!
Posts: 1,631
Threads: 107
Joined: Aug 2007
08/27/2022, 07:41 AM
(This post was last modified: 08/27/2022, 10:56 AM by bo198214.)
(08/27/2022, 01:18 AM)JmsNxn Wrote: The second reference is a holy grail. It explicitly proves the above expansion, and proves that the coefficients \(b_k\) are of "Gevrey class 1"--which is their way of saying the above bound. Which is the equivalent of saying that it is Borel summable.
Yeah, that's the thing ... I recently always had a tab open in my browser with this MO article
Does the formal power series solution to f(f(x))=sin(x) converge?
Actually I got there by accident. Every now and then I was musing about what was written there.
Will Jagy directly asked Ecalle, who told him that it is Gevrey class \(1/p\) (where \(p\) - Ecalle calls it valit(f) - is the index up to which the coefficients of \(f\) equal those of the identity function).
But I didn't make the connection to the growth of the coefficients - because I simply didn't know & read about Gevrey classes!
Great dig, James!
So is there something like a radius of convergence for Borel-summations? How far from the fixed point would it converge/give correct values?
And then I think it would give the correct solution for each petal, right James?
EDIT: It would be really interesting where the break between the petals is when one does the Borel summation ...