Half-iterate of exp(z)-1: hypothesis on growth of coefficients
#11
(07/22/2022, 02:57 PM)Gottfried Wrote: Yes, for the factorial-type series we have Borel summation, or even the iterated one, as K. Knopp writes it. Knopp distinguishes between Borel's integration-based method and the "simple" method (under the same name Borel).  
My problem so far was that I didn't see a reliable estimate of the coefficients' growth rate anywhere, so I had to explore this myself, arriving at the guess for a limiting function A(k) mentioned in my first post in this thread - which of course was the basis for that Borel (or my experimental) method. (I don't remember why, but at some point since then I had the impression that the growth rate was more than hypergeometric, and thus Erdös's statement would have made sense. But after years I could not reproduce the reason why I thought so.)

I would like to have your (or someone else's) derivation, to settle the case that Borel summation suffices, for some citable statement - or even if only for a better statement in the tetration docs in my webspace... :-)

Hope you've had nice dreams - and read you on Sunday -

Gottfried


Okay, so first of all I want to clarify that this is for the divergent series developed to the left of \(e^{z} - 1\). This is tricky to describe, but I'll take a crack at it.



We can develop a holomorphic function \(g(z)\) which is the functional square root of \(e^{z} - 1\) on the disk:



\[
\mathbb{E}_\delta = \{z \in \mathbb{C}\,|\,|z+\delta| < \delta\}
\]



This function is holomorphic, its image lies within the left petal of \(e^{z} - 1\), and therefore:



\[
g(g(z)) = e^{z} - 1\,\,\text{for}\,\,z \in \mathbb{E}_\delta
\]



Shrink \(\delta\) if necessary; this is always possible.



Then:



\[
g(z) = \sum_{k=0}^\infty g_k(\delta) (z+\delta)^k
\]



Clearly this series diverges at \(z=0\) as it's on the boundary of \(\mathbb{E}_\delta\). But this doesn't stop us from formally rearranging the series. So... formally rearrange it:



\[
g(z) = \sum_{k=0}^\infty d_k z^k
\]



This series is divergent, but, under a formal manipulation:



\[
g(g(z)) = \sum_{k=1}^\infty \frac{z^k}{k!}
\]



This is because \(g\) satisfies this equation on \(\mathbb{E}_\delta\), and so the identity holds for the formal coefficients as well. Thereby, formally rearranging the series produces the same result (we can further justify this using Abel summation).



Now, I'm going to be a tad lazy, but:



\[
g(z) = \sum_{k=0}^\infty g_k(\delta)(z+\delta)^k = \sum_{k=0}^\infty g_k(\delta)\delta^k \sum_{m=0}^k \binom{k}{m}\frac{z^m}{\delta^m}
\]



We want to write this as a power series in \(z\); the trouble is that, by Taylor's theorem, this rearrangement diverges. BUT! The rearrangements are all Abel summable, because \(g(z)\) and all of its derivatives are Abel summable at \(0\).



Now we can let \(d_k \mapsto \frac{d_k}{k!}\) and let \(\delta\) be arbitrarily close to zero. Well then we get something that looks a lot like this:



\[
Bg(z) = \sum_{k=0}^\infty \frac{d_k}{k!} z^k \approx \sum_{k=0}^\infty g_k(\delta)\delta^k \sum_{m=0}^k \frac{1}{m!(k-m)!}\frac{z^m}{\delta^m}
\]



It'll be off by about \(O(\delta)\) for \(z\approx 0\).



Now the goal is to show that \(Bg(z)\) has a non-zero radius of convergence, and hence that \(g\) is Borel summable.



But, I mean, just by this approximation, you have that the right side is an entire function: the Borel transform of a power series with positive radius of convergence is entire.



Here's the tricky part, which I'm not certain how to pull off. I thought I could do it more easily before; I'll probably have to think about it more. But if you take \(|z| < \epsilon\) for \(\epsilon\) arbitrarily small and let \(\delta \to 0\), you should definitely get convergence.



I'm going to have to hunt down a source to prove this, though. But this is actually very similar to the Euler case, where you have a divergent series at a boundary point.



The function \(g(z)\) is holomorphic on a large part of the left half plane, with a singularity at \(z=0\), very much like Euler's case. And as a general rule, such coefficients must grow like \(k!\), or \(k!/\epsilon^k\) in our case.
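As a quick numerical sanity check (my own sketch, not from the thread): the coefficients \(d_k\) of the formal half-iterate can be computed term by term from \(g(g(z)) = e^{z} - 1\) with exact rational arithmetic, and then eyeballed against factorial growth. The \(z^k\) coefficient of \(g(g(z))\) depends on \(d_k\) linearly with factor \(2\) (since \(d_1 = 1\)), which gives a simple recursion:

```python
from fractions import Fraction
from math import factorial

N = 18  # compute d_0 .. d_{N-1}

def mul(a, b, n):
    """Product of power series a, b, truncated to order n."""
    out = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[:n - i]):
                out[i + j] += ai * bj
    return out

def compose(a, b, n):
    """a(b(z)) truncated to order n, assuming b[0] == 0."""
    out = [Fraction(0)] * n
    bp = [Fraction(1)] + [Fraction(0)] * (n - 1)  # running power b^j
    for j in range(min(len(a), n)):
        if a[j]:
            for k in range(n):
                out[k] += a[j] * bp[k]
        bp = mul(bp, b, n)
    return out

# target series: e^z - 1 = sum_{k>=1} z^k / k!
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]

# solve g(g(z)) = e^z - 1 term by term: with g_k set to 0, the z^k
# coefficient of g(g(z)) is the "known" part; g_k contributes 2*g_k
g = [Fraction(0), Fraction(1)]
for k in range(2, N):
    g.append(Fraction(0))
    known = compose(g, g, k + 1)[k]
    g[k] = (f[k] - known) / 2

for k in range(2, N):
    if g[k]:
        print(k, float(g[k]), float(abs(g[k])) * factorial(k))
```

The first coefficients come out as \(d_2 = 1/4\), \(d_3 = 1/48\), and the third printed column \(|d_k|\,k!\) gives a rough feel for how far the growth is from plain \(k!\).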



Let me hunker down with some textbooks; I know I've seen this before.


Another thought I had is to just take the Euler route and start rearranging series.

Let's write:

\[
Bg(tz) = \sum_{k=0}^\infty g_k(\delta)\frac{(tz+\delta)^k}{k!}
\]

This is an entire function.

Then:

\[
\lim_{\delta \to 0}\int_0^\infty e^{-t}Bg(tz)\,dt = \sum_{k=0}^\infty g_k(0)z^k
\]

This should converge for a significant portion of \(\Re(z) < 0\), and would be an integral representation of the Abel function. EDIT: IT CONVERGES FOR \(\Re(z) < 0\)!

I'm not going to work this out in \(\epsilon/\delta\) though; let's just stick to Euler manipulations for this, lol.
#12
I'm sorry, I have no precise answers.

And what I say may already be well known.

But maybe the answer lies in these related links and their sublinks :

https://math.stackexchange.com/questions...e-of-sinhz

https://mathoverflow.net/questions/45608...6765#46765

Then again Gottfried and Sheldon were already there so this might not be so helpful.

But they talk about Gevrey classes and how it should be relevant.
Maybe that is key !?

***

I also want to make the ( annoying ? ) comment that this idea of asymptotics and divergent summation is weird here ... we know it is pretty close to linear ...

So I'm not really sure that idea is solid.

Also, divergent sums and asymptotics are usually ( ALWAYS ?! ) used for analytic continuation, BUT here we have radius 0 and even 2 distinct solutions depending on the direction we travel.

Also, analytic continuation, divergent summation etc. assume the function is equal to its Taylor series ...

But WHAT DOES EQUAL TO ITS TAYLOR SERIES MEAN FOR RADIUS 0 ??

Is that even a meaningful concept ??

Does a radius 0 Taylor series even describe/imply a unique function ? Or just a unique type of singularity ?

Most methods, proofs and operators from complex analysis assume an analytic space, so this is really an issue tool-wise and proof-wise.

***

What actually makes the derivatives of this half-iterate grow faster than the derivatives of half-iterates of other functions ??

Do the derivatives of the half-iterate of exp(x) - 1 grow faster than those of the half-iterate of sinh(x) ??
Why or why not ?

***

How are the derivatives approximating exp(x) - 1 ?

I mean 

the 0.9999 iterate of exp(x) - 1 should give derivatives closer to those of exp(x) - 1 ... yet the radius remains 0 ...

very confusing.

***

what happens when we compare the Taylor series of the 0.499 iterate with that of the 0.5001 iterate ?
Is the average that of the 0.5 iterate ??

Does the 0.6 iterate divided by the 0.4 iterate have a positive radius ??

still very confused.

***

I thought about associating it with a Taylor series that has similar growth but nonzero radius.

so if the half-iterate of exp(x) - 1 = sum t_n x^n

then study sum t_n u_n x^n ( possibly with nonzero radius ).

of course if we can prove t_n u_n with a given u_n has nonzero radius, we have found a bound on t_n.

I have a theorem that might help ... however it is of course actually meant for analytic functions, so I'm unsure if it applies here !!??!!

anyways here it is :

( and this can MAYBE be used to transform the given Taylor series into a related Taylor series with positive parameters, which kinda resembles fake function theory ideas, in particular if we also manage to get the radius to infinity. Which was the reason I came up with the idea/theorem in the first place, btw. )


Let f(x) = sum f_n x^n
Let g(x) = sum g_n x^n 

then 

T(x) = sum f_n g_n x^n  

2 pi T(x) = integral from 0 to 2pi of f( sqrt(x) exp(i t) ) g( sqrt(x) exp(- i t) ) dt

...

Making that integral work and be practical is another matter ; it's just a starting idea.  
Does the formula hold if f(x) or g(x) has radius 0, as x goes to an infinitesimal or 0 ? And if so, is that what it means to " equal its Taylor series, while having radius 0 " ??
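For convergent series at least, the integral formula above can be sanity-checked numerically (my own sketch; the example functions are hypothetical, chosen only because their Hadamard product has a closed form). Taking f(x) = 1/(1-x) (so f_n = 1) and g(x) = e^x (so g_n = 1/n!), the formula should reproduce sum x^n/n! = e^x:

```python
import cmath
import math

def hadamard(f, g, x, steps=4000):
    """(1/2pi) * integral_0^{2pi} f(sqrt(x) e^{it}) g(sqrt(x) e^{-it}) dt
    = sum f_n g_n x^n, evaluated by the midpoint rule (0 < x < 1)."""
    r = math.sqrt(x)
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        # the imaginary parts cancel in the average, so keep the real part
        total += (f(r * cmath.exp(1j * t)) * g(r * cmath.exp(-1j * t))).real * h
    return total / (2 * math.pi)

f = lambda z: 1 / (1 - z)   # coefficients f_n = 1
g = lambda z: cmath.exp(z)  # coefficients g_n = 1/n!
# Hadamard product: sum x^n / n! = e^x
print(hadamard(f, g, 0.25), math.exp(0.25))
```

Whether the formula survives the limit x -> 0 for a radius-0 series is exactly the open question in the post; this only confirms the convergent case.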


How about perturbation theory ?

should we study exp(x) - v 

where v goes to 1 ?

that gives us 2 fixpoints when we go from v = 2 to v = 1 ....


Now many say the radius is the problem, but the perturbation theory IMO suggests one could also say the radius is not the problem, but rather that the lack of a closed form or asymptotic for half-iterates is the issue.

studying half-iterates at the two fixpoints of exp(x) - v and taking the limit : might that not be easier ??


*** 

another idea is to use different series expansions for the half-iterate of exp(x) - 1.

such as 

sum a_n x^(b_n)

for real a_n and real b_n.

***

I also wonder about using the equation

define ( not trying to solve ) : g(g(x)) = exp(x) - 1.

try to solve 
g( exp(x) - 1 ) = exp(g(x)) - 1.

where one side plugs exp(x) - 1 into its Taylor series and the other takes the exp of the whole Taylor series.

***

Isn't this thing in the books on dynamics, divergent series or Taylor series ?
It seems such a crucial, intuitive question.

Then again, the IMO very intuitive idea of fake functions also did not turn up many related books.

I consider this a very important topic !!

kudos and greetings to all those into this question !

regards

tommy1729
#13
oh and of course there is also the following idea :

define g(g(x)) = exp(x) - 1.

solve in the usual way at the parabolic fixpoint ( from Julia )

now take the Taylor expansion at x = 1

so we arrive at

g(x) = sum a_n (x-1)^n

where the a_n are from the Taylor expansion at 1.

Now expand ( using the binomial theorem ) to get

g(x) = sum a_n (x-1)^n = sum b_n x^n

this might help to prove the bounds.
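As a sketch of the mechanics (my own toy example, with a convergent series rather than the half-iterate): rearranging sum a_k (x-1)^k into sum b_n x^n via the binomial theorem gives b_n = sum_{k>=n} a_k * C(k,n) * (-1)^(k-n). For e^x = sum_k (e/k!) (x-1)^k this should recover b_n = 1/n!:

```python
import math

def reexpand(a, n_out):
    """Rearrange sum a_k (x-1)^k into sum b_n x^n via the binomial theorem:
    b_n = sum_{k>=n} a_k * C(k, n) * (-1)^(k-n)."""
    kmax = len(a)
    return [sum(a[k] * math.comb(k, n) * (-1) ** (k - n) for k in range(n, kmax))
            for n in range(n_out)]

# e^x = sum_k e * (x-1)^k / k!  ->  expect b_n = 1/n!
a = [math.e / math.factorial(k) for k in range(40)]
b = reexpand(a, 6)
print([round(x, 6) for x in b])
```

For the half-iterate the a_n are only known at the edge of convergence, as noted below, so this only illustrates the rearrangement itself.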


Not 100 % sure this works though, since we are at the edge of convergence ...

A combination with the integral idea from above might be fruitful.


regards

tommy1729
#14
(08/06/2022, 09:56 PM)tommy1729 Wrote: (....)
How are the derivatives approximating exp(x) - 1 ?

I mean 

the 0.9999 iterate of exp(x) - 1 should give derivatives closer to those of exp(x) - 1 ... yet radius remains 0 ...

very confusing.

***

what happens when we compare the Taylor series of the 0.499 iterate with that of the 0.5001 iterate ?
Is the average that of the 0.5 iterate ??

Does the 0.6 iterate divided by the 0.4 iterate have a positive radius ??

still very confused.

***
Hi Tommy -

this has been the subject of some of my first ( ahem: most naive ;-) ) explorations at all.    
I have two types of views into it:

 a) First, simply compare the numerical values of the coefficients of some fractional iterates, up to index 64 or 128. It shows that for fractional iteration heights, the index \( k \) from which the coefficients begin to diverge (let's call it \( \kappa \)) moves to a small value when \( h \) moves from integer to half-integer height. It allows the surprising formulation that iteration with integer \( h \) simply moves \( \kappa \) to infinity, or \( \lim_{\text{frac} (h) \to 0} \kappa = \infty \) .      
Some tabulations of those coefficients are here: tabulation1 and tabulation2 .

b) Another view into the characteristic growth rate of the coefficients occurring with fractional iteration heights is given by the following plots (below). Here I show curves for those tendencies, which I thought might be even more suggestive and perhaps helpful for getting a good idea of how to formalize the growth rate depending on the distance of the fractional \( h \) from the integer values. See the plots at the end of this post.

Additionally - with this I thought I had even found a matrix transformation for the power series coefficients which produces a convergent series for the fractional iterate, when its argument \( x \) is also transformed accordingly (using the matrix of Stirling numbers of the 2nd kind). But I couldn't turn this "fix" into a really safe conjecture/separate statement/essay. So this is still open, and someone else might step in and see whether it is worth exploring further. (See the two plots from my essay at the end of this post; a couple of bases other than simply \( \exp(x)-1 \) are tested there as well.)

So while both viewpoints gave interesting insights, I made no further attempt (until 2016, with my growth rate conjecture cited in this thread) to formulate an upper-bound conjecture for the growth rates of those coefficients - I just observed that they might simply be "hypergeometric" (as Euler christened this) - and "at least" so: meaning I didn't know whether it was perhaps more.      
- - - - - -

Your keyword "Gevrey" (class) might give the key to further reading here. Will Jagy on MO/MSE brought this into the discussions sometimes, but my attempts to read/understand it had too little determination behind them to really learn. So this is still open (and likely shall stay so, for my part).   

However, I would really like it if my mentioned explorations could be "shoulders-of-dwarfs" ;-) on which some genius may take a stand... Let's see...

Gottfried


- - - - - - - -  - -- - - - -
      
Here is the attempt at a matrix transformation, to which I gave the provisional name "Stirling transformation". The power series seem to transform to something that has a nonzero radius of convergence:
   
Gottfried Helms, Kassel
#15
Tommy, you know I love you. And I respect your opinion all the time. But there is a small difference in how Gottfried and I are referring to the asymptotic series. I'd just like to highlight it for you, from what I gathered from your questions.

First of all, \(g(g(z)) = e^{z}-1\) defines a holomorphic function for \(\Re(z) < 0\). And equally, there exists a solution \(g^*\) for \(\Re(z) > 0\). The thing is, you can't make an asymptotic series for \(g^*\), but you can for \(g\). If you take \(g^{-1*}\), then you can make an asymptotic series to the right.

\(g\) is holomorphic in a half plane, and is unique if you submit yourself to Ecalle's construction. You can modify this solution, yes; but Milnor describes Ecalle's uniqueness very well. So let's take \(g\) unique, as Ecalle said.

Then Gottfried's series:

\[
\left|g(z)-\sum_{k=0}^N d_k z^k\right| < \vartheta |z|^{N+1}
\]

where the constant \(\vartheta\) can be chosen pointwise. This expansion is essentially what Euler did with the series:

\[
h(z) = \sum_{k=0}^\infty (-1)^k k! z^k
\]

And the function:

\[
h(z) = \int_0^\infty \frac{e^{-p}}{1+pz}\,dp
\]

So.... we are trying to show that \(d_k = O(k!)\), which Gottfried's numerical evidence suggests, and which is what the math seems to suggest. I'm just having trouble hammering a couple of the nails in, but I see in a moral sense why it's happening. I'm missing something, though.
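For what it's worth, Euler's example is easy to check numerically (my sketch): for the series \(\sum_{k\ge 0}(-1)^k k! z^k\), the Borel-Laplace integral \(\int_0^\infty e^{-p}/(1+pz)\,dp\) agrees with the optimally truncated partial sum to roughly \(e^{-1/z}\) accuracy:

```python
import math

def euler_integral(z, upper=50.0, steps=200000):
    """Midpoint-rule evaluation of the Borel sum  int_0^inf e^{-p}/(1+pz) dp  (z > 0)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        total += math.exp(-p) / (1.0 + p * z) * h
    return total

def partial_sum(z, n):
    """Truncation of the divergent series  sum (-1)^k k! z^k."""
    return sum((-1) ** k * math.factorial(k) * z ** k for k in range(n))

z = 0.1
# optimal truncation is near k = 1/z = 10
print(euler_integral(z), partial_sum(z, 10))
```

The analogous check for Gottfried's \(d_k\) is exactly what's missing: we'd need the integral representation from the previous post to converge.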

Happy to have you jump in here, Tommy

#16
Yeah Gottfried, at first sight, regarding the a_k terms I thought about Borel summation, and I was also interested in divergent sums.
I once also got stuck on summing up terms d_n = O(exp(exp(n))), which grows much faster than O(n!) and any O(n!^k). I studied O(exp(exp(n))) because of the infinite q-Pochhammer function \((q;q)_\infty\), which has nice expansions but whose \(O(q^{n^2})\) terms cannot be summed, or even analytically extended to \(|q|>1\). I was convinced that the contour integral method could sum divergent series; this may help, idk.
This may belong to the barren land of modern maths, but still I think it can guarantee a finite value as the sum. For example, I used it to extend the function
\(f(z)=\sum_{n\ge0}{z^{2^n}}\): firstly I used contour integration and it fits well, and it also worked outside \(|z|\le1\), and I also used the property \(f(z^2)=f(z)-z\) to discover it's multivalued. It might be a success?
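The functional equation can be checked directly inside the unit disk (my quick sketch): the partial sums of \(f(z)=\sum_{n\ge0} z^{2^n}\) converge double-exponentially fast for \(|z|<1\), and satisfy \(f(z^2)=f(z)-z\):

```python
def f(z, max_terms=60):
    """Partial sum of sum_{n>=0} z^(2^n); converges very fast for |z| < 1."""
    total = 0j
    p = z
    for _ in range(max_terms):
        total += p
        p = p * p  # z^(2^n) -> z^(2^(n+1))
        if abs(p) < 1e-18:
            break
    return total

z = 0.3 + 0.2j
print(abs(f(z * z) - (f(z) - z)))  # tiny: consistent with f(z^2) = f(z) - z
```

The interesting part is, of course, continuing past the natural boundary \(|z|=1\), which this check says nothing about.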
Regards, Leo
#17
(08/10/2022, 02:12 PM)Leo.W Wrote: Yeah Gottfried, at first sight, regarding the a_k terms I thought about Borel summation, and I was also interested in divergent sums.
I once also got stuck on summing up terms d_n = O(exp(exp(n))), which grows much faster than O(n!) and any O(n!^k). I studied O(exp(exp(n))) because of the infinite q-Pochhammer function \((q;q)_\infty\), which has nice expansions but whose \(O(q^{n^2})\) terms cannot be summed, or even analytically extended to \(|q|>1\). I was convinced that the contour integral method could sum divergent series; this may help, idk.
This may belong to the barren land of modern maths, but still I think it can guarantee a finite value as the sum. For example, I used it to extend the function
\(f(z)=\sum_{n\ge0}{z^{2^n}}\): firstly I used contour integration and it fits well, and it also worked outside \(|z|\le1\), and I also used the property \(f(z^2)=f(z)-z\) to discover it's multivalued. It might be a success?

Ironically, Leo, I wanted to approach this problem using Remmert's two-part textbook on complex analysis, which covers everything in analysis. There's a section on summing Taylor series on the boundary of the domain of holomorphy. And it describes (not Siegel disks, but they call them something else; they are like Siegel disks) how you can sum Taylor series on the boundary of the domain of holomorphy. As odd as it sounds, your problem will behave far better than Gottfried's. I can solve your problem using the textbook, but not Gottfried's.

Your addition of complexity actually satisfies a more regular structure. There is no divergent series! And it's easy to prove!

Gottfried's series is just on the boundary. And I mean it's the boundary of the boundary. We should at least make a guess of \(O(k!^N)\) for some \(N > 1\). But it's definitely \(1 < N < 1+\delta\), which can be better expressed as \(O(c^k k!)\) for this case in particular.


This is a very difficult problem, exactly because it depends on being a boundary problem in more ways than one. And I think we're almost there.

I should add, that Remmert has a section on summing the very function you mention: \(\sum_{n=0}^\infty z^{2^n}\).
#18
(08/10/2022, 02:12 PM)Leo.W Wrote: Yeah Gottfried, at first sight, regarding the a_k terms I thought about Borel summation, and I was also interested in divergent sums.
I once also got stuck on summing up terms d_n = O(exp(exp(n))), which grows much faster than O(n!) and any O(n!^k). I studied O(exp(exp(n))) because of the infinite q-Pochhammer function \((q;q)_\infty\), which has nice expansions but whose \(O(q^{n^2})\) terms cannot be summed, or even analytically extended to \(|q|>1\).
(...)

 Hi Leo -
  I liked being reminded of that function example, the q-Pochhammer - there was a question on, I think, MO (perhaps MSE) where this was discussed... it got many "reputation" points, if I remember correctly :-) .
But in general I don't really know what to say after reading your post; I should go into it again next week.
Perhaps I haven't made myself clear so far; I'll try again.    

I had two problems in the question:   

 - what is the growth rate of the coefficients     

 - if it is more than hypergeometric, then how to sum that divergent series.   

But after my guess/conjecture that the limiting function for the coefficients \( c_k \) is \( O( (k-3)!/(4 \pi)^k ) \) (which is even smaller than the \( O(k!) \) growth that Borel summation handles, and thus summability is no longer in question), the only open question is: how to show what its growth rate really is?        

I might (using some hours of CPU time) find the next 1024 coefficients, but this would not remove the duty to find some analytical argument for an upper bound on the coefficient \( c_k \) for any given \( k \).
I could not yet find any argument for that growth rate (which surely must be derived from the computation scheme of the coefficients) so far...   

But thanks for taking some time for this (and please excuse me if I missed something in your post)! -

Gottfried
Gottfried Helms, Kassel
#19
(08/11/2022, 03:28 AM)JmsNxn Wrote:
(08/10/2022, 02:12 PM)Leo.W Wrote: ...

Ironically, Leo. I wanted to approach this problem using work from Remmert's two part textbooks on complex ....
I should add, that Remmert has a section on summing the very function you mention: \(\sum_{n=0}^\infty z^{2^n}\).

Not to judge, but any sum with terms of \(O(n!^k)\) can converge after at most (k+1)-fold Borel summation, bro, so it's not a big deal.
This is how we deal with generalized hypergeometric functions: for example, \(f(z)=\sum_{n=0}^\infty{n!^2z^n}\) can converge after 2 Borel summation procedures.
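A rough numerical illustration of the k = 2 case (my own sketch, not from the post): dividing the terms of \(\sum n!^2 z^n\) by \(n!\) twice leaves the geometric series, and undoing both Borel transforms with Laplace integrals gives \(\int_0^\infty\!\!\int_0^\infty \frac{e^{-s-t}}{1-stz}\,ds\,dt\), which for \(z<0\) is a proper convergent integral and matches the optimally truncated series:

```python
import math

def series_trunc(z, nmax):
    """Partial sum of the divergent series sum n!^2 z^n."""
    return sum(math.factorial(n) ** 2 * z ** n for n in range(nmax))

def double_laplace(z, upper=30.0, steps=600):
    """Midpoint rule for the double Laplace integral
    int int e^{-s-t} / (1 - s t z) ds dt  (denominator never vanishes for z < 0)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        es = math.exp(-s) * h
        for j in range(steps):
            t = (j + 0.5) * h
            total += es * math.exp(-t) * h / (1.0 - s * t * z)
    return total

z = -0.01
print(series_trunc(z, 10), double_laplace(z))
```

The two values agree to within the quadrature error and the size of the smallest series term, which is the usual accuracy limit of an optimally truncated divergent series.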
About the function \(\sum_{n=0}^\infty z^{2^n}\): it's Hadamard's problem, and such functions are called lacunary functions. I mentioned it because I thought it'd be a great example of a damn fast growing function, because \(O(a^{2^n})=O(e^{2^n\ln a})\gg O(e^{kn\ln n})>O(n^{kn})>O(n!^k)\) for any k, since \(O(2^n)\gg O(kn\ln n)\).
And then \(O(a^{2^n})\) is not Borel summable even after k times, but it is still summable by contour integrals with the residue theorem. And hence we can sum \(O({}^{n}10)\) like \(1-10+10^{10}-10^{10^{10}}+\cdots\)
My focus was on the summation, and at least for now I have no idea about the coefficients.
Regards, Leo
#20
If I recall correctly, convergence on the boundary ( in general ) can be an undecidable problem, even if we know bounds on the Taylor coefficients.

That would be related to number theory or analytic number theory, usually ... or always ?

forgot the details ...

just a thought.


regards

tommy1729

