Borel summation
#1
So, guys, here you have to help me a bit.
I am a complete newbie to divergent summation and integral transforms of any kind.
So I was just reading a bit on Wikipedia and tried to apply the Borel summation on the half-iterate of \(e^x-1\).
I then have the half-iterate series \(h\), and Wikipedia says to take
\[ \int_0^\infty e^{-t} \sum_{n=0}^\infty \frac{h_n}{n!} (tx)^n dt\]

This would make sense if the truncated integrand \( e^{-t} \sum_{n=0}^N \frac{h_n}{n!} (tx)^n \to 0\) for \(t\to \infty\).
But this does not happen: from a certain point on, the function shoots rapidly to \(\infty\), and the higher I choose N, the earlier this happens!
       
I also read that there might be a singularity, in which case one just uses the analytic continuation along the real axis ....
But numerically continuing an analytic function is quite hard to do and makes little sense here.
However if I just integrate to the lowest point, I also get reasonable results:
\[h(x)=\int_0^{13.5/x} e^{-t} \sum_{n=0}^{200} \frac{h_n}{n!} (tx)^n dt\]
The error \(\left|h(h(x))-(e^x-1)\right|\) stays within about \(10^{-3}\) on \((-0.5,0.5)\).

But is that how you do Borel summation???
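The "integrate only up to the lowest point" trick above can be sanity-checked on a toy model where the exact Borel sum is known. Below is a minimal Python sketch (my own construction, not the Pari/GP computation from this thread) using Euler's divergent series \(\sum_n (-1)^n n!\, x^n\): its Borel transform \(\sum_n (-1)^n (tx)^n\) continues analytically to \(1/(1+tx)\), so the exact Borel sum is available for comparison. All function names and the cutoff choice are assumptions for the illustration.

```python
import math

# Toy model of the experiment above: Euler's divergent series
# sum_{n>=0} (-1)^n n! x^n.  Its Borel transform sum (-1)^n (tx)^n
# continues analytically to 1/(1+tx), so the "true" Borel sum is
# integral_0^infty e^{-t}/(1+tx) dt.

def simpson(f, a, b, m=2000):
    """Composite Simpson rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def borel_exact(x, cutoff=60.0):
    # integrate against the analytic continuation of the transform
    return simpson(lambda t: math.exp(-t) / (1 + t * x), 0.0, cutoff)

def borel_truncated(x, N=40, c=0.9):
    # integrate the truncated transform only "up to the lowest point",
    # here t = c/x, just before the truncated polynomial blows up
    poly = lambda t: sum((-1) ** n * (t * x) ** n for n in range(N + 1))
    return simpson(lambda t: math.exp(-t) * poly(t), 0.0, c / x)

print(borel_exact(0.1), borel_truncated(0.1))  # both near 0.9156
```

For small \(x\) the truncated-and-cut-off integral reproduces the exact value to a few units in the fourth decimal, which matches the \(10^{-3}\)-level agreement reported above for the half-iterate.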
#2
K. Knopp explains this with the idea of comparing the transformed series to be summed against the transformed sequence without the series coefficients.

So he says: first, your original series might be notated by \( \sum_{k=0}^\infty a_k \).
With this, you define the sequence of partial sums by \( s_k = \sum_{j=0}^k a_j \) .  
Many of the classical summation procedures work on the basic concept of averaging the partial sums \( s_k \) (the simplest ones being Hölder and Cesàro summation).
The Borel summation can then be understood as a transformation of the \( s_k \) values and an "averaging" of these partial sums by transforming and comparing:
\[ \mathcal{B}\, s = \lim_{x \to \infty}  { \sum_{k=0}^\infty { x^k \cdot s_k \over k! }\over \sum_{k=0}^\infty { x^k  \over k! } } \]
If the values \( a_k \) have factorial growth rate, say \( |a_k| \sim k! \), then the \( s_k \) have this growth rate as well. In the numerator we take the transformed \( s_k \), which means they get divided by \( k! \), so the series in the numerator now has merely geometric growth. But the partial sums \( s_k \) are not simple factorials, so the resulting transformed series is not exactly geometric and thus cannot be replaced by a known finite value. So here we are at a dead end.

But Knopp then explains that the Borel transformation need not only transform by division by \( k! \); we may introduce a further factor, the "order" of the Borel summation, say \( r \).

Then the transformation looks like
\[ \mathcal{B}_r\, s = \lim_{x \to \infty}  { \sum_{k=0}^\infty { x^{rk} \cdot s_k \over (rk)! }\over \sum_{k=0}^\infty { x^{rk}  \over (rk)! } } \]
and with \(r=2\) the sums in the numerator and the denominator both become convergent for any \( x \), can be evaluated, and the numerator sum is then "averaged" by the denominator sum.

Practically, using software, we deal with truncated series. But the Borel sum requires that the coefficient \( x^k \) or \( x^{rk} \) is present in the terms of the series, and we know that for large \( x \) the exponential series increases to huge values before the influence of the denominator begins to diminish and suppress that \(x^k\) term. So we cannot really compute with arbitrarily large \( x \) (which would be required), but only with such \( x \) that our finitely long exponential series still dominates the \(x^k \) or \( x^{rk}\) values. So the implementation of this version of the Borel sum is never really perfect.

For instance, to sum the half-iterate of \( \exp(z)-1 \) I needed order \( r=2 \), but could not use \( x \gt 70 \), say, because I had only 256 or 512 transformed terms. With 1024 terms, one could insert \( x \) around 80 or 100 and thus achieve more digits of precision.

- - - -

P.s.: Practically we need only the proof of concept. Once we know that Borel summation can manage the current growth rate of the \( a_k \), or more precisely that of the \( s_k \), we of course have the possibility not to use \( z=1 \) but, say, the 40th iteration towards 0 by negative height, getting \( z_{-40} \lt 1e-10 \); then use the partial sums \( s_k = \sum_{j=0}^k z_{-40}^j \cdot a_j \) and get nearly exact values for the Borel sum, because \( x \) can now go much farther towards \( \infty \).

I attach chapter 13 of the Knopp book (from which I got this), in its German original, as a PDF:
.pdf   Knopp UnendlicheReihen Kap13.pdf (Size: 8.68 MB / Downloads: 537). The Borel method paraphrased here is at p. 488.

P.P.s. I have Pari/GP code for testing this Borel method, but I think it is too dense to present here at the moment. I'll see whether I can make it more readable and then show it here if needed.
Gottfried Helms, Kassel
#3
Thank you Gottfried, for giving me some introduction to divergent summation!
(08/28/2022, 09:26 PM)Gottfried Wrote: Practically, using software, we deal with truncated series. But the Borel sum requires that the coefficient \( x^k \) or \( x^{rk} \) is present in the terms of the series, and we know that for large \( x \) the exponential series increases to huge values before the influence of the denominator begins to diminish and suppress that \(x^k\) term. So we cannot really compute with arbitrarily large \( x \) (which would be required), but only with such \( x \) that our finitely long exponential series still dominates the \(x^k \) or \( x^{rk}\) values. So the implementation of this version of the Borel sum is never really perfect

For instance, to sum the half-iterate of \( \exp(z)-1 \) I needed order \( r=2 \), but could not use \( x \gt 70 \), say, because I had only 256 or 512 transformed terms. With 1024 terms, one could insert \( x \) around 80 or 100 and thus achieve more digits of precision.
I think what you describe is called "weak Borel summation", and it seems to have the same issue as the "strong" integral version: you need to check how big your x or t should grow.

(08/28/2022, 09:26 PM)Gottfried Wrote: P.s.: Practically we need only the proof of concept. Once we know that Borel summation can manage the current growth rate of the \( a_k \), or more precisely that of the \( s_k \), we of course have the possibility not to use \( z=1 \) but, say, the 40th iteration towards 0 by negative height, getting \( z_{-40} \lt 1e-10 \); then use the partial sums \( s_k = \sum_{j=0}^k z_{-40}^j \cdot a_j \) and get nearly exact values for the Borel sum, because \( x \) can now go much farther towards \( \infty \).

But with this approach we might not need Borel summation at all: we can just take a truncation of the divergent series - say \(h_N(x)\) - and set
\begin{align}
h_{\leftarrow}(x)   &= \lim_{n\to\infty} (f^{\circ -n}(h_N(f^{\circ n}(x))))\\
h_{\rightarrow}(x) &= \lim_{n\to\infty} (f^{\circ n}(h_N(f^{\circ -n}(x))))
\end{align}
where \(f(x)=e^x-1\).
I am not completely sure how big N needs to be, but practically you get high precision with N=20 (I guess N=2 would be the minimum).
It is very similar to the hyperbolic half-iterate, where you have the formula
\[ \lim_{n\to\infty} (f^{\circ -n}(f'(0)^{\frac{1}{2}}(f^{\circ n}(x)))) \]
where, so to speak, only the first coefficient of the formal power series of the half-iterate is taken - but which you could improve by taking more coefficients.
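The two steps described above - solving \(h(h(x))=f(x)\) order by order for the formal half-iterate, then pushing the truncated series through the conjugation limit - can be sketched in Python. This is my own minimal sketch, not code from the thread; the truncation order N = 20 and iteration count n = 200 are ad-hoc choices.

```python
import math

N = 20   # truncation order of the formal half-iterate series
n = 200  # number of conjugation steps toward / away from 0

def mul(p, q):
    """Product of two power series truncated at x^N."""
    r = [0.0] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j <= N:
                    r[i + j] += pi * qj
    return r

def compose(a, b):
    """a(b(x)) truncated at x^N; b must have zero constant term."""
    c = [0.0] * (N + 1)
    p = [1.0] + [0.0] * N              # running power b^m
    for m in range(N + 1):
        for i in range(N + 1):
            c[i] += a[m] * p[i]
        p = mul(p, b)
    return c

# f(x) = e^x - 1 as a power series: coefficients 1/k!
f = [0.0] + [1.0 / math.factorial(k) for k in range(1, N + 1)]

# solve h(h(x)) = f(x) order by order for the formal half-iterate:
# the x^k coefficient of h(h(x)) is 2 h_k plus lower-order data
h = [0.0, 1.0] + [0.0] * (N - 1)
for k in range(2, N + 1):
    h[k] = (f[k] - compose(h, h)[k]) / 2.0

def h_reg(x):
    """Half-iterate via  f^n ( h_N ( f^{-n}(x) ) )  for x > 0."""
    for _ in range(n):
        x = math.log1p(x)                         # f^{-1}(x) = log(1+x)
    x = sum(h[k] * x ** k for k in range(N + 1))  # truncated series h_N
    for _ in range(n):
        x = math.expm1(x)                         # f(x) = e^x - 1
    return x

x = 0.5
print(abs(h_reg(h_reg(x)) - math.expm1(x)))  # tiny: h is a half-iterate
```

The first computed coefficients come out as \(h = x + \frac{x^2}{4} + \frac{x^3}{48} + 0\cdot x^4 - \dots\), and the conjugated composite satisfies \(h(h(x)) \approx e^x-1\) to near machine precision, which supports the claim that modest N already gives high precision once the iterations do the work.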
#4
Oops, crossed your previous posting. I just wanted to show an example of applying the Borel summation, compared with the result of my quick-and-dirty method. I'll come to your post later...

.pdf   BorelNoerlundHalfiteratedxp_x_.pdf (Size: 90.23 KB / Downloads: 581)
Gottfried Helms, Kassel
#5
(08/29/2022, 07:51 AM)bo198214 Wrote: But with this approach we might not need Borel summation at all, we can just take a truncation of the divergent series - say \(h_N(x)\) - and set (...)

Well, something like this has been the reason for my not continuing with the analysis/application of Borel summation at all: practically it is no better than my handy matrix-summation method, ready-made for use in Pari/GP.

But I always kept the reservation that one was only assuming the sufficiency of the Borel method, and was awaiting the working-out of a proof of concept (which we seem to have now through James and his literature dig).
Thus no sophisticated elaborations with tables & plots from my side...

Gottfried
Gottfried Helms, Kassel
#6
(08/29/2022, 08:35 AM)Gottfried Wrote: practically it is no better than my handy matrix-summation method, ready-made for use in Pari/GP.

What is that method? Does it also compute regular iteration?
#7
(08/29/2022, 09:51 AM)bo198214 Wrote:
(08/29/2022, 08:35 AM)Gottfried Wrote: practically it is no better than my handy matrix-summation method, ready-made for use in Pari/GP.

What is that method? Does it also compute regular iteration?

No. It is just a (parametrizable) summation matrix, say \( M( order, dim=128 ) \), which via  M( 1.4 ,128 )  * A  performs summation (in the Noerlund style) of the fixed sequence in the cells of a column vector \( A \), taking the first 128 terms and showing the first 128 partial sums (of the transformed sequence in \(A \)).

If \( A \) contains geometric growth, I take the matrix M for Euler summation; if the growth is factorial, I use the matrix M for Noerlund summation (with the parameter "order" adapted case by case), and so on...
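The "summation matrix times column vector of partial sums" workflow can be illustrated with a generic (E,q) Euler-summation matrix in Python. This is a sketch of the same idea, not Gottfried's actual ESum()/PkPowSum() routines; it is applied to the divergent geometric series \(\sum_k (-2)^k\), whose Euler sum is \(1/(1-(-2)) = 1/3\):

```python
from math import comb

def euler_matrix(q, dim):
    """Lower-triangular (E,q) Euler-summation matrix: row n carries the
    weights C(n,k) q^{n-k} / (1+q)^n that map the partial sums
    s_0..s_n to the Euler mean t_n."""
    return [[comb(n, k) * q ** (n - k) / (1 + q) ** n
             for k in range(n + 1)]
            for n in range(dim)]

# divergent geometric series sum (-2)^k; Euler sum is 1/(1-(-2)) = 1/3
dim = 32
a = [(-2) ** k for k in range(dim)]
s = [sum(a[:k + 1]) for k in range(dim)]                  # partial sums
M = euler_matrix(1.0, dim)                                # (E,1) means
t = [sum(M[n][k] * s[k] for k in range(n + 1)) for n in range(dim)]
print(t[0], t[-1])  # raw s_0 = 1, transformed t_31 ≈ 1/3
```

The wildly oscillating partial sums \(s_k\) are mapped to Euler means \(t_n\) that converge geometrically to 1/3, which is exactly the "protocol of transformed partial sums" shape shown in the `Code:` blocks below.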

I have no such matrix M for the Borel summation, because then another parameter (the "x" from the description above) must be taken into account too ... too much fiddling ...

Example: my Pari/GP command  PkPowSum( 1.15, 1.0, 64) * Mat( G[1..64] )  uses the first 64 coefficients of the power series of \( g(z) \), which are stored in the column vector \( G \). PkPowSum() gives the transformation/summation matrix for the Noerlund summation of order 1.15 (which should capture sequences in G with at most factorial growth), and in a second window the leading 64 partial sums appear (which I can copy & paste to the clipboard for documentation):

Code:
                      0
  0.50000000000000000000
  0.80142587824806072141
  0.98432556940817345050
  1.0957027560716282926
  1.1636752922666828048
  1.2052223288847365712
  1.2306484763014693961
  1.2462257520559636128
  1.2557788388344492333
  1.2616432434973325406
  1.2652467543462965175
  1.2674631429332411599
(...)
  1.2710274138627236401
  1.2710274138726573085
  1.2710274138789613347
  1.2710274138829668630
  1.2710274138855097373
  1.2710274138871189746
  1.2710274138881450066
  1.2710274138888130869
  1.2710274138892329362
  1.2710274138894639965
  1.2710274138896221822
  1.2710274138898144174

Using the functional equation with \( z_{-40} = f^{\circ -40}(1) \approx 0.048759544221234150945 \),
I can then even use the simpler Euler-sum matrix with the small order 0.25:  partsums = ESum(0.25,64) * dV(z_40,64) * Mat( G[1..64] ) , getting
Code:
                        0
  0.039007635376987320756
  0.047189561356810783097
  0.048903262872140813102
  0.049261713746775914190
  0.049336595521074853097
  0.049352220101960622416
  0.049355476647235224581
  0.049356154679576723518
(...)
  0.049356332694373519259
  0.049356332694373519273
  0.049356332694373519276
  0.049356332694373519276
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
  0.049356332694373519277
and the retransformation, iterating each of these partial sums 40 times forward by  Mat( vectorv ( 64,r,  it ( partsums [r,1] , 40 ))) , then gives the approximation protocol
Code:
                0.E-808
  0.17007875242702049417
  0.62004003166073784848
  1.0553493687949762372
  1.2198838762606499392
  1.2600508274304994244
  1.2687263993243629131
  1.2705478466365730441
  1.2709276617668611349
  1.2710066884474990569
(...)
  1.2710274138899515195
  1.2710274138899515210
  1.2710274138899515213
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214
  1.2710274138899515214

- - - -
Just to see "the workflow":

[image: workflow diagram]
Gottfried Helms, Kassel
#8
(08/29/2022, 10:31 AM)Gottfried Wrote: No. It is just a (parametrizable) summation matrix,
But in the end it is regular iteration, I think.

(08/29/2022, 10:31 AM)Gottfried Wrote: I have no such matrix M for the Borel summation, because then another parameter (the "x" from the description above) must be taken into account too ... too much fiddling ...
Yeah, exactly. For the integral method I numerically searched for the minimum of the corresponding function. But even there you have to give a good initial guess for the interval in which to find that minimum.

(08/29/2022, 10:31 AM)Gottfried Wrote: Using the functional equation with \( z_{-40} = f^{\circ -40}(1) \approx 0.048759544221234150945 \),
I can then even use the simpler Euler-sum matrix with the small order 0.25:  partsums = ESum(0.25,64) * dV(z_40,64) * Mat( G[1..64] )
and the retransformation, iterating each of these partial sums 40 times forward by  Mat( vectorv ( 64,r,  it ( partsums [r,1] , 40 )))

But isn't this essentially \(f^{\circ 40}(h_{64}(f^{\circ -40}(x)))\), except that you Euler-summed \(h_{64}\) instead of just truncating the power series? This is what I meant - you actually don't need to care much about the power series; you can get arbitrary precision via the iterations, i.e. just increase the 40, instead of making efforts with the summation of the 64.
#9
(08/29/2022, 05:19 PM)bo198214 Wrote: But isn't this essentially \(f^{\circ 40}(h_{64}(f^{\circ -40}(x)))\), except that you Euler-summed \(h_{64}\) instead of just truncating the power series? This is what I meant - you actually don't need to care much about the power series; you can get arbitrary precision via the iterations, i.e. just increase the 40, instead of making efforts with the summation of the 64.

Yes, exactly. I had nothing better to contribute than this (which was already current practice in the forum), and I actually programmed my little tetration modules in Pari/GP this way, like probably everyone else... (either using the Schroeder matrices/functions, or what I called the "polynomial method", which simply used the diagonalization of truncated Carleman matrices - otherwise remaining in admiration of Sheldon's tool!).

There had been no analysis of the growth rate of the coefficients in the power series for fractional iterations: maybe they grow like \( c^{2^n} \) -- OMG -- then their use as an asymptotic series would have been misleading in the end! (See the related discussion on MO https://mathoverflow.net/questions/19866...871#198871 , with an example estimate/computation by Robert Israel and the impossibility of assigning a finite value to the series, and a follow-up by me: https://mathoverflow.net/q/201098/7710 .)

Now we know the series can be summed by Borel summation; moreover, we find that the simple asymptotic evaluation produces the same - arbitrarily approximable - values. So that is what I was missing all the time.

I guess the MO question ( https://mathoverflow.net/q/4347 ) put together by Gil Kalai, and the discussion following it, had left this point open in some way, or was only more-or-less explicit in this regard. Perhaps it now deserves a new bullet point ... an additional answer composed by James ...

Gottfried
Gottfried Helms, Kassel
#10
Hey, Bo!

This is totally my domain. I can't work through all of this today, but I was planning on making a thread specifically about Borel summation of these parabolic iterations. I've just been absorbing this asymptotic expansion and trying to figure it out.

But to begin;

If you have \(b_k = O(c^k k!)\), you don't just perform the standard way of Borel summing, so Wikipedia won't help you. You have to be more clever. True "Borel summing" is very sensitive to how you set up your integrals and how you set up your sequences. There's no one-size-fits-all manner. "Borel sum" is more usually used to express that they are all analytic continuations of each other - not necessarily that you just plug into the integral.

I'm not surprised your initial attempt failed, because Euler-type (Gevrey class 1) sequences do not converge in the manner you posted when the coefficients are \(k!\). Again, we have to be much more clever, and largely it equates to a bunch of esoteric changes of variables.

I will post a much better answer in the coming days. I don't want to tonight, because I'm still making sure I'm doing everything right. But, I will leave you with this tidbit.

If \(b_k = O(c^k k!)\)

Then:

\[
F(x) = \sum_{k=0}^\infty b_k\frac{x^{2k}}{(2k)!}
\]

is an entire function, bounded by \(e^{b|x|}\)--or something close--for \(b<1\).

From here, you take the modified Laplace transform**:

\begin{align}
\int_0^\infty e^{-x} F(\sqrt{t}x)\,dx &= \sum_{k=0}^\infty b_k t^k\\
&= t^{-1/2}\int_0^\infty e^{-ut^{-1/2}} F(u)\,du
\end{align}

which is just a term-wise evaluation of the integral***. This object converges on a specific domain in \(t\)--depending on where the asymptotic expansion works. Just be careful with \(t\).  But this definitely, absolutely, no doubt about it, converges.  Big Grin

**Converges because \(|F(x)| < e^{b|x|}\) for \(b \approx 0\)--of which, we only need \(b<1\).

***
\[
\begin{align}
\int_0^\infty e^{-x} F(\sqrt{t}x)\,dx &= \int_0^\infty e^{-x}\sum_{k=0}^\infty b_k\frac{t^kx^{2k}}{(2k)!}\,dx\\
&=\sum_{k=0}^\infty b_k\frac{t^k}{(2k)!}\int_0^\infty e^{-x}x^{2k}\,dx\\
&= \sum_{k=0}^\infty b_k\frac{t^k}{(2k)!}(2k)!\\
&=\sum_{k=0}^\infty b_k t^k
\end{align}
\]
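The derivation above can be checked numerically. The sketch below is my own construction with ad-hoc names: it takes the toy Gevrey-1 sequence \(b_k = (-1)^k k!\), builds the entire function \(F\) from its series in high-precision decimals (the terms grow to roughly \(e^{x^2/4}\) before cancelling, so machine floats would be destroyed), and compares the modified Laplace transform at \(t=1\) against the classical Borel sum \(\int_0^\infty e^{-t}/(1+t)\,dt\):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 80  # the series for F cancels from ~1e43 down to O(1)

def F(x, terms=400):
    """Order-2 Borel transform of b_k = (-1)^k k!:
       F(x) = sum_k (-1)^k k! x^{2k} / (2k)!   (entire in x)."""
    xd = Decimal(repr(x))
    x2 = xd * xd
    total, term = Decimal(0), Decimal(1)     # term for k = 0
    for k in range(terms):
        total += term
        # ratio term_{k+1}/term_k = -(k+1) x^2 / ((2k+1)(2k+2))
        term = -term * (k + 1) * x2 / ((2 * k + 1) * (2 * k + 2))
    return float(total)

def simpson(f, a, b, m):
    """Composite Simpson rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# modified Laplace transform at t = 1 (F stays bounded, so the
# e^{-x} factor makes the tail beyond x = 20 negligible)
modified = simpson(lambda x: math.exp(-x) * F(x), 0.0, 20.0, 400)
# classical Borel sum of sum (-1)^k k!: integral of e^{-t}/(1+t)
classical = simpson(lambda t: math.exp(-t) / (1 + t), 0.0, 40.0, 800)
print(modified, classical)  # both ≈ 0.596347
```

The two numbers agree, which is exactly the point of the change of variables: the order-2 transform turns a divergent Gevrey-1 series into an absolutely convergent integral that reproduces the usual Borel value.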


This stuff can get really tricky, bo. Especially when you want to expand in series: you have to remember all that Lebesgue stuff, all those interchange-of-series theorems. And additionally, it doesn't hurt if you remember your Fourier analysis lessons, lol.

I want to make a long thread sort of detailing what I've learned so far, and I just need a bit of time Smile  But I'll be able to make those graphs not diverge; just let me get my notes in order.

This is super interesting to me because "fractional calculus" + "parabolic iteration" has always defeated all my attempts to crack it, and these past few weeks of results on parabolic iteration have opened my eyes as to how to approach it. Cool


You guys might not notice it, but we've artfully used that \((2k)!\) dominates \(k!^2\), and we've done two Borel sums by just modifying one variable, whereupon we just use the Mellin transform correspondence. \(\Gamma(z)^2\) looks a lot like \(\Gamma(2z)\), and you can relate these things exactly. That was a lot of Gauss' schtick--at least, anything to do with \(\Gamma\). #EulerDidItFirst

I have 2 weeks of free time, no trouble; I am going to explain this much better. Trust me when I say: D. Zagier and his paper solve all our problems  Big Grin