Searching for an asymptotic to exp[0.5]
#1
I'm searching for an asymptotic to exp^[0.5](x).

This is the continuation of the thread http://math.eretrandre.org/tetrationforu...hp?tid=854

where Sheldon and I have gone off topic from the OP.

Thus we look for a function with "growth" = 0.5.

Or at least growth between 0 and 1.

Notice that f(x) = exp^[m](a ln^[m](x)), whose n-th iterate is f^[n](x) = exp^[m](a^n ln^[m](x)),

is not a solution for any m, a:
they all have growth 0.
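A quick sanity check of that claim (a rough sketch, assuming a > 1 and using slog(exp^[m](y)) = m + slog(y)):

\( \text{slog}(f^{[n]}(x_0)) = m + \text{slog}\big(a^n \ln^{[m]}(x_0)\big) = m + 1 + \text{slog}\big(n\ln a + \ln(\ln^{[m]}(x_0))\big), \)

and slog of a quantity that is only linear in n grows like slog(n), i.e. extremely slowly, so slog(f^[n](x_0))/n goes to 0.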

2 candidates remain:

f1(z) = sum z^n/(n^2)!

f2(z) = ln(z)^e^ln^[3](z)^e^ln^[5](z)^e^...

Perhaps we should investigate those?

regards

tommy1729
#2
(05/07/2014, 12:22 PM)tommy1729 Wrote: I'm searching for an asymptotic to exp^[0.5](x).

This problem will probably keep me occupied for a long time. I need to work on tools for figuring out the growth from the Taylor series, where growth is the limit of slog(f^n(x0))/n (I have some ideas).
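As a minimal numerical sketch of that growth definition (an illustration only, not the actual tools referred to above): mpmath is used so that huge intermediate values don't overflow, and the crude slog's fractional part is just a linear stand-in.

Code:
from mpmath import mp, mpf, ln

mp.dps = 30

def slog_crude(x):
    # Very crude base-e super-logarithm: count natural logs until the value
    # drops below 1; the leftover in [0,1) serves as a rough fractional part.
    x = mpf(x)
    n = 0
    while x >= 1:
        x = ln(x)
        n += 1
    return n + x

def growth_estimate(f, x0=2, iterations=30, cap=mpf('1e10000')):
    # Rough estimate of growth = lim slog(f^[n](x0)) / n.  Subtracting slog(x0)
    # does not change the limit, but it speeds up convergence for small n.
    # Pick x0 in a region where f actually grows.
    x = mpf(x0)
    for n in range(1, iterations + 1):
        x = f(x)
        if x > cap:      # stop before the numbers become unmanageable
            break
    return (slog_crude(x) - slog_crude(x0)) / n

For exp this returns exactly 1 after a few iterations, and for a genuine half iterate roughly 0.5; but it converges slowly for borderline cases such as (1/2)exp(sqrt(x)), whose per-step slog increments only approach 1 in the limit, so with a small iteration budget it can understate the true growth.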

I'm also thinking about what the Taylor series for an entire pseudo half iterate might look like, and what bounds can be put on the Taylor series coefficients. The pseudo half iterate of exp should converge with far fewer Taylor series terms than exp(z): half(10000) ~= 1E22, which is only 10000^5.5. Anyway, I expect this problem will keep me occupied for a long time, but that also means it might take a while to make any real progress...

edit: Empirical testing suggests that an "entire" pseudo half iterate is very likely possible, with all positive Taylor series coefficients at z=0 and a probable growth value of 0.5, as defined by the "growth" equation. I can post the empirical results for the first 100 derivatives of such a conjectured asymptotic solution later. Each derivative is bounded to a maximum value by a particular value of half(z). I can post more later; still working on how to formalize the definition of the conjectured Taylor series.
- Sheldon
#3
Perhaps this is trivial to Sheldon, but here are some more comments.

Let p(x) be a polynomial.
The growth of f(x) and the growth of f(x) + p(x) are equal.

This could be interesting.

So the half-iterate of exp(x) and exp(x)+1 have the same growth rate.

So growth works a bit like the concept of convergence of a series:
only the "tail" of the expression matters,
just as the convergence of a0 + a1 + a2 + a3 + ... is the same as that of a2 + a3 + ... .

Hence when trying to find an asymptotic to exp^[0.5] we could just as well use exp(x) - 1 + x = 0 + 2x + x^2/2! + x^3/3! + ... .

This implies that we can play with the fixpoints and dummy variables.

So we can investigate C0 + C1 x + C2 x^2 + x^3/3! + x^4/4! + ...

I only see advantages for this.

However, one disadvantage I see is that this suggests that trying to find the half-iterate of exp(x) by taking the half-iterate of a truncated Taylor series might not be such a good idea for studying the half-iterate at large x (it works fine for small x under some conditions).
That is a bit off topic here, but worth mentioning imho.

A further implication is that we can arrive at useful results if we set f(0) = 0.

I mean C0 + C1 x + C2 x^2 + x^3/3! + x^4/4! + ... with C0 = 0 and C1 > 1.

This allows us to use tools like Carleman matrices, for example.

Although C1, C2 appear as variables, two variables (or finitely many) are quite easy to handle; see the sketch below.
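For illustration only, here is a rough sketch (not a serious numerical method) of that idea: build a truncated Carleman matrix for an f with f(0) = 0 and f'(0) > 1, take a matrix square root, and read off candidate half-iterate coefficients. The example function exp(x) - 1 + x and the numpy/scipy calls are choices made here for the sketch, not anything from this thread:

Code:
import math
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.linalg import sqrtm

def carleman(coeffs, size):
    # Truncated Carleman matrix: row i holds the Taylor coefficients
    # (degrees 0..size-1) of f(x)^i, where f(x) = sum_k coeffs[k] x^k.
    f = np.zeros(size)
    f[:min(size, len(coeffs))] = coeffs[:size]
    M = np.zeros((size, size))
    row = np.zeros(size)
    row[0] = 1.0                     # f(x)^0 = 1
    M[0] = row
    for i in range(1, size):
        row = P.polymul(row, f)[:size]
        row = np.pad(row, (0, size - len(row)))
        M[i] = row
    return M

# stand-in example with a fixpoint at 0: f(x) = exp(x) - 1 + x = 2x + x^2/2! + ...
coeffs = [0.0, 2.0] + [1.0 / math.factorial(k) for k in range(2, 16)]
M = carleman(coeffs, 16)
H = sqrtm(M)                         # Carleman matrix of a formal half iterate
half_coeffs = np.real(H[1])          # row 1 = Taylor coefficients of the half iterate

Since the matrix is triangular with diagonal 2^i, the square root is well behaved; of course the truncation only says something about small x, which is exactly the limitation mentioned above.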

This brings me to a few other remarks.

f(x) and f(a x) also have the same growth.
This implies that Taylor coefficients t_n and a^n t_n give the same growth rate!

(Many conjectures about what gives an equal growth rate are possible, but it's not immediately clear which ones are the most useful.)

These were algebraic ideas, but some calculus ideas occur too:

For instance, use the Laplace transform instead of a Taylor series
(with x = exp(-s)).
Then the theorems involving that transform can be used too!

For instance, the Laplace transform of f'(x) is very interesting.

It suggests that G(x) and G(x) * (ln(x)^k) have the same growth even when iterated.
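For reference, the standard rule presumably being alluded to is

\( \mathcal{L}\{f'(t)\}(s) = s\,\mathcal{L}\{f(t)\}(s) - f(0), \)

and under the substitution x = exp(-s) a factor ln(x)^k is exactly (-s)^k, so (up to boundary terms) multiplying by ln(x)^k acts like differentiating the underlying function k times.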

However, one thing seems to bring us back down to earth.

We want all derivatives to be nonnegative.
It's not clear whether those dummy variables can provide us with a solution that is both entire and has all Taylor coefficients nonnegative.

Why positive ?

Because then the Taylor series is dominated by its largest coefficients.
Compare exp with 2sinh: the zero coefficients of 2sinh do not affect its growth much, because neither function has negative coefficients!

This positivity removes unnecessary up-and-down jumps in the sizes of the coefficients, so they are easier to approximate.

Also, the positivity gives us insight into the function's values at complex input... because of absolute convergence!
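Concretely, the standard estimate behind that remark: if all Taylor coefficients t_n are nonnegative, then for every complex z

\( |f(z)| = \Big|\sum_n t_n z^n\Big| \le \sum_n t_n |z|^n = f(|z|), \)

so the behaviour on the positive real axis already controls the whole complex plane.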

I assume Sheldon was aware of all that,
but that information needed to be shared with everyone.

regards

tommy1729
#4
(05/08/2014, 04:25 PM)sheldonison Wrote:
(05/07/2014, 12:22 PM)tommy1729 Wrote: I'm searching for an asymptotic to exp^[0.5](x).
...Empirical testing suggests that an "entire" pseudo half iterate is very likely possible, with all positive Taylor series coefficients at z=0, and a probable growth value of 0.5, as defined by the "growth" equation. .... Each derivative is bounded to a maximum value by a particular value of half(z)....still working on how to formalize the definition of the conjectured Taylor series.
- Sheldon

\( \text{halfassym}(z) = \sum_{n = 1}^{\infty}\frac{z^n}{a_n!} \)
Here the factorial of a_n is extended to the reals via the Gamma function, a_n! = gamma(a_n + 1). I do not know what the limiting equation for a_n is as n gets arbitrarily large; these values were generated numerically. With 150 Taylor series terms, this series can accurately generate the half iterate for numbers up to 10^12.
Code:
a_1= 1.289368074687
a_2= 2.685542084449
a_3= 4.481892104368
a_4= 6.478743767633
a_5= 8.617330003873
a_6= 10.86883764614
a_7= 13.21549097355
a_8= 15.64480318533
a_9= 18.14765340676
a_10= 20.71664375389
a_11= 23.34605172725
a_12= 26.03107295368
a_13= 28.76759185629
a_14= 31.55218923626
a_15= 34.38183207053
a_16= 37.25395409452
a_17= 40.16617725981
a_18= 43.11650203072
a_19= 46.10306787913
a_20= 49.12423329725
a_21= 52.17844415943
a_22= 55.26438661508
a_23= 58.38072977033
a_24= 61.52640919181
a_25= 64.70028943211
a_26= 67.90141159249
a_27= 71.12888751647
a_28= 74.38183008946
a_29= 77.65944293902
a_30= 80.96100674887
a_31= 84.28582227410
a_32= 87.63323120070
a_33= 91.00261369743
a_34= 94.39338557884
a_35= 97.80499516680
a_36= 101.2369199152
a_37= 104.6886902658
a_38= 108.1598101190
a_39= 111.6498571303
a_40= 115.1583810162
a_41= 118.6850011605
a_42= 122.2293283987
a_43= 125.7909699804
a_44= 129.3696195111
a_45= 132.9648897794
a_46= 136.5764573911
a_47= 140.2040557334
a_48= 143.8473645074
a_49= 147.5060942647
a_50= 151.1799685187
a_51= 154.8686996662
a_52= 158.5720762206
a_53= 162.2898267064
a_54= 166.0217152588
a_55= 169.7675306959
a_56= 173.5270093037
a_57= 177.2999786775
a_58= 181.0862092231
a_59= 184.8855000278
a_60= 188.6976766463
a_61= 192.5225306431
a_62= 196.3598778123
a_63= 200.2095425885
a_64= 204.0713710077
a_65= 207.9451779799
a_66= 211.8308191885
a_67= 215.7281180480
a_68= 219.6369408364
a_69= 223.5570997849
a_70= 227.4885071866
a_71= 231.4309843808
a_72= 235.3843931661
a_73= 239.3486192792
a_74= 243.3235140576
a_75= 247.3089677325
a_76= 251.3048375095
a_77= 255.3110228135
a_78= 259.3273875110
a_79= 263.3538324564
a_80= 267.3902471867
a_81= 271.4365024525
a_82= 275.4924928105
a_83= 279.5581471603
a_84= 283.6333119804
a_85= 287.7179392030
a_86= 291.8118947335
a_87= 295.9151023881
a_88= 300.0274504396
a_89= 304.1488634944
a_90= 308.2792519888
a_91= 312.4185117624
a_92= 316.5665730911
a_93= 320.7233354589
a_94= 324.8887317451
a_95= 329.0626821729
a_96= 333.2451078801
a_97= 337.4359118041
a_98= 341.6350372412
a_99= 345.8424068745
a_100= 350.0579336860
a_101= 354.2815588969
a_102= 358.5132133215
a_103= 362.7528106159
a_104= 367.0003160828
a_105= 371.2556293444
a_106= 375.5186854508
a_107= 379.7894500858
a_108= 384.0678314842
a_109= 388.3537801607
a_110= 392.6472189586
a_111= 396.9481223641
a_112= 401.2563957391
a_113= 405.5719986392
a_114= 409.8948585083
a_115= 414.2249480026
a_116= 418.5621839981
a_117= 422.9065248225
a_118= 427.2579174535
a_119= 431.6162930147
a_120= 435.9816335167
a_121= 440.3538537394
a_122= 444.7329047543
a_123= 449.1187668605
a_124= 453.5113755327
a_125= 457.9106702550
a_126= 462.3166165089
a_127= 466.7291677505
a_128= 471.1482630381
a_129= 475.5738880517
a_130= 480.0059669786
a_131= 484.4444725900
a_132= 488.8893596420
a_133= 493.3405854525
a_134= 497.7981092466
a_135= 502.2618743290
a_136= 506.7318543022
a_137= 511.2080081242
a_138= 515.6902967113
a_139= 520.1786803772
a_140= 524.6731047065
a_141= 529.1735644829
a_142= 533.6799885996
a_143= 538.1923555703
a_144= 542.7106288953
a_145= 547.2347567376
a_146= 551.7647258028
a_147= 556.3004650155
a_148= 560.8419393211
a_149= 565.3890426629
a_150= 569.9416519995
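
As a minimal evaluation sketch (an illustration, not the pari-gp program that produced this table): store the a_n values above in a Python list a (with a[0] holding a_1) and sum the series with mpmath, so that values like half(1E12) ~ 6.6E480 do not overflow a double.

Code:
from mpmath import mp, mpf, gamma

mp.dps = 30

def halfassym(x, a):
    # sum_{n>=1} x^n / a_n!  with  a_n! = Gamma(a_n + 1),
    # where a = [a_1, ..., a_150] is taken from the table above
    x = mpf(x)
    return sum(x**n / gamma(mpf(an) + 1) for n, an in enumerate(a, start=1))

For example, halfassym(10, a) should come out near 58.93, matching the comparison table later in this post.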

One can compare this to Tommy's hypothetical candidate (quoted below) and see that the (n^2)! in the denominator grows much more quickly than necessary, as compared to the empirical results. But one can also see that the denominators in the Taylor series coefficients for this asymptotic half iterate grow much faster than those for exp(z). The conjecture is that as n gets arbitrarily large, for any z0>1, slog(f^n(z0))/n ~= 0.5, and therefore this would be an entire function with half exponential growth.

tommy1729 Wrote: f1(z) = sum z^n/(n^2)!

The asymptotic half iterate function is defined such that all Taylor series coefficients are positive, and such that the function is always less than, but approaching, the Kneser half iterate for real(z)>0. The construction for the Taylor series I used is a somewhat complicated two-stage process; I'll post more later. The first stage is to note that if the Taylor series terms are all positive, then no individual Taylor series term can be larger than the desired sum, so we require \( \forall {x>0} \; a_n x^n \lt \text{half}(x) \). This gives an upper bound for each Taylor series coefficient; the series built from these upper bounds is always bigger than the Kneser half(z) function. The second stage is to scale the terms down, by observing that for each value of half(z) one particular Taylor series term is the largest contributor to the sum; that determines how much to scale each term down by. I think the resulting function will always be less than the Kneser half iterate, but the ratio of this function over the Kneser half iterate will approach arbitrarily close to 1. Because the function is bounded to the right by the Kneser half iterate, we can safely say that this asymptotic half iterate is entire, assuming the construction works for arbitrarily large values of z and arbitrarily large Taylor series terms.

Here are some examples of calculations using this asymptotic half-iterate Taylor series, compared to the Kneser half iterate. It would probably make sense to set a_0 of the half iterate to sexp(-0.5), which is the half iterate of 0. I will have to generate a complex plane plot for this half iterate...
z, asymptotic_half, Kneser_half
0 0 0.4985632879411
1 1.126644950749 1.646354233751
10 58.93202104249 61.48617436731
100 187646.5930113 192708.5721853
1000 425414280682.2 432750850493.0
10000 9.638915213265 E21 9.750966938073 E21
100000 5.362748331798 E37 5.406412389290 E37
1000000 3.362567348729 E60 3.382539228002 E60
10000000 2.187210706560 E92 2.196854946875 E92
100000000 2.935957885769 E135 2.945782901678 E135
1000000000 3.788233214763 E192 3.798003781412 E192
10000000000 5.577154174589 E266 5.588492690694 E266
100000000000 3.101249943705 E361 3.106297055696 E361
1000000000000 6.614359301415 E480 6.622925643007 E480

One obvious question from the Taylor series result, which I can't answer because I have no idea how fast these functions grow as x goes to infinity relative to exponentiation: what is the "growth" of functions like these, which should grow slower than exponentiation but faster than any polynomial?
\( \sum_{n = 1}^{\infty}\frac{x^n}{(2n)!} \)
\( \sum_{n = 1}^{\infty}\frac{x^n}{(4n)!} \)
#5
(05/10/2014, 12:14 PM)sheldonison Wrote: One obvious question from the Taylor series result, which I can't answer because I have no idea how fast these functions grow as x goes to infinity relative to exponentiation: what is the "growth" of functions like these, which should grow slower than exponentiation but faster than any polynomial?
\( \sum_{n = 1}^{\infty}\frac{x^n}{(2n)!} \)
\( \sum_{n = 1}^{\infty}\frac{x^n}{(4n)!} \)

I mentioned these before.

They are the "fake" exp(sqrt(x)) and exp(sqrt(sqrt(x))).

Since sqrt(x) is much closer to x than any positive iterate of a logarithm, it follows that they both also have growth = 1.

These sums are related to linear ordinary differential equations.
The first one is cosh(sqrt(x)) - 1.
For large x, cosh(ln(x)) < 2 cosh(sqrt(x)) < cosh(x).
So growth = 1 follows.
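For reference, the closed form behind this (the standard cosh series with the n = 0 term removed):

\( \sum_{n=1}^{\infty}\frac{x^n}{(2n)!} = \cosh(\sqrt{x}) - 1 \; \text{ for } x \ge 0, \qquad \cosh(\sqrt{x}) \sim \tfrac{1}{2}e^{\sqrt{x}} \text{ as } x \to \infty. \)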

---

Also, like I said before, the zero terms do not change things that much.
Maybe that makes more sense now.
---


Thanks for the data !


regards

tommy1729
#6
(05/10/2014, 12:14 PM)sheldonison Wrote: One obvious question from the Taylor series result, which I can't answer because I have no idea how fast these functions grow as x goes to infinity relative to exponentiation: what is the "growth" of functions like these, which should grow slower than exponentiation but faster than any polynomial?
\( \sum_{n = 1}^{\infty}\frac{x^n}{(2n)!} \)
\( \sum_{n = 1}^{\infty}\frac{x^n}{(4n)!} \)

Wow -- both of these functions grow exactly exponentially, where growth is defined as slog(f^n)/n.

The first function is
\( f = \sum_{n = 1}^{\infty}\frac{x^{n}}{(2n)!} = \cosh(\sqrt{x})-1\approx \frac{1}{2}\exp(\sqrt{x}) \)

But if my math is correct, then iterating f is the same as iterating \( \exp(x/4) \), which is ..... the same as iterating an exponential!!!

The second function is
\( f = \sum_{n = 1}^{\infty}\frac{x^{n}}{(4n)!} \approx \frac{1}{4}\exp(x^{0.25}) \)

Ok, the second function is a little more complicated, but if my math is correct, it is going to be the same as iterating \( f(x)=\exp(\frac{x}{16}) \), which .... drumroll ... is also exponential growth!

So half-exponential functions, and especially the Taylor series of entire versions of half-exponential functions, need more study... Let's conjecture that I have a constructive definition of an entire half-exponential Taylor series, for which I haven't given all of the details, but for which I have a pari-gp program. Then its Taylor series coefficients must eventually grow more slowly than those of all of these entire functions with exponential growth.... really interesting!!!
- Sheldon


#7
Code:
a_1= 1.289368074687
...
a_150= 569.9416519995
(the full a_1 ... a_150 table is quoted from post #4 above)

It seems 0.5 n (ln(n)-1) < a_n < 2 n (ln(n)-1).

Hence the new conjecture is 1/(n ln(n))! for the Taylor coefficients. (A quick numerical spot check of the bound above is sketched below.)
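As a sketch of that spot check (assuming the a_n table from post #4 is stored in a Python list a, with a[0] holding a_1):

Code:
import math

# check 0.5*n*(ln(n)-1) < a_n < 2*n*(ln(n)-1) at a few sample indices
for n in (10, 50, 100, 150):
    an = a[n - 1]
    lo = 0.5 * n * (math.log(n) - 1)
    hi = 2.0 * n * (math.log(n) - 1)
    print(n, lo < an < hi, round(lo, 1), an, round(hi, 1))

For n = 150 this gives roughly 300.8 < 569.9 < 1203.2, consistent with the bound.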

regards

tommy1729
#8
(05/10/2014, 11:48 PM)sheldonison Wrote: .....
Wow -- both of these functions grow exactly exponentially, where growth is defined as slog(f^n)/n.
.....

Nice to see you agree.
But I kinda said all those things recently.


regards

tommy1729
#9
(05/10/2014, 11:58 PM)tommy1729 Wrote:
(05/10/2014, 11:48 PM)sheldonison Wrote: .....
Wow -- both of these functions grow exactly exponentially, where growth is defined as slog(f^n)/n.
....
\( f = \sum_{n = 1}^{\infty}\frac{x^{n}}{(2n)!} = \cosh(\sqrt{x})-1\approx \frac{1}{2}\exp(\sqrt{x}) \)
....

Nice to see you agree.
But I kinda said all those things recently.

Agreed; and I'm sure I probably realized that before posting, since my reply was also only a few minutes after yours. Anyway, sometimes I need time to think it all through. Now I need to switch from numerical-approximation mode to theoretical mode; the end goal is to come up with an equation like the one you conjectured, which I am not yet able to do.

(05/10/2014, 11:58 PM)tommy1729 Wrote: Hence the new conjecture is 1/(n ln(n))! for the Taylor coefficients.

We desire an asymptotic under-approximation of the half exp(x) with all positive Taylor series coefficients at x=0, based on the Kneser half exp(x) function, which is not entire. So we need
\( f(x) = \sum_{n = 1}^{\infty} a_n x^n \; < \; \exp^{0.5}(x) \)
\( \forall {x \gt 0 } \; \; a_n x^n \; < \; \exp^{0.5}(x) \)
\( \forall {x \gt 0 } \; \; \log(a_n x^n) \; < \; \log(\exp^{0.5}(x)) \)
\( \forall {x \gt 0 } \; \; \log(a_n) + n\log(x) \; < \; \log(\exp^{0.5}(x)) \)
\( \forall {x \gt 0 } \; \; \log(a_n) \; < \; \log(\exp^{0.5}(x)) - n\log(x) \)

For log(exp^0.5(x)), we can substitute exp^0.5(log(x)).
\( \forall {x \gt 0 } \; \; \log(a_n) \; < \; \exp^{0.5}(\log(x)) - n\log(x) \)
Next, substitute x for log(x); as x ranges over x>0, log(x) ranges over all reals, so the condition now holds for all x:
\( \forall {x } \; \; \log(a_n) \; < \; \exp^{0.5}(x) - nx \)


Conjecture: for each a_n there is a particular value of x that most limits a_n. In other words, we conjecture that for each value of n, exp^0.5(x) - nx has one minimum, where its derivative is zero.
\( \frac{d}{dx}\left(\exp^{0.5}(x) - nx\right) = -n + \frac{d}{dx} \exp^{0.5}(x) = 0 \)

At the minimum the derivative will equal 0, so define \( \text{dexphalf}(x)=\frac{d}{dx} \exp^{0.5}(x) \).
Now define
\( h_n = \text{dexphalf}^{-1}(n) \)
\( \log(a_n) \; < \; \exp^{0.5}(h_n) - n h_n \)
\( a_n \; < \; \exp(\exp^{0.5}(h_n) - n h_n) \)

This is the first step of my construction of an entire approximation of f(x), and probably the most important one: replace the "<" with "=".

\( a_n = \exp(\exp^{0.5}(h_n) - n h_n) \)

Here is the first entire asymptotic approximation of the half iterate, where \( a_0=\exp^{0.5}(0) \):
\( f_1(x) = \sum_{n=0}^{\infty} a_n x^n \)

With this approximation, f_1(x) will be an entire over-approximation of exp^0.5(x). To get a better approximation, use the same values from above and scale the a_n values down to generate b_n. Formally, we get the scale factor by evaluating f_1 at \( \exp(h_n) \), the point where term n dominates, and comparing to exp^0.5(exp(h_n)). Empirically, the scale factor is approximately 1/sqrt(n).
\( b_n=a_n\frac{\exp^{0.5}(\exp(h_n))}{f_1(\exp(h_n))} \)

Then a much better entire approximation of exp^0.5(x) is:
\( f_2(x) = \sum_{n=0}^{\infty} b_n x^n \)

For values of x on the order 1E10, the ratio of f_2(x) to exp^0.5(x) is accurate to about 1 part in 10000. I think the ratio of f_2(x) to exp^0.5(x) goes to exactly 1 as x goes to infinity. By the way, this f_2(x) is a little bit more accurate than the results I posted earlier in this thread, which used rough approximations for the minimum of a_n, instead of derivatives. One can also scale twice (or more times though the lower derivatives probably start oscillating); scaling twice, I get accuracy of 0.2 parts per million for f_3(1E10)/exp^0.5(1E10). By scaling three times, I get an accuracy of 1 part per billion, for f_4(1E10)/exp^0.5(1E10).
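Not the pari-gp program itself, just a rough Python/mpmath sketch of the two stages as described above; half is a placeholder for a Kneser half-iterate routine (not provided here), and the findroot seeding is crude:

Code:
from mpmath import mp, mpf, exp, diff, findroot

mp.dps = 30

def first_stage(N, half):
    # Stage 1: for n = 1..N solve (d/dx) half(x) = n for h_n, then set
    # a_n = exp(half(h_n) - n*h_n), i.e. the "<" bound replaced by "=".
    dhalf = lambda x: diff(half, x)      # numerical derivative of half
    h, a = [], []
    seed = mpf(2)
    for n in range(1, N + 1):
        hn = findroot(lambda x, n=n: dhalf(x) - n, seed)
        seed = hn                        # h_n increases with n; reuse as the next seed
        h.append(hn)
        a.append(exp(half(hn) - n * hn))
    return h, a

def second_stage(h, a, half):
    # Stage 2: scale each a_n by comparing the stage-1 series f_1 with half
    # at x = exp(h_n), the point where term n dominates the sum.
    f1 = lambda x: sum(an * x**k for k, an in enumerate(a, start=1))
    return [an * half(exp(hn)) / f1(exp(hn)) for hn, an in zip(h, a)]

Iterating the same scaling step on the b_n would give the f_3, f_4 refinements mentioned above.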

I hope the next step is to come up with a recursive equation that, given any particular value of a_n, produces another coefficient a_m for a much larger value of m. Specifically, can we come up with an approximation for how fast the factorial in the denominator grows? Given n, I can generate a_m, where h_n is the location used to evaluate a_n from above. Starting with n and a_n, the first two equations just restate a_n from above.

\( h_n = \text{dexphalf}^{-1}(n) \)
\( a_n = \exp(\exp^{0.5}(h_n) - n h_n) \)

Skipping a lot of algebra, this is what I got to; I can include the algebra later. Here we are generating a fractional Taylor series index m from the value n above, which exactly matches the equation above.

\( m(n) = \frac{n \times \exp(\exp^{0.5}(h_n))}{\exp(h_n)} \)
\( \log(a_m) = \exp(\exp^{0.5}(h_n))\times(-n+1) \)

I haven't been able to do anything useful in terms of limiting behavior (yet), but I only got this recursive relationship cleaned up a few minutes ago, this morning; I expect the relationship will be there! If I apply these equations to n=6, I get h_6=5.546380530883 and a_6=0.0000001055905600243, and m=700791.2, and log(a_m)= -149682094.7, which is correct. Then a_m = 1/9906980.030456! which is 1/(m*14.137)!, which actually matches Tommy's conjecture reasonably well since log(m)=13.4599.

Hopefully, the equations I just posted don't have too many typos.... If there are typos, I can say that I have a working pari-gp program that implements these mathematical equations (without typos).

The goal is to use this recursive relationship to prove something about f(n) where a_n = 1/f(n)!. Also, the second set of equations allows generating accurate recursive coefficients for much much larger values of n than the original equation. This could be used to check Tommy's conjecture....
- Sheldon
#10
I'm not sure if this will help, but I know a bit about Taylor series and can do some fractional calculus (where can't we do fractional calculus?).


Suppose \( \phi \) is holomorphic on \( \Re(z) < 1 \), satisfies some fast-growth conditions at imaginary infinity and at negative infinity, and \( \phi(z) \notin \mathbb{N} \) in that half plane. Then, fixing \( 1>\tau > 0 \):

\( \frac{1}{2\pi i}\int_{\tau-i\infty}^{\tau+i\infty}\frac{\pi}{\sin(\pi z)\,\Gamma(\phi(z))} w^{-z}\,dz = \sum_{n=0}^\infty \frac{w^n}{(\phi(-n))!} \)


Maybe that might help some of you? The unfortunate part is that as \( w \to \infty \) we're going to get decay to zero. I'm not sure about the iterates. We can also note this is a modified Fourier transform, so we can apply some of the Paley-Wiener theorems bounding Fourier transforms by the original functions; i.e., we can bound the Taylor series by the function in the integral. Therefore, maybe if we get very fast decay to zero we can talk about asymptotics of \( 1/\exp^{0.5}(x) \).

