11/15/2011, 04:48 PM
(This post was last modified: 11/17/2011, 10:50 PM by sheldonison.)
Minor edit: I dug out my basechange delta approximation program. For the base change function, the crossover between three and four iterated logarithms of the superfunction of (exp(z)-1) occurs at approximately the 2,700,000th Taylor series term. For Tommysexp, the crossover should occur at about twice that term number.
(11/15/2011, 09:22 AM)sheldonison Wrote: I spent some time analyzing the derivatives of the basechange function, and the work can also be applied to tommysexp. I didn't post the results because they're complicated, and I didn't think they were good enough to provide a proof of nowhere analyticity, but they were very interesting. Here's a summary. The derivatives of the basechange/tommysexp functions act somewhat like a Russian stacking doll. For tommysexp centered at x=0, with tommysexp(0)=1, the first 5 million or so derivatives are consistent with a radius of convergence of ~0.46, and appear to be well behaved. But somewhere between the 5 millionth and the 6 millionth derivative, the radius of convergence abruptly changes to ~0.035. Then, again abruptly, around the sexp(4)th derivative it changes to approximately 0.0000000058. Near the sexp(5)th derivative it abruptly changes to approximately 1/sexp(4). This goes on forever, with the radius of convergence getting arbitrarily small, but only for superexponentially huge derivatives. The third approximation is very good: take the iterated logarithm of the superfunction of 2sinh three times, and then recenter around zero.
- Shel
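A quick numerical aside (my sketch, not from the quoted post): the reason an extra iterated logarithm of the 2sinh superfunction leaves so many early Taylor terms untouched is that log(2 sinh(x)) − x = log(1 − e^(−2x)), which vanishes like −e^(−2x) as x grows. A minimal standalone check in Python:

```python
import math

# For large x, 2*sinh(x) = e^x - e^(-x), so log(2*sinh(x)) is nearly x.
# The error is exactly log(1 - exp(-2x)), roughly -exp(-2x), which decays
# so fast that each extra iterated logarithm is almost invisible until
# superexponentially large scales are reached.
for x in [1.0, 2.0, 5.0, 10.0]:
    err = math.log(2.0 * math.sinh(x)) - x
    approx = -math.exp(-2.0 * x)
    print(f"x={x:5.1f}  err={err: .3e}  -exp(-2x)={approx: .3e}")
```

At x=10 the error is already below 10^-8, consistent with the "virtually unchanged for millions of terms" behavior described below.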
\( \text{tommysexp}(z)=\log^{[3]}(\text{superfunction}_{\text{2sinh}}(z+k+3)) \).
If instead you use four iterated logarithms, the first 5 million Taylor series terms are virtually unchanged, to an accuracy of more than a million digits. But eventually the extra logarithm abruptly takes over the Taylor series, and the radius of convergence switches. Here are k and the first 30 terms of the Taylor series of Tommysexp, with a radius of convergence of ~0.46. Before I forget it, I should probably post the detailed work I did, which was really on the basechange function's Taylor series at different points on the real axis: equations for how the series changes with different numbers of logarithms, and the breakpoints in the series caused by the iterated logarithms.
- Shel
Code:
k= 0.067838366070752225
a0= 1.0000000000000000
a1= 1.0914653607683951
a2= 0.27333490639412095
a3= 0.21521847924246615
a4= 0.065271503768026496
a5= 0.039165656430892704
a6= 0.017152131406819940
a7= 0.011705880632484224
a8= 0.0047195886155923871
a9= 0.0012366758967825547
a10= -0.0022628815033596365
a11= 0.0032155986809687338
a12= -0.0082015427101427038
a13= 0.0021877770703785160
a14= 0.047737214601532077
a15= -0.11724756374916340
a16= -0.078884922073312641
a17= 0.88303830799631482
a18= -0.61016681587894246
a19= -5.1047079727741415
a20= 7.8161210634700306
a21= 28.647958035475997
a22= -60.048152189155451
a23= -173.31480125153131
a24= 382.32333460877720
a25= 1156.1406836504859
a26= -2075.6510351695731
a27= -8124.1576973283600
a28= 8589.2177234097241
a29= 56026.718096127404

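As a sanity check (my sketch, using the coefficient list above), one can form crude radius-of-convergence estimates from these 30 terms. With so few terms the estimates are rough: the plain root test overshoots the quoted ~0.46, while a two-step ratio (using |a_n|/|a_{n-2}|, since the signs roughly alternate in pairs) undershoots it, so the true radius is only loosely bracketed.

```python
# Crude radius-of-convergence estimates from the 30 coefficients above.
# Root test:          R ~ |a_n|^(-1/n)
# Two-step ratio:     R ~ (|a_n| / |a_{n-2}|)^(-1/2)
# Both converge slowly with only 30 terms; they loosely bracket R ~ 0.46.
a = [1.0000000000000000, 1.0914653607683951, 0.27333490639412095,
     0.21521847924246615, 0.065271503768026496, 0.039165656430892704,
     0.017152131406819940, 0.011705880632484224, 0.0047195886155923871,
     0.0012366758967825547, -0.0022628815033596365, 0.0032155986809687338,
     -0.0082015427101427038, 0.0021877770703785160, 0.047737214601532077,
     -0.11724756374916340, -0.078884922073312641, 0.88303830799631482,
     -0.61016681587894246, -5.1047079727741415, 7.8161210634700306,
     28.647958035475997, -60.048152189155451, -173.31480125153131,
     382.32333460877720, 1156.1406836504859, -2075.6510351695731,
     -8124.1576973283600, 8589.2177234097241, 56026.718096127404]

for n in (25, 27, 29):
    root_est = abs(a[n]) ** (-1.0 / n)
    ratio_est = (abs(a[n]) / abs(a[n - 2])) ** -0.5
    print(f"n={n}: root test ~ {root_est:.3f}, two-step ratio ~ {ratio_est:.3f}")
```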