rumor result - Printable Version

+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://tetrationforum.org/forumdisplay.php?fid=3)
+--- Thread: rumor result (/showthread.php?tid=710)
rumor result - tommy1729 - 11/14/2011

Hi all. I don't have time to explain or post a lot, but I believe I have managed to get some partial results about tetration. I seem to have gotten an equation similar to Andrew's slog (the infinite matrix) for my tommysexp(tommyslog(x)+k) in base e. This also seems to relate to, and possibly answer, many uniqueness and existence questions that seemed somewhat unrelated and unattackable before. I'm still working on it, but it seems promising, and I might even solve some of those hard matrix questions. In a sense I also seem to have found a trick to attach a fixpoint to an infinite matrix equation. (Yes, this is vague and informal; I don't have time to explain fully.)

Regards, tommy1729

RE: rumor result - sheldonison - 11/15/2011

(11/14/2011, 09:35 PM)tommy1729 Wrote: hi all [...]

I spent some time analyzing the derivatives of the basechange function, and the work can also be applied to tommysexp. I didn't post the results because they're complicated, and I didn't think they were good enough to provide a proof of nowhere analytic, but the results were very interesting. Here's a summary. The derivatives of the basechange/tommysexp functions act somewhat like a Russian stacking doll. For tommysexp, centered at x=0, with tommysexp(0)=1, the first 5 million or so derivatives are consistent with a radius of convergence of ~0.46, and appear to be well behaved. But then somewhere between the 5 millionth and the 6 millionth derivative, the radius of convergence abruptly changes to ~0.035. Then, again abruptly around the sexp(4)th derivative, the radius of convergence changes again, to approximately 0.0000000058. Near the sexp(5)th derivative, the radius of convergence abruptly changes to approximately 1/sexp(4). This goes on forever, with the radius of convergence getting arbitrarily small, but only for superexponentially huge derivatives.

- Shel

RE: rumor result - sheldonison - 11/15/2011
I dug out my basechange delta approximation program. For the base change function, the crossover frequency for three versus four iterated logarithms of the superfunction of (exp(z)-1) was at approximately the 2,700,000th Taylor series term. For tommysexp, it should be 2x that frequency.

(11/15/2011, 09:22 AM)sheldonison Wrote: I spent some time analyzing the derivatives of the basechange function, and the work can also be applied to tommysexp. [...]

The third approximation is very good. This is taking the iterated logarithm three times of the superfunction of 2sinh, and then recentering around zero: \( \text{tommysexp}(z)=\log^{[3]}(\text{superfunction}_{\text{2sinh}}(z+k+3)) \). If instead you use four iterated logarithms, the first 5 million Taylor series terms are virtually unchanged, to an accuracy of more than a million digits. But eventually, the extra logarithm abruptly takes over the Taylor series, and the radius of convergence switches.
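The construction quoted above can be sketched numerically. This is a minimal, low-precision sketch, not Sheldon's actual program: it builds the regular superfunction of 2sinh at its fixed point 0 (where the multiplier is 2) by iterating from a tiny seed, then takes three logarithms as in the formula. The function names `twosinh`, `superf`, and the seed `2^(z-n)` are illustrative choices; whether the quoted constant k matches this particular normalization of the superfunction is an assumption.

```python
import math

def twosinh(x):
    # 2*sinh(x) = e^x - e^(-x); fixed point at 0 with multiplier 2,
    # since twosinh(x) ~ 2x for small x.
    return 2.0 * math.sinh(x)

def superf(z, n=60):
    # Regular superfunction of 2sinh at its fixed point 0, via the limit
    #   S(z) = lim_{n->inf} twosinh^[n](2^(z - n)),
    # which satisfies the defining equation S(z + 1) = twosinh(S(z)).
    x = 2.0 ** (z - n)
    for _ in range(n):
        x = twosinh(x)
    return x

def tommysexp(z, k=0.067838366070752225):
    # Per the post: tommysexp(z) = log^[3](superfunction_2sinh(z + k + 3)).
    # k is the constant quoted later in the thread; that it matches this
    # normalization of superf is an assumption of this sketch.
    x = superf(z + k + 3.0)
    for _ in range(3):
        x = math.log(x)
    return x
```

Double precision only reaches a step or two past z = 0 before the superfunction overflows, which is one reason the thread's multi-million-term analysis needs arbitrary precision.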
Here is k, and the first 30 terms of the Taylor series of tommysexp, with a radius of convergence of ~0.46. I should probably post the detailed work I did before I forget it; it was really on the basechange function's Taylor series at different points on the real axis, showing equations for how the Taylor series changes with different numbers of logarithms, and showing the breakpoints in the Taylor series caused by the iterated logarithms.

- Shel

Code: k= 0.067838366070752225

RE: rumor result - tommy1729 - 11/15/2011

Dear Sheldon,

Do you have the coefficients for tommysexp(tommyslog(x) + k) expanded at x = 0, with respect to x, for some k with 0 < k < 1/2 and tommyslog(0) = 0? Or tommysexp(1) = 1?

Thanks

RE: rumor result - sheldonison - 11/16/2011

(11/15/2011, 08:40 PM)tommy1729 Wrote: Dear Sheldon [...]

These are the first 30 terms of the Taylor series for tommyslog(z) at z=1 and z=0, where tommyslog(z) is the inverse of tommysexp(z). I previously posted the Taylor series for tommysexp with tommysexp(0)=1.

Code: Taylor series for tommyslog(z), inverse of tommysexp
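The radius-of-convergence claims in this thread come from inspecting Taylor coefficients, and the punchline is that any finite stretch of coefficients can be misleading. Here is a minimal root-test sketch of the kind of estimate involved, checked on a toy series whose radius is exactly 0.46; it is an illustration, not the delta-approximation program used in the thread, and the name `radius_estimate` is invented here.

```python
def radius_estimate(coeffs):
    # Crude root-test estimate of the radius of convergence:
    #   R = 1 / limsup_n |a_n|^(1/n),
    # approximated here by the single highest-order coefficient available.
    n = len(coeffs) - 1
    return 1.0 / (abs(coeffs[n]) ** (1.0 / n))

# Sanity check on a series with a known radius: 1/(1 - x/R) has
# Taylor coefficients a_n = (1/R)^n, so its radius is exactly R.
R = 0.46
coeffs = [(1.0 / R) ** n for n in range(200)]
```

For tommysexp, the thread's observation is precisely that this kind of estimate looks stable near ~0.46 for millions of terms and then abruptly drops, so a finite-order estimate can never certify the true radius.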