A first hi-res look at tetration \(\lambda = 1\) and \(b = e\)
(11/08/2021, 10:52 PM)sheldonison Wrote:
(11/05/2021, 05:25 AM)JmsNxn Wrote: I've been making some different graphs of \(\beta\) and I got a good one to share.

...
All of these will be committed towards the asymptotic thesis of the beta function. Which is that the beta method approaches at least an asymptotic expansion at each point, as opposed to a Taylor series. This is compatible with everything I have been saying, and additionally compatible with Sheldon's work. This paper will entirely focus on ASYMPTOTIC behaviour. Which looks like tetration; but if you try and make it tetration, expect a good amount of errors.

Very nice.  I spent a bit more time on \(b=\sqrt{2}\), and I believe it converges, and can be proven to converge analytically, so long as the imaginary period of \(\lambda\) is less in modulus than the imaginary pseudo-period of the attracting fixed point \(2\). So \(\lambda=1\) would be analytic, but \(\lambda=0.3\) would not converge, since
\(\frac{2\pi}{0.3}>\frac{2\pi}{|\ln(\ln 2)|}\)
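Sheldon's inequality is easy to sanity-check numerically; note that \(\ln(\ln 2)\) is a negative real, so it is the modulus that matters (a quick sketch, assuming nothing beyond the two quantities being compared):

```python
import math

# Imaginary period of beta_lambda: 2*pi/lambda (in modulus).
# Pseudo-period of the attracting fixed point 2 of exp base sqrt(2):
# the multiplier there is 2 * ln(sqrt(2)) = ln(2), so the
# pseudo-period is 2*pi/|ln(ln 2)|.
fixed_point_period = 2 * math.pi / abs(math.log(math.log(2)))

print(fixed_point_period)   # about 17.14
print(2 * math.pi / 0.3)    # about 20.94 -- exceeds the bound
print(2 * math.pi / 1.0)    # about 6.28  -- within the bound
```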

I've been noticing something similar. Have you tried introducing constants \(p_n = \mathcal{O}(n^{1-\delta})\) for \(\lambda = 0.3\)? (I'm working on finding a good estimate for \(\delta\), but it's \(>0\).) I believe the standard iteration:

\[
\log^{\circ n} \beta_{0.3}(s+n) \to 4 \quad \text{for } 0 < |\Im(s)| < 2\pi/0.3\\
\]

rather quickly; and on the real line we get all the branch-cut nonsense of tetration for \(s \in (-\infty,-2)\). But I've been seeing some nice results if you use:

\[
\log^{\circ n} \beta_{0.3}(s+n+p_n)\\
\]

Where we assume that:

\[
\log^{\circ n} \beta_{0.3}(n+p_n) = 1\\
\]
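For concreteness, here is how \(\beta_{0.3}\) can be approximated numerically for \(b=\sqrt{2}\): run the functional equation forward from deep in the left half-plane, where \(\beta\) is essentially zero. This is only a rough sketch of mine (the `depth` cutoff is an arbitrary choice, and this is the plain real-line computation, not the normalized \(p_n\) iteration above):

```python
import math

def beta(s, lam=0.3, b=math.sqrt(2), depth=200):
    """Approximate beta_lam(s) for base b by iterating the functional
    equation beta(t+1) = b**beta(t) / (1 + exp(-lam*t)) forward,
    starting from t = s - depth, where beta is essentially 0."""
    z = 0.0
    for k in range(depth):
        t = s - depth + k
        z = b ** z / (1 + math.exp(-lam * t))
    return z

# The functional equation should hold to high accuracy:
print(beta(1.5))
print(math.sqrt(2) ** beta(0.5) / (1 + math.exp(-0.3 * 0.5)))
```

Applying \(\log^{\circ n}\) to `beta(s + n)` computed this way is then exactly where the drift towards \(4\) shows up.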

I haven't been able to do this efficiently yet, but in the small number of cases I've managed, it seems to give better convergence. The way I think about it is that:

\[
\beta_{0.3}(s+1) = \exp_{\sqrt{2}}(\beta_{0.3}(s))/(1+e^{-0.3s})\\
\]

doesn't behave well enough like \(\exp_{\sqrt{2}}^{\circ n}(s)\): it moves too close to \(4\), and the \(\log\)'s push us to \(4\) too rapidly. So we need a function \(p_n\) which compensates, pushing us closer to \(2\), so that the \(\log\)'s don't take over and drive us to \(4\) too fast. Equally, we need to accelerate the convergence of \(\beta\) by some sequence of constants \(p_n = \mathcal{O}(n^{1-\delta})\).

For cases like \(b = e\) and \(\lambda = 1\) we get \(\delta \ge 1\), but for some cases \(0 < \delta < 1\), in which case we have to overcompensate to get the result.
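The "logs push us to \(4\)" behaviour is just the fixed-point structure of \(\log_{\sqrt{2}}\): \(2\) is repelling under the logarithm (multiplier \(1/\ln 2 \approx 1.44\)) while \(4\) is attracting (multiplier \(1/(2\ln 2) \approx 0.72\)), so any real seed near, but not exactly at, \(2\) drifts to \(4\) under iterated logs. A minimal illustration (my own toy example, not the actual \(\beta\) iteration):

```python
import math

def log_sqrt2(x):
    """Logarithm base sqrt(2), the inverse of x -> sqrt(2)**x."""
    return math.log(x) / math.log(math.sqrt(2))

x = 2.1                  # start near the repelling fixed point 2
for _ in range(100):
    x = log_sqrt2(x)
print(x)                 # has drifted to the attracting fixed point 4
```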


RE: A first hi-res look at tetration \(\lambda = 1\) and \(b = e\) - by JmsNxn - 11/08/2021, 11:44 PM


