Trying to find a fast converging series of normalization constants; plus a recap
So, I've been asking myself: how do we normalize the beta method at each iteration? If we start with the naive idea that,

\[
\begin{align}
\widetilde{F}_{\lambda,b}^1(z) &= \beta_{\lambda,b}(z) - \log_b(1+\exp(-\lambda z))\\
\widetilde{F}_{\lambda,b}^n(z) &= \log^{\circ n}_b \beta_{\lambda,b}(z+n) = \beta_{\lambda,b}(z) + \tau_{\lambda,b}^n(z)\\
F^n_{\lambda,b}(z) &= \widetilde{F}_{\lambda,b}^n(z+k_n)\\
F^n_{\lambda,b}(0) &= \widetilde{F}_{\lambda,b}^n(k_n) = 1\\
\end{align}
\]

Where,

\[
\begin{align}
\beta_{\lambda,b}(z) &= \Omega_{j=1}^\infty \dfrac{e^{bt}}{e^{\lambda(j- z)} + 1} \bullet t\\
\beta_{\lambda,b}(z+1) &= \dfrac{e^{b\beta_{\lambda,b}(z)}}{e^{-\lambda z}+1}\\
\end{align}
\]

We can start working toward a more effective program.
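To fix ideas, here is the kind of program I have in mind: a minimal Python sketch that truncates the infinite composition at a finite depth and nests from the innermost term outward. The truncation depth and variable names are arbitrary choices of mine, not part of the definition; I'll justify the convergence next.

```python
import cmath

def beta(z, lam, b, n_terms=100):
    """Evaluate beta_{lam,b}(z) by truncating the infinite composition
    Omega_{j=1}^inf  e^{b t} / (e^{lam (j - z)} + 1)  .  t
    at j = n_terms. Seed the innermost variable with t = 0 and nest
    outward, applying the j-th map for j = n_terms down to j = 1."""
    t = 0
    for j in range(n_terms, 0, -1):
        t = cmath.exp(b * t) / (cmath.exp(lam * (j - z)) + 1)
    return t
```

For \(\Re\lambda > 0\) the deep terms die off like \(e^{-\lambda j}\), so a modest `n_terms` should already give machine precision on compact sets away from the singularities.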



First of all, I want to work through \(\beta_{\lambda,b}\).

This function is holomorphic everywhere that:

\[
\sum_{j=1}^\infty \dfrac{e^{bt}}{1+e^{\lambda(j-z)}}
\]

Converges compactly normally in \((t,z,b,\lambda)\); since we throw \(t\) away in the end, this just needs to work for \(t\approx 0\). All in all, this beast converges wherever:

\[
\sum_{j=1}^\infty \left| \left|\dfrac{e^{bt}}{1+e^{\lambda(j-z)}}\right| \right|_{\mathcal{K},\mathcal{S},\mathcal{B}, \mathcal{L}} < \infty\\
\]

Where \(t \in \mathcal{K}\), \(z \in \mathcal{S}\), \(b \in \mathcal{B}\), and \(\lambda \in \mathcal{L}\), with each of \(\mathcal{K}, \mathcal{S}, \mathcal{B}, \mathcal{L}\) a compact subset of \(\mathbb{C}\). Checking where the sum has this summability property is elementary. And we get that,

\[
\begin{align}
\beta_{\lambda,b}(z)\,\,&\text{is holomorphic on}\,\,\mathbb{S}\\
\mathbb{S} &= \{(z,b,\lambda) \in \mathbb{C}^3\,|\,\Re \lambda>0,\, \lambda(j-z)\neq (2k+1)\pi i,\,\forall j,k \in \mathbb{Z},\,j\ge 1,\,b \neq 0\}\\
\end{align}
\]

Because this is exactly where the sum converges compactly normally. So, for all compact \(\mathcal{A} \subset \mathbb{S}\) we necessarily get that,

\[
\sum_{j=1}^\infty \left| \left|\dfrac{e^{bt}}{1+e^{\lambda(j-z)}}\right| \right|_{\mathcal{A}, |t|<R} < \infty\\
\]

And thus, the infinite composition \(\beta_{\lambda,b}(z)\) converges compactly normally. Again, written in the \(\Omega\) notation:

\[
\beta_{\lambda,b}(z) = \Omega_{j=1}^\infty \dfrac{e^{bt}}{e^{\lambda (j-z)}+1}\,\bullet t\\
\]
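As a quick sanity check on the sketch above, the functional equation gives us something concrete to test against. The parameter values here are arbitrary picks of mine; \(b=\log\sqrt2\) is the \(\sqrt2\) case I mention further down.

```python
lam = 1.0
b = cmath.log(cmath.sqrt(2))          # e.g. the base-sqrt(2) case
z = 0.5 + 0.25j

# check beta(z+1) = e^{b beta(z)} / (e^{-lam z} + 1)
lhs = beta(z + 1, lam, b)
rhs = cmath.exp(b * beta(z, lam, b)) / (cmath.exp(-lam * z) + 1)
print(abs(lhs - rhs))                 # should be ~ 1e-16 for n_terms = 100
```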



The theory now is little more than, "Let's just iterate the log and hope we get lucky!" And we do get fairly fucking lucky, but not lucky enough, as Sheldon has demonstrated. Still, we're not out of the woods yet; there's a lot of malleability left.

Now, to the point of this question/post. We can normalize:


\[
\begin{align}
F_{\lambda,b}^n(0) &= 1\\
F_{\lambda,b}^n(z) &= \beta_{\lambda,b}(z+k_n) + \tau_{\lambda,b}^n(z+k_n)\\
\end{align}
\]

And I can accomplish this no problem. The thing is, my recursive process for \(k_n\) is laborious to say the least.

If we introduce Sheldon's idea of an additive error:

\[
\begin{align}
\tau_{\lambda,b}^n(z) &= \sum_{j=0}^n \rho_{\lambda,b}^j(z)\\
\rho_{\lambda,b}^{n+1}(z) &= \log\left(1+\frac{\rho_{\lambda,b}^{n}(z+1)}{\beta_{\lambda,b}(z+1) + \sum_{j=0}^{n-1}\rho_{\lambda,b}^{j}(z+1)}\right)\\
\end{align}
\]
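A direct, unoptimized transcription of this recursion might look like the following. Note two assumptions of mine: the logs are natural (base \(e\)), and the seed is \(\rho^0_{\lambda,b}(z) = -\log(1+e^{-\lambda z})\), read off from the first correction term at the top of the post. It reuses `beta` from the earlier sketch.

```python
import cmath
from functools import lru_cache

@lru_cache(maxsize=None)
def rho(n, z, lam, b):
    """rho^n_{lam,b}(z) per the recursion above (natural logs assumed)."""
    if n == 0:
        # assumed seed: the first correction term -log(1 + e^{-lam z})
        return -cmath.log(1 + cmath.exp(-lam * z))
    # denominator tail: sum_{j=0}^{n-2} rho^j(z+1); empty when n = 1
    tail = sum(rho(j, z + 1, lam, b) for j in range(n - 1))
    return cmath.log(1 + rho(n - 1, z + 1, lam, b)
                     / (beta(z + 1, lam, b) + tail))

def tau(n, z, lam, b):
    """tau^n_{lam,b}(z) = sum_{j=0}^{n} rho^j_{lam,b}(z)."""
    return sum(rho(j, z, lam, b) for j in range(n + 1))
```

The `lru_cache` matters here: without it, the shifted arguments make the recursion branch combinatorially.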

I think we can reduce the above normalization constants, so that,

\[
k_n = \sum_{j=0}^n p_j\\
\]



And here's a Eureka! moment: the sum \(k_n\) may tend to infinity as \(n\to\infty\). I've been noticing this with base \(\sqrt{2}\); the normalization constant just keeps growing.


Now the catch is that we must program the normalization method using \(\rho\); but how do we do it on the nested normalized sum of the \(p_n\)? This is where I'm scratching my head. How do I do this efficiently?!

So I'm just asking everyone: can you think of a good way of doing this, whether programmatically or mathematically?

I've got a rough idea, and I'll update if I figure something out.


As an explicit description of the equation I'm trying to solve:

\[
\beta\left(\sum_{j=0}^n p_j\right) + \sum_{k=0}^n \rho^k\left(\sum_{j=0}^n p_j\right) = 1
\]

And,

\[
F_n(z) = \beta\left(z+\sum_{j=0}^n p_j\right) + \sum_{k=0}^n \rho^k\left(z+\sum_{j=0}^n p_j\right)\\
\]

So that we're adding a small normalization constant \(p_n\) at each step.
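For concreteness, here's the naive bootstrap I'd write first (and the kind of thing I want to beat): a secant solve of \(F_n(0) = 1\) for each increment \(p_n\) in turn, reusing `beta` and `tau` from the sketches above. Seeds and tolerances are arbitrary choices of mine; this is exactly the laborious recursive process, not the efficient one I'm asking for.

```python
def next_p(n, k_prev, lam, b, tol=1e-12, max_iter=50):
    """Solve beta(k_prev + p) + tau(n, k_prev + p) = 1 for the small
    increment p = p_n, by secant iteration starting near p = 0."""
    def G(p):
        k = k_prev + p
        return beta(k, lam, b) + tau(n, k, lam, b) - 1
    p0, p1 = 0.0, 1e-3
    g0, g1 = G(p0), G(p1)
    for _ in range(max_iter):
        if abs(g1) < tol or g1 == g0:
            break
        p0, p1, g0 = p1, p1 - g1 * (p1 - p0) / (g1 - g0), g1
        g1 = G(p1)
    return p1

# accumulate k_n = p_0 + p_1 + ... + p_n
k = 0.0
for n in range(6):
    k += next_p(n, k, lam, b)
print(beta(k, lam, b) + tau(5, k, lam, b))   # should be ~ 1
```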

From here, the \(p_j\) work rather like the exponential convergence factors Weierstrass inserts in his infinite products. They keep us in a normal neighborhood and don't affect the ultimate structure of the object. We still get the functional equation, but we're compounding the errors into a sum/limit:

\[
\sum_{j=0}^\infty p_j = k\\
\]

Or, at worst,

\[
\sum_{j=0}^n p_j = \mathcal{O}(n^{1-\epsilon})\\
\]

Which again, works just like a Weierstrass product in the final limit.


A final top-off before I hit the hay.

\[
F_{\lambda,b}^n(z) = 1 + b\cdot z\cdot a_\lambda + \mathcal{O}_\lambda\left((b\cdot z)^2\right)\\
\]
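If the bootstrap above holds together, \(a_\lambda\) should be recoverable by a crude finite difference at \(z=0\). This is just a sanity-check sketch; `h` is an arbitrary step of mine, and `k` is the accumulated constant from the loop above.

```python
def F(n, z, k_n, lam, b):
    # F^n(z) = beta(z + k_n) + tau^n(z + k_n), the normalized iterate
    return beta(z + k_n, lam, b) + tau(n, z + k_n, lam, b)

h = 1e-6
a_est = (F(5, h, k, lam, b) - F(5, 0, k, lam, b)) / (b * h)
print(a_est)   # should approximate a_lambda
```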

And if we can make it such that:

\[
\rho_{\lambda,b}^n(z + k_n) = a_\lambda' \cdot b \cdot z + \mathcal{O}_\lambda\left((b\cdot z)^2\right)\\
\]

Where \(\rho^n\) decays rather well. I need another click at the moment; I'll get back to you all in a bit. I can't stop scratching my head, and I might be getting ahead of myself, lol. The idea is that every time we take a \(\log\), we shift by \(1+p_n\) as opposed to just shifting by \(1\).

Regards, James.