Discussion on "tetra-eta-series" (2007) in MO
#31
(02/14/2023, 06:10 AM)JmsNxn Wrote: I'm going to move on to the second function Gottfried was talking about. So let's write:

\[
T(-s,3) = \zeta_G^{(2)}(s) = \sum_{n=1}^\infty (-1)^{n-1} n^{n^{n^{-s}}}\\
\]

This function is also holomorphic for \(\Re(s) > 0\), and I suspect the same is true for all higher orders. We start by writing:

\[
\zeta_G^{(2)}(s) = \sum_{k=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!} n^{kn^{-s}}\\
\]

We expand again, and get:

\[
\zeta_G^{(2)}(s) = \sum_{k=0}^\infty \sum_{j=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!} \frac{\log(n)^j}{j!} k^j n^{-js}\\
\]

Let's collect terms as a zeta sum:

\[
\zeta_G^{(2)}(s) = \sum_{m=1}^\infty g^{(2)}(m)m^{-s}\\
\]

We start by summing across \(k\), which gives:

\[
h(n,j) = \sum_{k=0}^\infty \frac{\log(n)^k}{k!}k^j\\
\]

Where then we get:

\[
\zeta_G^{(2)}(s) = \sum_{j=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} h(n,j)\frac{\log(n)^j}{j!} n^{-js}\\
\]

We are then given the formula:

\[
g^{(2)}(m) = (-1)^{m-1} \sum_{n\le m} h(n,\frac{\log(m)}{\log(n)})\frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

And this function has the exact same growth as we saw before. Whereupon \(\zeta_G\) and \(\zeta_G^{(2)}(s)\) are at least holomorphic for \(\Re(s) > 0\).

I conjecture this continues for all \(\zeta_G^{(K)}(s)\) for all \(K\ge 0\).
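Before moving on, the inner sum \(h(n,j)\) is cheap to sanity-check numerically. This is a Python sketch of my own (the truncation depth `J` is an arbitrary choice, not from the thread): since \(\sum_k x^k/k! = e^x\) and \(\sum_k k\,x^k/k! = x e^x\), we get the closed forms \(h(n,0) = e^{\log n} = n\) and \(h(n,1) = \log(n)\,e^{\log n} = n\log n\), and the truncated sum reproduces them.

```python
import math

def h(n, j, J=60):
    """Truncated h(n, j) = sum_{k >= 0} log(n)^k / k! * k^j.
    The terms decay factorially, so J = 60 is plenty for small n."""
    x = math.log(n)
    return sum(x**k / math.factorial(k) * k**j for k in range(J))

# Closed forms: h(n, 0) = e^{log n} = n and h(n, 1) = log(n) e^{log n} = n log n.
print(h(2, 0), h(2, 1))
```

More generally \(h(n,j)\) is \(n\) times a polynomial in \(\log n\) (a Touchard-polynomial-type sum), which is what drives the growth estimates further down.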




\(K=3\) seems to behave this way too! Let's take:

\[
\zeta_G^{(3)}(s) = \sum_{n=1}^\infty (-1)^{n-1} n^{n^{n^{n^{-s}}}}\\
\]

Then:

\[
\zeta_G^{(3)}(s) = \sum_{k=0}^\infty \sum_{j=0}^\infty \sum_{i=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!}\frac{\log(n)^j}{j!} \frac{\log(n)^i}{i!}k^j j^i n^{-is}\\
\]

Sum \(k,j\) first:

\[
h^{(2)}(n,i) = \sum_{k=0}^\infty \sum_{j=0}^\infty \frac{\log(n)^k}{k!}\frac{\log(n)^j}{j!}k^j j^i\\
\]

Which is simply:

\[
h^{(2)}(n,i) = \sum_{j=0}^\infty h(n,j) \frac{\log(n)^j}{j!}j^i\\
\]

Whereupon:

\[
g^{(3)}(m) = (-1)^{m-1} \sum_{n\le m} h^{(2)}(n,\frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

Where:

\[
\zeta_G^{(3)}(s) = \sum_{m=1}^\infty g^{(3)}(m) m^{-s}\\
\]

The rule appears to be quite clear:

\[
h^{(K)}(n,k) = \sum_{j=0}^\infty h^{(K-1)}(n,j) \frac{\log(n)^j}{j!}j^k\\
\]

Where then:

\[
g^{(K)}(m) = (-1)^{m-1} \sum_{n\le m} h^{(K-1)}(n, \frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

And:

\[
\zeta^{(K)}_G(s) = \sum_{m=1}^\infty g^{(K)}(m)m^{-s}\\
\]

Which are all holomorphic for \(\Re(s) > 0\).



I know it's not a proof yet; but I'm too lazy to write a whole paper to prove this. At most I'm going to do a notice, lmfao!  Tongue  \(h^{(K)}\) is an iterated summation operator, which behaves just like an integral operator. So we are turning the number of iterated integrals, \(K\), into the depth of the tower in \(\zeta_G^{(K)}\). Gottfried would refer to these as linear operators, or linear systems of Taylor series.

Call \(h^{(0)} = 1\); and:

\[
h^{(K)}(n,k) = \sum_{j=0}^\infty \frac{\log(n)^j}{j!} h^{(K-1)}(n,j)j^{k}\\
\]

Then we can rewrite this as an operator \(\mathcal{H}\) acting on functions \(\mathbb{N}^2 \to \mathbb{R}^+\): if \(h : \mathbb{N}^2 \to \mathbb{R}^+\), then \(\mathcal{H} h : \mathbb{N}^2 \to \mathbb{R}^+\). That operator is precisely:

\[
\left\{\mathcal{H} h \right\} (n,k) = \sum_{j=0}^\infty \frac{\log(n)^j}{j!}h(n,j)j^k\\
\]

Where we write all of Gottfried's zeta functions as:

\[
\begin{align}
\zeta_G^{(K)}(s) &= \sum_{m=1}^\infty g^{(K)}(m)m^{-s}\\
g^{(K)}(m) &= (-1)^{m-1} \sum_{n\le m} \left\{\mathcal{H}^{K-1} 1\right\}(n,\frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!}\chi_m(n)\\
\end{align}
\]

Also, something that is quick to grab: if \(h(n,k)\) has polynomial growth in \(\log n\) and \(k\), then so does \(\mathcal{H} h\). Since we evaluate at \(k = \log(m)/\log(n)\), we are taking \(\log(m)\) growth within each polynomial, so \(g^{(K)}(m)\) grows like a polynomial in \(\log(m)\). There is no divergence: \(m^{-\epsilon} g^{(K)}(m) \to 0\) (it looks like \(m^{-\epsilon} \log^K(m)\)), and the oscillation between positive and negative terms still happens. So we still have ABSOLUTE convergence of the zeta function series for \(\Re(s) > 1\), and CONDITIONAL convergence for \(0 < \Re(s) \le 1\)...
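The operator \(\mathcal{H}\) is easy to sketch as a higher-order function (a Python sketch of mine; the truncation depth is my own choice). Iterating from the constant function \(1\) gives \(h^{(K)} = \mathcal{H}^K 1\). A nice consistency check: swapping the two inner sums shows \(h^{(2)}(n,0) = \sum_k \frac{\log(n)^k}{k!} n^k = e^{n\log n} = n^n\).

```python
import math

def H(h, J=60):
    """The operator (H h)(n, k) = sum_{j >= 0} log(n)^j / j! * h(n, j) * j^k,
    truncated at J terms (plenty for the small n used here)."""
    def Hh(n, k):
        x = math.log(n)
        return sum(x**j / math.factorial(j) * h(n, j) * j**k for j in range(J))
    return Hh

one = lambda n, k: 1.0   # h^(0) = 1
h1 = H(one)              # h^(1)(n, k) = h(n, k)
h2 = H(h1)               # h^(2) = H^2 1

# Checks: h^(1)(n, 0) = n, h^(1)(n, 1) = n log n, and the two inner
# sums of h^(2)(n, 0) telescope to e^(n log n) = n^n.
print(h1(2, 0), h1(2, 1), h2(2, 0))
```
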

This is for you matrix nerds  Tongue I might program this in my recursive language if you guys are interested. I can make some nice and efficient code...
#32
(02/16/2023, 10:28 AM)Caleb Wrote: I like the graphs, and I think this generalization into arbitrary heights is exciting! I'm really interested in the behaviour near the natural boundary, and in the region \(-1<\Re(s) < 0\), is it possible you could create some graphs in that region? I haven't really thought of a reasonable computational way to analytically continue the function there, do you have any thoughts?

*cue ptsd vietnam noises*

*ALSO, I may have fucked up some negative signs or some shit. The story is the same, though.*

We need a good asymptotic on \(A(x)\). So if we write:

\[
g(m) = (-1)^{m+1} \sum_{n\le m} \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

And:

\[
A(x) = \sum_{m \le \lfloor x \rfloor} g(m)\\
\]

I can show that \(A(x) = o(x^{\epsilon})\) for every \(\epsilon > 0\). Which is a fancy way of saying it grows at most like some kind of logarithm. The crazy thing about \(A\) is that the truth seems much sharper. I cannot show this, and I think it's Fields Medal shit to show this, but:

\[
A(x) = C\log(x) + o(1)\\
\]
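For anyone who wants to reproduce the numbers, here is a minimal sketch of \(g(m)\) and the partial sums \(A(x)\) in Python (my own implementation of the formulas above; the \(m=1\) convention and the rounding trick for detecting \(n^k = m\) are my implementation choices):

```python
import math

def g(m):
    """g(m) = (-1)^(m+1) * sum_{n <= m} log(n)^k / k! * chi_m(n),
    where chi_m(n) = 1 only when n^k = m for a natural number k
    (so k = log(m)/log(n) is an integer and k! is an honest factorial)."""
    if m == 1:
        return 1.0  # the n = 1 term of the alternating series
    total = 0.0
    for n in range(2, m + 1):
        k = round(math.log(m) / math.log(n))
        if n ** k == m:  # chi_m(n) = 1
            total += math.log(n) ** k / math.factorial(k)
    return (-1) ** (m + 1) * total

# C(x) = A(x)/log(x), where A(x) = sum_{m <= x} g(m):
A = 0.0
for m in range(1, 1001):
    A += g(m)
    if m in (100, 400, 1000):
        print(m, A / math.log(m))
```

The printed ratios are the \(C(x)\) discussed below; the sketch makes no claim about their limit.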



Now, from this, we can extend \(\zeta_G(s)\) to \(\Re(s) > -1\). But it's really fucking tricky; and we've already assumed something that may not be true. But I'm going to continue as if everything works out perfectly Tongue 

We write:

\[
\zeta_G(s) = -s \int_1^\infty \left(A(x) - C\log(x)\right)x^{-s-1}\,dx - C s \int_1^\infty \log(x) x^{-s-1}\,dx \\
\]

The second function converges like:

\[
\begin{align}
C s \int_1^\infty \log(x) x^{-s-1}\,dx &= -Cs \frac{d}{ds} \int_1^\infty x^{-s-1}\,dx\\
&= -Cs \frac{d}{ds} \frac{1}{s}\\
&= \frac{C}{s}\\
\end{align}
\]
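Whatever the sign bookkeeping (flagged above as possibly off), the underlying integral has a clean closed form: substituting \(x = e^t\) gives \(\int_1^\infty \log(x)\,x^{-s-1}\,dx = \int_0^\infty t e^{-st}\,dt = 1/s^2\). A pure-Python trapezoid check (the grid parameters are my choice):

```python
import math

def integral(s, T=200.0, steps=20000):
    """Integral_1^inf log(x) x^(-s-1) dx, which after x = e^t becomes
    Integral_0^inf t e^(-s t) dt; trapezoid rule on [0, T]
    (the tail beyond T is on the order of e^(-s T), negligible)."""
    dt = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * t * math.exp(-s * t)
    return total * dt

for s in (0.5, 1.0, 2.0):
    print(s, integral(s), 1 / s**2)  # the two columns should agree
```
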

The first integral is a bit trickier. We know the integrand \(A(x) - C\log(x)\) is \(o(1)\) (I can prove this). But I don't know HOW \(o(1)\) it is. I believe it's something like \(o(1) = O(\frac{1}{x})\), just from my own experiments. But I can't prove it. And "just because you see numerical evidence doesn't make it true". Something every string theorist dark matter nerd should have shouted in their face!

But, for the moment we have:

\[
\zeta_G(s) = -s\int_1^\infty o(1)x^{-s-1}\,dx - \frac{C}{s}\\
\]

I can find the value \(C\) too. It's not that hard to find. But, it may not be a constant. It may just be:

\[
C = -0.235005249546161 + o(1)\\
\]

The general shape should definitely look like this; and I believe I can prove it. But for the moment this is pretty fucking accurate. I believe I am exact, but I may be off by some bullshit logarithmic decay garbage. Either way, this is still a very strong heuristic that these functions behave this way.

The function:

\[
B(s) = -s \int_1^\infty \left(A(x) - C\log(x)\right) x^{-s-1}\,dx\\
\]

Converges for \(\Re(s) > -1\). And it appears to happen pretty fucking efficiently. I can't prove it, but the numbers seem to support this hypothesis. Whereupon we write:

\[
\zeta_G(s) = B(s) - \frac{C}{s}\\
\]

Which is holomorphic for \(\Re(s) > -1\).

To cement my idea; if you take:

\[
A(x) - C\log(x) = h(x)\\
\]

Then \(h(1000) \approx 10^{-16}\); it already has 16 zeroes. So the error at \(x = 1000\) is certainly at least as small as \(1/1000\). The function:

\[
A(x) - C\log(x) = O(1/x)\\
\]

Is perfectly fucking reasonable, and the numbers support it. If anything, this is the bare minimum. They support something more like \(O(1/x^2)\)--but I don't want to go there yet. So when I write:

\[
\zeta_G(s) = -s\int_1^\infty h(x)x^{-s-1}\,dx - \frac{C}{s}\\
\]

We can expect that \(h(x) = O(1/x)\), and therefore this entire expression converges for \(\Re(s) > -1\)--at least. It may be better Wink


Please note that this is just a rough guess at what looks like it should happen. It may be more difficult, and may be more of a headache, to pull out meromorphy for \(\Re(s) > -1\). But all of my research is saying: \(\zeta_G(s)\) is meromorphic for at least \(\Re(s) > -1\), with a simple pole at \(s=0\)....

To explain the value \(C\) isn't that hard either; I have written:

\[
\begin{align}
\frac{A(x)}{\log(x)} = C + o(1)\\
\lim_{x\to\infty} \frac{A(x)}{\log(x)} = C\\
\end{align}
\]

If I could prove this limit, everything in this post would be valid! Once this happens, pretty much everything else follows as a consequence, and we can expect the error term to be \(O(1/x)\)....

Here's a graph of \(\zeta_G(s)\) for \(0 \le s \le 3\):

[attachment: graph of \(\zeta_G(s)\) for \(0 \le s \le 3\)]

I'd bet my left nut the residue is \(C\); and subtracting this from the function, produces holomorphy for \(\Re(s) > -1\)-- at least.


I CANNOT STRESS ENOUGH; THIS IS JUST WHAT MY CALCULATOR IS POINTING TOWARDS. I HAVE NOT PROVEN THIS YET!


If \(C\) is not a constant, then it looks like \(C(x) = C_0 + \frac{C_1}{\log(x)} + \frac{C_2}{\log(x)^2}\). The majority of my discussion is still fine. But we have to deep dive more. We can't treat \(C\) as a constant, we have to treat it as an "almost constant". Which, speaking honestly, would produce singularities at the boundary in a dense manner. Which would explain the boundary of singularities Caleb is seeing at \(\Re(s)  = -1\).

I need to sleep now. But, I think we're on to something. I'll produce better code soon. I just want to debug and make sure everything works, with any input.
#33
All right, so \(C\) is not a constant. It is bounded by a constant, and that is all we're going to get. We can find a bound:

\[
|C(x)| \le C_0\\
\]

And that's about the best we're going to get. We can speed up some shit, but don't cross your fingers. We start with:

\[
C(x) = \frac{A(x)}{\log(x)}\\
\]

And the best we have is a constant bound as \(x\) grows: both \(\limsup_{x\to\infty}C(x)\) and \(\liminf_{x\to \infty} C(x)\) are finite.

Here is \(C(x)\) from \(x=100\) up to \(x=1000\):

[attachment: graph of \(C(x)\) for \(100 \le x \le 1000\)]

This pins \(C(x)\) between strict bounds at each value; but not as tightly as we'd like.

I still believe we have meromorphy for \(\Re(s) > -1\); but it's gonna be harder than I thought......
#34
The residue of \(\zeta_G(s)\) at \(s=0\) is a value \(C\). I think I need to fuck with this more. And just run the sum and calculate the residue. Which will give us what to look for:

\[
\zeta_G(s) - \frac{C}{s} \,\,\text{is holomorphic at}\,\, s = 0\\
\]

The approach I've taken so far has only been able to bound \(a \le C \le b\)... but with some decent accuracy Tongue
#35
so, if we split the sum into positive parts and negative parts

f(x) = f+(x) + f-(x)

and we use my summation idea for both f+ and f- and then add them

does that give the same thing ??


regards

tommy1729
#36
(02/18/2023, 12:01 AM)tommy1729 Wrote: so, if we split the sum into positive parts and negative parts

f(x) = f+(x) + f-(x)

and we use my summation idea for both f+ and f- and then add them

does that give the same thing ??


regards

tommy1729

Could you elaborate more, Tommy?

Breaking:

\[
\zeta_G(s) = \sum_{m=1}^\infty |g(2m-1)|(2m-1)^{-s} - 2^{-s}\sum_{m=1}^\infty |g(2m)|m^{-s}\\
\]

Doesn't add much to the discussion unless you can describe \(g(2m)\) and \(g(2m-1)\) separately, in a manner that distinguishes their behaviour. When we talk about \(A(x) = \sum_{m\le \lfloor x \rfloor} g(m)\), we split this into positives and negatives:

\[
\begin{align}
A(x) &= \sum_{1\le m \le \lfloor \frac{x}{2} \rfloor} \left(|g(2m-1)| - |g(2m)|\right)\\
&= \sum_{1\le m \le \lfloor \frac{x}{2} \rfloor} |g(2m-1)| - \sum_{1\le m \le \lfloor \frac{x}{2} \rfloor} |g(2m)|\\
&= A^{\text{odd}}(x) - A^{\text{even}}(x)\\
\end{align}
\]

I don't see anything obvious that could be solved by doing this split; if you can see something: TELL ME! I'd love to see something!



To remind everyone of the following graph:

\[
\begin{align}
h(n,k) &= \sum_{j=0}^\infty \frac{\log(n)^j}{j!}j^k\\
g^{(2)}(m) &= (-1)^{m+1} \sum_{n\le m} h\left(n,\frac{\log(m)}{\log(n)}\right) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\chi_m(n) &= 1\,\,\text{if and only if there exists}\,\,k\in \mathbb{N},\,\,n^k = m,\,\,\text{otherwise}, 0\\
\end{align}
\]

The function \(A^{(2)}(x)\) is written as:

\[
A^{(2)}(x) = \sum_{m \le \lfloor x \rfloor} g^{(2)}(m)\\
\]

Here is a graph from \(1 \le x \le 1000\). This took 2 days to compile; please enjoy  Tongue

[attachment: graph of \(A^{(2)}(x)\) for \(1 \le x \le 1000\)]

This looks exactly like \(O(x \log(x)^2)\)...
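A smaller-scale version of this computation is quick to sketch in Python (my own implementation; the truncation depth of \(h\) and the range \(x \le 200\) are my choices so it runs in seconds rather than days, and the \(j=0\) layer of the expansion, which would pile onto \(m=1\), is glossed over here just as in the derivation above):

```python
import math

def h(n, j, J=60):
    # h(n, j) = sum_{k >= 0} log(n)^k / k! * k^j, truncated at J terms
    x = math.log(n)
    return sum(x**k / math.factorial(k) * k**j for k in range(J))

def g2(m):
    """g^(2)(m) = (-1)^(m+1) * sum over the integer roots n of m,
    with k = log(m)/log(n) forced to be a natural number by chi_m(n)."""
    if m == 1:
        return 1.0  # n = 1 term; the divergent j = 0 layer is ignored
    total = 0.0
    for n in range(2, m + 1):
        k = round(math.log(m) / math.log(n))
        if n ** k == m:  # chi_m(n) = 1
            total += h(n, k) * math.log(n) ** k / math.factorial(k)
    return (-1) ** (m + 1) * total

# Partial sums A^(2)(x); the last column is A^(2)(x) / (x log(x)^2).
A2 = 0.0
for m in range(1, 201):
    A2 += g2(m)
    if m in (50, 100, 200):
        print(m, A2, A2 / (m * math.log(m) ** 2))
```

If the \(O(x \log(x)^2)\) guess is right, the last column should stay bounded; the sketch itself makes no claim beyond the range it computes.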
#37
(01/28/2023, 01:06 PM)Gottfried Wrote: In MO the user Caleb Briggs brings up for discussion an old attempt of mine at the series \( T(x,2) = 1^{1^x} - 2^{2^x} + 3^{3^x} - \cdots + \cdots \).
I tried this in 2007, and only later learned some techniques with which I might have assessed it with more success. But my basic observations, and also the computations that I'd been able to do (over a small range of the exponent \(x\)), come out to be correct.

For the friends of visual data - there are some nice pictures to see there.

Here is the link : 

https://mathoverflow.net/questions/43666...eta-series   

Here the link to my fiddlings:

http://go.helms-net.de/math/tetdocs/Tetra_Etaseries.pdf  

Have fun...

Gottfried

@James :

I meant splitting the positive and negative parts of  \( T(x,2) = 1^{1^x} - 2^{2^x} + 3^{3^x} - \cdots + \cdots \) and then using

 https://math.eretrandre.org/tetrationfor...p?tid=1688

my summability method on it.



regards

tommy1729
#38
(02/19/2023, 11:27 PM)tommy1729 Wrote:

@James :

I meant splitting the positive and negative parts of  \( T(x,2) = 1^{1^x} - 2^{2^x} + 3^{3^x} - \cdots + \cdots \)  and then using

 https://math.eretrandre.org/tetrationfor...p?tid=1688

my summability method on it.



regards

tommy1729

OHHHHH!

That makes much more sense Tommy Tongue 

So you're saying, let's continuum sum:

\[
E(z,s) = \sum_{j=1}^z (2j)^{(2j)^{-s}}\\
\]

And:

\[
P(z,s) = \sum_{j=1}^z (2j-1)^{(2j-1)^{-s}}\\
\]

Where then; Gottfried's function equals:

\[
\zeta_G(s) = \lim_{z\to\infty} \left(P(z,s) - E(z,s)\right)\\
\]

I mean, I don't see any reason this shouldn't work. Any continuum sum method works. But I'm not sure if this allows us to take \(-1 < \Re(s) \le 0\). Especially considering there is a pole at \(s=0\)--which should muddy the waters incredibly. I think the key though, may be somewhere in here. But we want to first find that value \(C \in \mathbb{R}\), such that:

\[
\zeta_G(s) - \frac{C}{s} = H(s)\\
\]

Where \(H(s)\) is holomorphic at \(s=0\). Because I don't think your summability method works in neighborhoods of poles (it's equivalent to the fractional calculus indefinite sum). And when there's a pole in the indefinite sum, it means an extra residue is added in the contour integral representation. Which muddies the waters so to speak.

But for \(\Re(s) > 0\)--your expression should work perfectly fine. And works for any indefinite sum. The function:

\[
f(z,s) = z^{z^{-s}}\\
\]

Is in the Ramanujan space \(f(z,s) = O(e^{\rho|\Re(z)| + \tau|\Im(z)|})\) for \(\rho, \tau \in \mathbb{R}^+\), and \(|\tau| < \pi/2\). But only for \(\Re(s) > 0\). I think this will suffer the same problem as my approach for \(-1 < \Re(s) < 0\). And I'm not sure yet how to handle this.

But:

\[
\sum_{j=1}^z f(j,s)\\
\]

Is perfectly indefinitely summable.
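At integer \(z\), Tommy's split can at least be checked against the direct alternating partial sums (a Python sketch of mine; no genuine continuum-sum machinery is attempted here, and note \(f(n,s) \to 1\) as \(n \to \infty\), which is exactly why a summability method is needed to assign the limit):

```python
import math

def f(n, s):
    return n ** (n ** (-s))  # f(n, s) = n^(n^(-s))

def P(z, s):  # odd terms: sum_{j=1}^{z} (2j-1)^((2j-1)^(-s))
    return sum(f(2 * j - 1, s) for j in range(1, z + 1))

def E(z, s):  # even terms: sum_{j=1}^{z} (2j)^((2j)^(-s))
    return sum(f(2 * j, s) for j in range(1, z + 1))

s, N = 2.0, 500
direct = sum((-1) ** (n - 1) * f(n, s) for n in range(1, 2 * N + 1))
# At integer z the split is exact: P(N) - E(N) is the 2N-th partial sum.
# Since f(n, s) -> 1, the partial sums oscillate, and a summability
# method (Abel/Cesaro or a continuum sum) must assign the limit.
print(P(N, s) - E(N, s), direct)
```
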
#39
(02/21/2023, 06:10 AM)JmsNxn Wrote:

OHHHHH!

That makes much more sense Tommy Tongue 

So you're saying, let's continuum sum:

\[
E(z,s) = \sum_{j=1}^z (2j)^{(2j)^{-s}}\\
\]

And:

\[
P(z,s) = \sum_{j=1}^z (2j-1)^{(2j-1)^{-s}}\\
\]

Where then; Gottfried's function equals:

\[
\zeta_G(s) = \lim_{z\to\infty} \left(P(z,s) - E(z,s)\right)\\
\]

I mean, I don't see any reason this shouldn't work. Any continuum sum method works. But I'm not sure if this allows us to take \(-1 < \Re(s) \le 0\). Especially considering there is a pole at \(s=0\)--which should muddy the waters incredibly. I think the key though, may be somewhere in here. But we want to first find that value \(C \in \mathbb{R}\), such that:

\[
\zeta_G(s) - \frac{C}{s} = H(s)\\
\]

Where \(H(s)\) is holomorphic at \(s=0\). Because I don't think your summability method works in neighborhoods of poles (it's equivalent to the fractional calculus indefinite sum). And when there's a pole in the indefinite sum, it means an extra residue is added in the contour integral representation. Which muddies the waters so to speak.

But for \(\Re(s) > 0\)--your expression should work perfectly fine. And works for any indefinite sum. The function:

\[
f(z,s) = z^{z^{-s}}\\
\]

Is in the Ramanujan space \(f(z,s) = O(e^{\rho|\Re(z)| + \tau|\Im(z)|})\) for \(\rho, \tau \in \mathbb{R}^+\), and \(|\tau| < \pi/2\). But only for \(\Re(s) > 0\). I think this will suffer the same problem as my approach for \(-1 < \Re(s) < 0\). And I'm not sure yet how to handle this.

But:

\[
\sum_{j=1}^z f(j,s)\\
\]

Is perfectly indefinitely summable.

yes.

My method was designed for entire functions, and those nasty residue issues are one of the reasons.

Things can get complicated when combining summability methods, analytic continuation, poles, singularities and continuum sums.

One of the nightmares is that they disagree on where to take branch cuts.
Then what?

I'm sure there are ways around these issues, but care should be taken.

I just proposed the method; I'm not sure it even works well here or gives nice results, or in particular the same result.

Poles were debated in the old days when it came to continuum sums, analytic continuation and summability methods, so your knowledge, intuition or memory is correct, conscious or unconscious.
Cauchy principal value and +/- sign debates ring a bell.

I would not be surprised if Caleb found a solution.


regards

tommy1729
#40
(02/22/2023, 12:21 AM)tommy1729 Wrote:

yes.

My method was designed for entire functions, and those nasty residue issues are one of the reasons.

Things can get complicated when combining summability methods, analytic continuation, poles, singularities and continuum sums.

One of the nightmares is that they disagree on where to take branch cuts.
Then what?

I'm sure there are ways around these issues, but care should be taken.

I just proposed the method; I'm not sure it even works well here or gives nice results, or in particular the same result.

Poles were debated in the old days when it came to continuum sums, analytic continuation and summability methods, so your knowledge, intuition or memory is correct, conscious or unconscious.
Cauchy principal value and +/- sign debates ring a bell.

I would not be surprised if Caleb found a solution.


regards

tommy1729
I'm currently working on a solution when looking at residues; it looks like the question gets very difficult, but also very fascinating. In a few days I will probably finish writing a very long post that explores the whole poles problem.