Discussion on "tetra-eta-series" (2007) in MO
#21
(02/10/2023, 02:47 AM)JmsNxn Wrote: AHHH! I see, I apologize. I was confused!

Then yes we are on the exact same page Big Grin 

I've made some graphs I'll post in a bit, but the zeta series appears to converge for \(\Re(s) > 0\), where I presume we can analytically continue using Gottfried's expansion. I apologize!

If you are saying that:

\[
\sum (-1)^n n^{n^{-s}}
\]

converges for \(\Re(s) > -1\) we are on the same page. I actually haven't looked at this sum, I just instantly wrote it as a zeta function and only looked at that--so I wasn't sure where the original series converged.

It also appears the zeroes are on \(\Re(s) = 1/2\) in the critical strip... having Riemann Hypothesis Vietnam flashbacks Shy


I accidentally closed one of the graphs mid-compile, but the other one finished. So here is:

\[
\nu(s) = \sum_{m=1}^\infty q(m) m^{-s}\\
\]

Where:

\[
q(m) = (-1)^m \sum_{n \le m} \chi_m(n)\\
\]

And \(\chi_m(n) =1\) if and only if \(n^k = m\) for some \(k \in \mathbb{N}\).
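For concreteness, here is a quick brute-force sketch of \(q(m)\) in Python (my own sketch, following the sign convention in the definition above; note \(n=1\) only contributes at \(m=1\)):

```python
def roots_of(m):
    """All n with n^k = m for some k in N -- the set sqrt(I_m)."""
    if m == 1:
        return {1}
    roots = set()
    for n in range(2, m + 1):
        p = n
        while p < m:          # climb the powers of n until we reach or pass m
            p *= n
        if p == m:
            roots.add(n)
    return roots

def q(m):
    # q(m) = (-1)^m * |sqrt(I_m)|, per the definition above
    return (-1) ** m * len(roots_of(m))

# 64 = 2^6 = 4^3 = 8^2 = 64^1, so |sqrt(I_64)| = 4
print([q(m) for m in [2, 4, 6, 9, 64]])   # -> [1, 2, 1, -2, 4]
```

Everything here is naive trial multiplication; it's plenty fast for the ranges graphed in this thread.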

Then here is a graph from:

\(0.5 \le \Re(s) \le 5.5\) and \(|\Im(s)| \le 2.5\):



The graph for Gottfried's function didn't finish; it only got through most of the top half, so I'm going to recompile. I am also graphing the critical strip for both, to see what that looks like.

Good to see everything is set straight now. Also, to ease communication, I'll refer only to \( \sum (-1)^n n^{n^{-s}} \) so we are looking at the same object.

To make sure I understand your methods: initially
\[ P(s) = \sum g(m) m^{-s} \]
converges only for \(\Re(s)>0\), since otherwise the terms don't go to zero. However, using some complex analysis (Abel's summation formula), we can write
\[ P(s) = s \int_1^\infty A(x)x^{-s-1}\,dx \]
Then if we assume \( A(x) \) has growth smaller than a polynomial (in particular, it looks like \( A(x) \) grows like a logarithm), the second formula analytically continues the sum down to \( \Re(s) > -1 \).

Another thing to note is that as \( \Re(s) \to -1 \) the integral takes longer and longer to converge, so theoretically it should take on more and more of the chaos of \( A(x) \). [In particular, if \( A(x) \) were completely random, then we would be forced to have a natural boundary, since nearby values of \(s\) would lead to very different results.] If I understand correctly, questions about whether the function has a natural boundary should reduce to how 'predictable' \( A(x) \) is.
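As a toy illustration of this Abel-summation continuation (my own sketch, using the ordinary eta function \(\eta(s)=\sum(-1)^{m+1}m^{-s}\) as a stand-in so the answer is known): its partial-sum function \(A(x)\) is bounded, and \(s\int_1^\infty A(x)x^{-s-1}\,dx\) reproduces \(\eta(1)=\log 2\).

```python
import math

def eta_via_abel(s, M=10001):
    # s * Integral_1^oo A(x) x^{-s-1} dx  with  A(x) = sum_{m <= x} (-1)^{m+1},
    # i.e. A(x) = 1 on [m, m+1) for odd m and 0 otherwise.  On each such
    # interval, Integral_m^{m+1} s x^{-s-1} dx = m^{-s} - (m+1)^{-s}.
    return sum(m ** (-s) - (m + 1) ** (-s) for m in range(1, M, 2))

print(eta_via_abel(1.0), math.log(2))   # both ~ 0.6931
```

The same bookkeeping (integrate \(x^{-s-1}\) exactly on each unit interval where \(A\) is constant) is how one would evaluate the continuation numerically for \(A(x)=\sum_{m\le x}g(m)\).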
Reply
#22
(02/10/2023, 03:55 AM)Caleb Wrote: If I understand correctly, questions about whether the function has a natural boundary should reduce to how 'predictable' \( A(x) \) is.

Yes, and vice versa Tongue 

So far though, I've only been able to really grok that the zeta function (its summation/integral form, at least) converges for \(\Re(s) > 0\). I'm taking your word it goes even to \(\Re(s) > -1\)--but it doesn't surprise me Wink
Reply
#23
Okay, I have a proof of two statements. I am not entirely there yet; but I'm pretty certain at this point. I like to take these moments to refresh everyone on what every function means.

We are going to start with \(\nu\) as I am more confident; and then I will move on to \(\zeta_G\). I would like to use these two symbols to denote our main functions. To begin we have that:

\[
\zeta_G(s) = \sum_{n=1}^\infty (-1)^{n+1} n^{n^{-s}}\\
\]

This is to mean, the leading term is \(1\). This is Gottfried's zeta function (or Gottfried's function for short). The conjecture at hand, by Caleb, is that this converges for \(\Re(s) > -1\), with a divergence along the line \(\Re(s) = -1\):

\[
\sum_{n=1}^\infty (-1)^{n+1} n^{n^{1-it}}\\
\]

Where this series does not converge for \(t \in \mathcal{D} \subset \mathbb{R}\) with \(\overline{\mathcal{D}} = \mathbb{R}\)--so that \(\mathcal{D}\) is dense. Which means, for a dense set of points we may begin to see singular behaviour. This is still open. We will always refer to \(s\) as the complex variable of a zeta function, and we'll adopt the Riemann convention \(s = \sigma + it\) for \(\sigma \in \mathbb{R}\) and \(t \in \mathbb{R}\).

I refer to the earlier parts of this thread for clarifications on notation; but I'll try to remind everybody. The goal of this post is to show that:

\[
\zeta_G(s) \,\,\text{is holomorphic for}\,\,\Re(s) > 0\\
\]

And is represented in this half plane, by the zeta series:

\[
\zeta_G(s) = \sum_{m=1}^\infty g(m) m^{-s}\\
\]

While simultaneously showing that:

\[
A(x) = \sum_{1 \le m \le \lfloor x \rfloor} g(m) = O(x^{\epsilon})\\
\]

For all \(\epsilon > 0\). Which I conjecture is some kind of log growth; but could be even slower... who knows.
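Out of curiosity, here's a brute-force probe of this conjecture in Python (my own sketch, computing \(g(m)\) from the coefficient formula further down in this post: \(g(m)\) collects \(\log(n)^k/k!\) over pairs \(n^k=m\), with the sign making the leading term \(+1\), and \(g(1):=1\)):

```python
import math

def g(m):
    """g(m) = (-1)^(m+1) * sum of log(n)^k / k! over pairs with n^k = m."""
    if m == 1:
        return 1.0
    total = 0.0
    for n in range(2, m + 1):
        p, k = n, 1
        while p < m:          # find the k with n^k = m, if any
            p *= n
            k += 1
        if p == m:
            total += math.log(n) ** k / math.factorial(k)
    return (-1) ** (m + 1) * total

# partial sums A(x) stay tiny compared to x
A = 0.0
for x in range(1, 3001):
    A += g(x)
print(abs(A))   # tiny compared to x = 3000: log-like, not power-like, growth
```

The dominant term of \(g(m)\) is the \(n=m,k=1\) term \(\pm\log m\), so the heavy cancellation in \(A(x)\) is exactly the alternating-series cancellation described below.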


The function:

\[
q(m) = (-1)^{m+1} |\sqrt{\mathcal{I}_m}|\\
\]

Where

\[
\sqrt{\mathcal{I}_m} = \{n \in \mathbb{N}\,|\,n^k = m,\,\,k\in\mathbb{N}\}\\
\]

Where the indicator function \(\chi_m(n)\) expresses:

\[
q(m) = (-1)^{m+1} \sum_{1 \le n\le m} \chi_m(n)\\
\]

such that \(\chi_m(n) =1\) if and only if \(n^k = m\) for some \(k \in \mathbb{N}\), and is zero otherwise. Upon which we can write our function:

\[
\nu(s) = \sum_{m=1}^\infty q(m)m^{-s}\\
\]

The function \(\nu\) doesn't look too related to \(\zeta_G\)--but they behave very similarly. Looking at:

\[
Q(x) = \sum_{1\le m \le \lfloor x \rfloor} q(m)\\
\]

This equally looks like \(O(x^{\epsilon})\)--just like \(A(x)\). This is strongly supported by numerical evidence; if anything I am being modest with these estimations.



The most obvious comparison between \(\nu\) and \(\zeta_G\) is:

\[
g(m) = (-1)^{m+1} \sum_{1 \le n \le m} \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_{m}(n)\\
\]

The value \(\frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} = \frac{\log(n)^k}{k!}\) clearly tends to zero as \(m\) and \(n\) go to infinity. So Gottfried's function has more decay in these coefficients than \(\nu\). The \(\nu\) zeta is a sort of boundary case, where we replace \(\frac{\log(n)^k}{k!}\) with the bound \(1\). This is a generous bound, but it tells us the "flavour" of Gottfried's function. This may look stupid, but we do replacements like this in zeta function talk all the time. We "guess" that the asymptotics should be similar, and then we prove it.
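The factor \(\frac{\log(n)^k}{k!}\) comes from expanding \(n^{n^{-s}}=e^{\log(n)\,n^{-s}}\) as an exponential series; a quick numerical sanity check of that expansion (my own sketch, at arbitrary test points):

```python
import math

def tower_term(n, s, K=60):
    """Truncation of sum_{k=0}^{K-1} log(n)^k / k! * n^(-ks) == n^(n^(-s))."""
    x = math.log(n) * n ** (-s)   # the exponent: n^(n^-s) = e^x
    total, term = 0.0, 1.0        # term = x^k / k!  ( = log(n)^k/k! * n^(-ks) )
    for k in range(K):
        total += term
        term *= x / (k + 1)
    return total

print(tower_term(3, 1.5), 3 ** (3 ** (-1.5)))   # agree to machine precision
```

The running-product update for \(x^k/k!\) avoids overflowing \(k!\), which matters once these truncations get long.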


The results I want to present are actually pretty simple. I want to show that:

\[
\begin{align}
A(x) &= O(x^{\epsilon})\,\,\text{for all}\,\,\epsilon > 0\\
Q(x) &= O(x^{\epsilon})\,\,\text{for all}\,\,\epsilon > 0\\
\end{align}
\]

So.... When we make the replacement:

\[
\log(n)^k/k! := 1\\
\]

We perform the following:

\[
\zeta_G(s) = \sum_{n=1}^\infty (-1)^{n+1} n^{n^{-s}} = \sum_{n=1}^\infty (-1)^{n+1}\sum_{k=0}^\infty \frac{\log(n)^k}{k!}n^{-ks}\\
\]

becomes:

\[
\nu(s) = \sum_{n=1}^\infty (-1)^{n+1} \sum_{k=1}^\infty n^{-ks} = \sum_{n=1}^\infty (-1)^{n+1}\frac{n^{-s}}{1-n^{-s}}\\
\]

This series is anomalous as FUCK! But it converges in a similar manner to Gottfried's original series. Again, we're looking at this conditional-convergence-nonsense kind of series. But Gottfried used \(n^{n^{-s}}\), and \(\nu\) just flat-lines it to \(\frac{1}{1-n^{-s}}\). The cancellation in these sums is uncanny. I'm honestly blown away by how well these things cancel.
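A numerical cross-check of the two forms of \(\nu\) at a real test point (my own sketch; note the \(n=1\) term of the flat-lined series is singular, so I split it off, and I use the \((-1)^{m+1}\) sign convention so the leading coefficient is \(+1\), matching \(\zeta_G\)):

```python
def pi_count(m):
    """Number of n >= 2 with n^k = m for some k >= 1 (for m >= 2)."""
    count = 0
    for n in range(2, m + 1):
        p = n
        while p < m:
            p *= n
        count += (p == m)
    return count

def nu_dirichlet(s, M=2000):
    # nu(s) = 1 + sum_{m>=2} (-1)^(m+1) * pi_count(m) * m^(-s)
    return 1.0 + sum((-1) ** (m + 1) * pi_count(m) * m ** (-s)
                     for m in range(2, M + 1))

def nu_flat(s, N=2000):
    # 1 + sum_{n>=2} (-1)^(n+1) n^(-s)/(1 - n^(-s)); same as 1/(n^s - 1)
    return 1.0 + sum((-1) ** (n + 1) / (n ** s - 1) for n in range(2, N + 1))

print(nu_dirichlet(2.0), nu_flat(2.0))   # both ~ 0.75
```

The agreement works because \(n^k\) has the same parity as \(n\), so \((-1)^{n+1}\) really does collect into \((-1)^{m+1}\) when terms are grouped by \(m=n^k\).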




All numerical evidence points to two things right off the bat:

\[
\begin{align}
\zeta_G(s) &= \sum_{m=1}^\infty g(m)m^{-s}\,\,\text{converges absolutely as a series for}\,\,\sigma > 1\\
\nu(s) &= \sum_{m=1}^\infty q(m)m^{-s}\,\,\text{converges absolutely as a series for}\,\,\sigma > 1\\
\end{align}
\]

This isn't hard to prove because we have the modest bounds:

\[
\begin{align}
|g(m)| &\le \log(m) \Pi(m)\\
|q(m)| &\le \Pi(m)\\
\end{align}
\]

Where \(\Pi(m)\) was discussed before, and is of \(O(m^{\epsilon})\) growth for all \(\epsilon > 0\). (It's probably closer to \(O(\log(m)^\epsilon)\), but I couldn't prove that.) The hard thing to prove, which we are proving, is that:

\[
\begin{align}
\zeta_G(s) &= \sum_{m=1}^\infty g(m)m^{-s}\,\,\text{converges conditionally as a series for}\,\,0 < \sigma \le 1\\
\nu(s) &= \sum_{m=1}^\infty q(m)m^{-s}\,\,\text{converges conditionally as a series for}\,\,0 < \sigma \le 1\\
\end{align}
\]

I was racking my brain trying to figure out how to prove this; and what actually happens is very simple. The values:

\[
\begin{align}
\text{sign}(g(m)) = (-1)^{m+1}\\
\text{sign}(q(m)) = (-1)^{m+1}\\
\end{align}
\]

Where \(\text{sign}(a)\) records whether \(a>0\) or \(a < 0\). The sequences \(g(m)m^{-\epsilon}\to 0\) and \(q(m)m^{-\epsilon} \to 0\). Therefore, since they oscillate between positive and negative perfectly, the alternating series test is enough to prove the zeta series converge for \(\Re(s) > 0\).



And finally, we present our two results! We start with Perron's formula:

\[
A(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\zeta_G(s) x^s}{s}\,ds\\
\]

But we can take any value \(c > 0\) here. Write: \(c = \epsilon > 0\). And:

\[
|A(x)| \le \frac{x^\epsilon}{2\pi}\left|\int_{-\infty}^{\infty} \frac{\zeta_G(\epsilon+it) x^{it}}{\epsilon+it}\,dt\right|\\
\]

The integral in this bound converges; but I'm not going to prove it for you. You guys gotta do some work!!! (Hint: Mellin Transform).

Same result follows for \(Q(x)\)...


In conclusion we have two plain as day results!

\[
\begin{align}
\zeta_G(s)\,\,&\text{is holomorphic for at least}\,\,\Re(s) > 0\\
A(x) &= O(x^{\epsilon})\,\,\text{for all} \,\,\epsilon > 0\\
\end{align}
\]

\[
\begin{align}
\nu(s)\,\,&\text{is holomorphic for at least}\,\,\Re(s) > 0\\
Q(x) &= O(x^{\epsilon})\,\,\text{for all}\,\, \epsilon > 0\\
\end{align}
\]




I've got more graphs incoming!! Maybe 1 or 2 days! It looks like \(\zeta_G\)'s zeroes ARE ON THE CRITICAL LINE!!!!!! Also, when I say "alternating series test" I mean Dirichlet's test for the complex plane, which is just a fancy alternating series test. In zeta function work, "alternating series test" means the version for zeta series--and there's an easy way to prove that since it converges for real \(s\), it must converge for complex \(s\) too...
Reply
#24
(02/10/2023, 06:52 AM)JmsNxn Wrote: Okay, I have a proof of two statements. [...] It looks like \(\zeta_G\)'s zeroes ARE ON THE CRITICAL LINE!!!!!!
Damn this is cool--zeroes on the critical line would be super exciting! Also, are you familiar with the argument principle in complex analysis? We could use something like that to fairly precisely locate some zeroes of the function, which gives us a computational way to check that \( \zeta_G \) is actually zero at the locations we think it's zero. Anyways, please share the graphs you come up with in the coming days; I'm excited to see them!
Reply
#25
(02/10/2023, 07:09 AM)Caleb Wrote: Damn this is cool--zeroes on the critical line would be super exciting! Also, are you familiar with the argument principle in complex analysis? We could use something like that to fairly precisely locate some zeroes of the function, which gives us a computational way to check that \( \zeta_G \) is actually zero at the locations we think it's zero. Anyways, please share the graphs you come up with in the coming days; I'm excited to see them!

I am familiar with the argument principle, but I'm not proficient with it. I mean, I can prove it and understand it, but I can't use it in my own proofs, because I don't reaaaaaally get it. As far as I can see--just going off the graphs--the zeroes seem to be exactly at \(\Re(s) = 1/2\). So I'm just eyeballing it. A good way to test, as I see it, is to take:

\[
\frac{1}{\zeta_G(s)} = \sum_{m=1}^\infty g^{\mu}(m) m^{-s}\\
\]

And check if:

\[
A^{\mu}(x) = \sum_{1\le m \le \lfloor x \rfloor} g^{\mu}(m) = O(\sqrt{x})\\
\]

And that this is a tight bound. This proves all the zeroes satisfy \(\Re(s) \le 1/2\).
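For what it's worth, the coefficients \(g^{\mu}(m)\) can be generated from \(g(m)\) by formal Dirichlet inversion (assuming \(g(1)=1\)). A sketch of my own, sanity-checked on the classical case where the inverse of \(\zeta\)'s coefficients (all \(1\)'s) is the Mobius function:

```python
def dirichlet_inverse(g, M):
    """inv with sum over d | m of g(d) * inv(m/d) = [m == 1]; needs g[1] != 0."""
    inv = {1: 1.0 / g[1]}
    for m in range(2, M + 1):
        acc = sum(g[d] * inv[m // d] for d in range(2, m + 1) if m % d == 0)
        inv[m] = -acc / g[1]
    return inv

ones = {m: 1.0 for m in range(1, 31)}
mu = dirichlet_inverse(ones, 30)       # recovers the Mobius function
print([round(mu[m]) for m in [1, 2, 4, 6, 12, 30]])   # -> [1, -1, 0, 1, 0, -1]
```

Feeding the \(g(m)\) coefficients into this recursion would give \(A^{\mu}(x)\) numerically, so the \(O(\sqrt{x})\) conjecture is directly testable.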

........

I think I can find a reflection formula too, I'm not sure yet. But I believe, at least on the critical strip, there should be something like \(\zeta_G(1-s) = G(s) \zeta_G(s)\)--but this is still just a hunch. If you have that, plus the growth conditions on \(A^\mu\), then all the zeroes are on \(\Re(s) = 1/2\). And this is just a mock-up of the Riemann hypothesis; lmfao.
Reply
#26
I fixed a small error in the post above; I'm correcting it here....



We are going to do another change of variables now. I wasn't sure to show this off yet; but I like it:

\[
\nu(s) = \sum_{m=1}^\infty (-1)^{m+1}\sum_{n\le m} \chi_m(n)m^{-s}\\
\]

Now let's interchange this sum so that \(n\) is first...

\[
\sum_{m=1}^\infty \sum_{n\le m} (-1)^{m+1}\chi_m(n)m^{-s} = \sum_{n=1}^\infty  \sum_{m=n}^\infty (-1)^{m+1}\chi_m(n)m^{-s}\\
\]

The sum:

\[
\sum_{m=n}^\infty \chi_m(n)m^{-s} = \sum_{k=1}^\infty n^{-ks}\\
\]

And the ACTUAL EXPANSION OF \(\nu\) is:

\[
\nu(s) = \sum_{n=1}^\infty (-1)^{n+1} \frac{n^{-s}}{1-n^{-s}}\\
\]

I forgot an \(n^{-s}\) before, I apologize. It doesn't change anything about the results provided; just, for future work, I had a typo Shy

This is the stupid result that I remembered, and why I even started a lot of this conversation. I knew how it worked. I've seen this formula before, and I was getting deja vu. I still don't remember which mathematician proved this, but I know I've seen it before. Someone pull out the ChatGPT or Google BARD to find out. lmao Tongue

I know these stupid types of "zeta" functions have a name. Where Gottfried's function would be of this type. They are all written in a similar manner.

My solution is still a considerable speed-up for Gottfried's function and \(\nu\). Gottfried's expansion for \(\zeta_G\) and \(\nu\) is good when we're away from problem values. The zeta sums are more uniform in their convergence speed--more regular, so to speak. So problem values aren't really problem values, but the fast-convergence areas are slower. My code is much slower, but that's because I'm grabbing super high precision; I could settle for 3-digit precision if you'd like, and then my code would technically be faster, just based on how simple the zeta sums are.

Anyway; any questions, Caleb; keep asking them. Happy to answer ten times over!!!!!



The function \(\nu\) compiles much faster. So I'm posting these graphs first.

This is \(0 < \Re(s) < 1\) and \(0 < \Im(s) < 8\) (the critical strip). This is \(\nu\), WHICH HAS ZEROES EVERYWHERE!!!!! You can see them popping up near the boundary of the critical strip. They look like black holes. If you see a "white hole" it's a singularity--no white holes here though.

   

The second graph I am showing is of \(0 < \Re(s) < 5\) and \(-2.5 < \Im(s) < 2.5\).

Which shows just how beautiful, tame, well behaved, calm, this function is:

   

....... Therefore \(Q(x) = O(\log(x))\) Wink

EDIT: Also just to be transparent; the graphs of this function are done through the pari calculator as:

\[
\nu(s) = \sum_{m=1}^{300} q(m)m^{-s} + O(300^{-s})\\
\]

Where the \(O\) time is about \(O(N^{\epsilon})\), but it can blow up, lol. These are still ridiculously accurate results. You won't see it in the graphs, but the first five digits are found for every value; at worst we border on 4 digits. So my code is super slow, but super accurate... which has always been my Achilles heel, lmao.



EDIT:

So here is a graph of \(\zeta_G(s)\) on the domain \(0 \le \Re(s) \le 1\) and \(0 \le \Im(s) \le 8\):

   

It looks like the zeroes are near \(\Re(s) = 1/2\), but they actually move towards \(\Re(s) = 0\) as \(t \to \infty\). So I spoke too soon. The real parts \(\sigma_0\) of the zeroes \(\zeta_G(s_0) = 0\) slowly drift towards \(0\). They aren't fixed like in the Riemann hypothesis. I apologize, I got too excited.

I think the correct answer is \(\zeta_G(s) \neq 0\) for \(\Re(s) > 1/2\)--and this is the best we're going to get....

This means a reflection formula is very unlikely. And means that the Gottfried sum, is the maximal analytic continuation. So that Caleb's comments on Polya are correct. There is probably a natural boundary. Good job  Tongue
Reply
#27
Here is the final graph; It finished compiling.

\[
\zeta_G(s)\\
\]

For \(0 \le \Re(s) \le 5\) and \(-2.5 \le \Im(s) \le 2.5\). This graph is at least \(4\)-digit accuracy.

   



Black holes are zeroes. Intensity towards white is magnitude towards infinity. As \(|s| \to \infty\) we have \(\zeta_G \to 1\). Phase is mapped as red towards the positive reals and blue towards the negative reals... The colour coding represents the imaginary argument.
Reply
#28
Let's write \(\rho \in \mathbb{C}\) where \(0 < \Re \rho < 1\). And let's say we have a countable list of values \(\rho \in P\), where \(P\) is the set of all values which satisfy:

\[
\zeta_G(\rho) = 0\\
\]

If we can find the values of \(\rho\); or guess and understand them. We'll be able to understand the function \(\zeta_G\). I'm too drunk right now; but this is what we need to look at!

Regards, James

I'm going to talk about the Hadamard product formula, which gives a product formula for \(\zeta\) over its zeroes (no Riemann Hypothesis needed). This is typically written as:

\[
\zeta(s) = H(s) \prod_{\rho} \left( 1-\frac{s}{\rho}\right)e^{\frac{s}{\rho}}\\
\]


Where \(H(s)\) is an entire function, and \(\rho\) runs over the zeroes of \(\zeta\).

We have to be a bit more careful with \(\zeta_G\). But if we write this for \(\Re(s) >0\); we are given the exact same formula:

\[
\zeta_G(s) = H_G(s) \prod_{\rho \in P_G} \left( 1-\frac{s}{\rho}\right)e^{\frac{s}{\rho}}\\
\]

.......

The function \(H_G\) is holomorphic for \(\Re(s) >0\) and non-zero.... And as soon as we find the zeroes of \(\zeta_G\), we have found \(\zeta_G\) for \(\Re(s) > 0\). The function \(H_G\) is pretty tame too, especially because its wall of singularities is at \(\Re(s) = -1\)...

This might look super stupid. And I'm pretty drunk. But the function \(\zeta_G(s)\) looks like a Hadamard product of its zeroes.... The function:

\[
Z_G(s) = \prod_{\rho \in P_G} \left( 1-\frac{s}{\rho}\right)e^{\frac{s}{\rho}}\\
\]

Is the fucking function we care about; \(\zeta_G\) is just a normalized version...


Lmao, I'm too drunk to give a proper analysis. But it looks something like this. My Old Riemann Hypothesis Training, is finally fucking paying off, lmao. I can write this clearer and more straight. But this is standard analytic number theory work!

If you take a function \(Z_G(s)\) which has zeroes exactly at the points where \(\zeta_G(s)\) has zeroes, and additionally of the exact same orders, then \(\zeta_G(s) = H_G(s) Z_G(s)\)... where \(H_G(s)\) is just a standard non-vanishing function mapping \(\Re(s) >0 \to \mathbb{C}\setminus\{0\}\)... So that:

\[
\zeta_G(s) = H_G(s) Z_G(s) = H_G(s)\prod_{\rho \in P_G} \left( 1-\frac{s}{\rho}\right)e^{\frac{s}{\rho}}\\
\]

I'm super drunk right now. And just happy to talk math. I apologize if I'm being a dumbass Tongue
Reply
#29
I'm going to move on to the second function Gottfried was talking about. So let's write:

\[
T(-s,3) = \zeta_G^{(2)}(s) = \sum_{n=1}^\infty (-1)^{n-1} n^{n^{n^{-s}}}\\
\]

This function is also holomorphic for \(\Re(s) > 0\). And I suspect for all higher orders. We start by writing:

\[
\zeta_G^{(2)}(s) = \sum_{k=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!} n^{kn^{-s}}\\
\]

We expand again, and get:

\[
\zeta_G^{(2)}(s) = \sum_{k=0}^\infty \sum_{j=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!} \frac{\log(n)^j}{j!} k^j n^{-js}\\
\]

Let's collect terms as a zeta sum:

\[
\zeta_G^{(2)}(s) = \sum_{m=1}^\infty g^{(2)}(m)m^{-s}\\
\]

We start by summing across \(k\) first; in which we get:

\[
h(n,j) = \sum_{k=0}^\infty \frac{\log(n)^k}{k!}k^j\\
\]

Where then we get:

\[
\zeta_G^{(2)}(s) = \sum_{j=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} h(n,j)\frac{\log(n)^j}{j!} n^{-js}\\
\]

We are then given the formula:

\[
g^{(2)}(m) = (-1)^{m-1} \sum_{n\le m} h(n,\frac{\log(m)}{\log(n)})\frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

And this function has the exact same growth as we saw before. Whereupon \(\zeta_G\) and \(\zeta_G^{(2)}(s)\) are at least holomorphic for \(\Re(s) > 0\).

I conjecture this continues for \(\zeta_G^{(K)}(s)\), for all \(K\ge 0\).
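The kernel \(h(n,j)=\sum_k \log(n)^k k^j/k!\) is \(e^x\) times the \(j\)-th moment of a Poisson variable with mean \(x=\log n\) (a Touchard/Bell-polynomial evaluation), so small cases have closed forms we can check: \(h(n,0)=n\), \(h(n,1)=n\log n\), \(h(n,2)=n(\log n+\log^2 n)\). A truncated evaluation of my own:

```python
import math

def h(n, j, K=170):
    """Truncation of h(n, j) = sum_{k>=0} log(n)^k / k! * k^j."""
    x = math.log(n)
    total, term = 0.0, 1.0        # term = x^k / k!, updated multiplicatively
    for k in range(K):
        total += term * k ** j    # 0**0 == 1 in Python, matching the k=0 term
        term *= x / (k + 1)
    return total

print(h(5, 0), h(5, 1))   # -> 5.0 and 5*log(5) ~ 8.047
```

The multiplicative update for \(x^k/k!\) keeps everything in float range; computing \(k!\) directly and dividing would overflow for \(k\gtrsim 170\).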




\(K=3\) seems to behave this way too! Let's take:

\[
\zeta_G^{(3)}(s) = \sum_{n=1}^\infty (-1)^{n-1} n^{n^{n^{n^{-s}}}}\\
\]

Then:

\[
\zeta_G^{(3)}(s) = \sum_{k=0}^\infty \sum_{j=0}^\infty \sum_{i=0}^\infty \sum_{n=1}^\infty (-1)^{n-1} \frac{\log(n)^k}{k!}\frac{\log(n)^j}{j!} \frac{\log(n)^i}{i!}k^j j^i n^{-is}\\
\]

Sum \(k,j\) first:

\[
h^{(2)}(n,i) = \sum_{k=0}^\infty \sum_{j=0}^\infty \frac{\log(n)^k}{k!}\frac{\log(n)^j}{j!}k^j j^i\\
\]

Which is simply:

\[
h^{(2)}(n,i) = \sum_{j=0}^\infty h(n,j) \frac{\log(n)^j}{j!}j^i\\
\]

Whereupon:

\[
g^{(3)}(m) = (-1)^{m-1} \sum_{n\le m} h^{(2)}(n,\frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

Where:

\[
\zeta_G^{(3)}(s) = \sum_{m=1}^\infty g^{(3)}(m) m^{-s}\\
\]

The rule appears to be quite clear:

\[
h^{(K)}(n,k) = \sum_{j=0}^\infty h^{(K-1)}(n,j) \frac{\log(n)^j}{j!}j^k\\
\]

Where then:

\[
g^{(K)}(m) = (-1)^{m-1} \sum_{n\le m} h^{(K-1)}(n, \frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

And:

\[
\zeta^{(K)}_G(s) = \sum_{m=1}^\infty g^{(K)}(m)m^{-s}\\
\]

Which are all holomorphic for \(\Re(s) > 0\). .....



I know it's not a proof yet; but I'm too lazy to write a whole paper to prove this. At most I'm going to do a notice, lmfao!  Tongue  \(h^{(K)}\) is an iterated summation operator; which just looks like an integral operator. So we are turning an iterated number of integrals \(K\) into how deep \(\zeta_G^{(K)}\) goes. Gottfried would refer to these as linear operators; or linear systems of Taylor Series.

Call \(h^{(0)} = 1\); and:

\[
h^{(K)}(n,k) = \sum_{j=0}^\infty \frac{\log(n)^j}{j!} h^{(K-1)}(n,j)j^{k}\\
\]

Then we can rewrite this as an operator \(\mathcal{H}\) which takes functions taking \(\mathbb{N}^2 \to \mathbb{R}^+\). So that \(h : \mathbb{N}^2 \to \mathbb{R}^+\); and \(\mathcal{H} h : \mathbb{N}^2 \to \mathbb{R}^+\).  That operator is precisely:

\[
\left\{\mathcal{H} h \right\} (n,k) = \sum_{j=0}^\infty \frac{\log(n)^j}{j!}h(n,j)j^k\\
\]

Where we write all of Gottfried's zeta functions as:

\[
\begin{align}
\zeta_G^{(K)}(s) &= \sum_{m=1}^\infty g^{(K)}(m)m^{-s}\\
g^{(K)}(m) &= (-1)^{m-1} \sum_{n\le m} \left\{\mathcal{H}^{K-1} 1\right\}(n,\frac{\log(m)}{\log(n)}) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!}\chi_m(n)\\
\end{align}
\]

Also, and this is quick to check: if \(h(n,k)\) has polynomial growth in \(\log n\) and \(k\), then \(\mathcal{H} h\) does too. And then we are taking \(\log(m)\) growth within each polynomial, so we get growth polynomial in \(\log(m)\). So there is no divergence, and \(m^{-\epsilon} g^{(K)}(m) \to 0\)--which looks like \(m^{-\epsilon} \log^K(m)\)--and the oscillation between positives and negatives still happens. So we still have ABSOLUTE convergence of the zeta series for \(\Re(s) > 1\), and CONDITIONAL convergence for \(0 < \Re(s) \le 1\)...

This is for you matrix nerds  Tongue I might program this in my recursive language if you guys are interested. I can make some nice and efficient code...
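In the meantime, here is a minimal Python sketch of the operator \(\mathcal{H}\) (my own, with truncation at \(K\) terms). One cute consistency check: interchanging sums gives \(\{\mathcal{H}^2 1\}(n,0)=\sum_k \frac{\log(n)^k}{k!} e^{k\log n}=e^{n\log n}=n^n\), so at \(n=3\) the double truncation should land on \(27\).

```python
import math

def apply_H(h, K=100):
    """Return H h, where {H h}(n, k) = sum_j log(n)^j / j! * h(n, j) * j^k."""
    def Hh(n, k):
        x = math.log(n)
        total, term = 0.0, 1.0    # term = x^j / j!
        for j in range(K):
            total += term * h(n, j) * j ** k
            term *= x / (j + 1)
        return total
    return Hh

one = lambda n, k: 1.0
h1 = apply_H(one)     # {H 1}(n, k) -- the kernel h(n, k) from the formulas above
h2 = apply_H(h1)      # {H^2 1}(n, k) -- this is what feeds g^(3)

print(h1(3, 0), h2(3, 0))   # -> ~3 and ~27 (= 3^3)
```

Iterating `apply_H` gives \(\mathcal{H}^{K-1}1\) directly, though for larger \(n\) and deeper \(K\) the intermediate sums blow past float range and a log-space version would be needed.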
Reply
#30
Alright; so a quick correction. It appears we lose steam as we go deeper into Gottfried's zeta functions. So that:

\[
\zeta^{(2)}_G(s) = \sum_{n=1}^\infty (-1)^{n+1} n^{n^{n^{-s}}}\\
\]

Only looks to be holomorphic for \(\Re(s) > 1\) (at least, the zeta sum only converges there by the looks of it). My guess is that we are compounding too many functions, and our coefficient growth is no longer \(\log(x)\) but \(x \log^2(x)\). If I were to wager a guess, this continues for \(\zeta_G^{(K)}(s)\), where we get growth like \(x^{K-1}\log^{K}(x)\) in the coefficients.

If we write:

\[
h(n,k) = \sum_{j=0}^\infty \frac{\log(n)^j}{j!}j^k\\
\]

And:

\[
g^{(2)}(m) = (-1)^{m+1} \sum_{n\le m} h\left(n,\frac{\log(m)}{\log(n)}\right) \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!} \chi_m(n)\\
\]

\[
A^{(2)}(x) = \sum_{m \le \lfloor x \rfloor} g^{(2)}(m)\\
\]

Then these seem to grow much faster than \(\log(x)\); but they peter out, and it looks close to \(x\log^2(x)\). At least the first 1000 terms seem to behave like this. We also see much more clearly a pole in \(\zeta_G^{(2)}(s)\) at \(s=1\), where for \(\zeta_G\) this happens at \(s=0\). So it seems pretty safe to say we are only holomorphic for \(\Re(s) > 1\). I think \(\zeta_G^{(K)}(s)\) is holomorphic for \(\Re(s) > K-1\) for \(K\ge 1\), and most of my trials seem to support this.
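A rough numerical probe of the \(x\log^2(x)\) claim for the coefficients (my own sketch, assuming the \(g^{(2)}\) formula above with \(h(n,k)\) truncated): the dominant \(n=m,k=1\) term alone contributes \(h(m,1)\log m = m\log^2 m\), so the ratio against \(m\log^2 m\) should sit near \(1\).

```python
import math

def h(n, j, K=170):
    """Truncation of sum_{k>=0} log(n)^k / k! * k^j."""
    x = math.log(n)
    total, term = 0.0, 1.0        # term = x^k / k!
    for k in range(K):
        total += term * k ** j
        term *= x / (k + 1)
    return total

def g2(m):
    """g2(m) = (-1)^(m+1) * sum over n^k = m of h(n, k) * log(n)^k / k!."""
    if m == 1:
        return 1.0
    total = 0.0
    for n in range(2, m + 1):
        p, k = n, 1
        while p < m:
            p *= n
            k += 1
        if p == m:
            total += h(n, k) * math.log(n) ** k / math.factorial(k)
    return (-1) ** (m + 1) * total

ratio = abs(g2(1000)) / (1000 * math.log(1000) ** 2)
print(ratio)   # close to 1: the coefficients really do grow like m log^2(m)
```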

Here is the function \(g^{(2)}(m)\) from \(1 \le m \le 1000\):

   

And a comparison graph of \(x\log(x)^2\):

   

Here is a graph of \(\zeta^{(2)}_G(s)\) for \(1 \le s \le 4\)--the pole is pretty obvious.

   

Here is the function \(A^{(2)}(x)\) from \(1 \le x \le 1000\):

[To be posted when it finishes compiling Dodgy ]




Honestly, I'm surprised by how fast this chaos is balancing out. It's a real fucking headache to program; but I'm sure this is working to an extent! Jesus these are some fucked up zeta functions lmao.

Regards, James
Reply

