Posts: 1,214
Threads: 126
Joined: Dec 2010
02/02/2023, 07:31 AM
(This post was last modified: 02/02/2023, 07:44 AM by JmsNxn.)
Okay, so the Euler Formula I was able to get is not as nice as I'd hoped, but still pretty good.
I still think there is a better one out there. We start by writing:
\[
g(m) = (-1)^{m+1}\sum_{n\le m} \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\left(\frac{\log(m)}{\log(n)}\right)!}\chi_m(n)\\
\]
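For concreteness, here is a direct Python sketch of \(g(m)\). I'm reading \(\chi_m(n)\) as the indicator that \(\log(m)/\log(n)\) is a positive integer (i.e. \(n^k = m\) for some \(k \ge 1\)), and starting the sum at \(n=2\), since \(n=1\) contributes nothing under that reading; both are my assumptions about the earlier notation.

```python
from math import log, factorial

def chi(m, n):
    # assumed reading of chi_m(n): indicator that log(m)/log(n) is a
    # positive integer, i.e. n^k = m for some integer k >= 1
    if m < 2 or n < 2:
        return 0
    k = round(log(m) / log(n))
    return 1 if k >= 1 and n ** k == m else 0

def g(m):
    # g(m) = (-1)^(m+1) sum_{n<=m} log(n)^(log m/log n) / (log m/log n)! chi_m(n)
    total = 0.0
    for n in range(2, m + 1):
        if chi(m, n):
            k = round(log(m) / log(n))  # an integer whenever chi_m(n) = 1
            total += log(n) ** k / factorial(k)
    return (-1) ** (m + 1) * total
```

For a prime \(p\) the only term is \(n = p\), \(k = 1\), so \(g(p) = (-1)^{p+1}\log(p)\); for \(m = 4\) both \(n = 2\) and \(n = 4\) contribute.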
This is naively bounded as:
\[
|g(m)| \le \log(m) |\sqrt{\mathcal{I}_m}|\\
\]
If you don't trust my math on that, as it is a little tricky--this entire discussion can be done with the weaker bound \(|g(m)| \le \log(m)^{\log(m)} |\sqrt{\mathcal{I}_m}|\), and everything still applies. Here \(\sqrt{\mathcal{I}_m}\) is MphLee's radical. Now we can bound:
\[
|\sqrt{\mathcal{I}_m}| \le \Pi(m)\\
\]
We can also bound:
\[
\log(m) \le m\\
\]
So we can bound:
\[
|g(m)| \le m \Pi(m)\\
\]
This is not the best bound (a lot of this is using pretty weak bounds). We can actually strengthen this by finding \(\tau : \mathbb{N}\to\mathbb{N}\) such that \(\tau(m) \ge \log(m)\) but \(\tau(ab) = \tau(a) \tau(b)\) when \((a,b) = 1\) are coprime. I'm just using \(\tau(m) =m\) at the moment. So now, let's take:
\[
T(x,2) = \sum_{n=1}^\infty (-1)^{n+1} n^{n^{-x}} = \sum_{m=1}^\infty g(m)m^{-x}\\
\]
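Here's a numeric sanity check of this rearrangement, with the alternating sign \((-1)^{n+1}\) that makes the \(n\)-sum converge; it matches the \((-1)^{m+1}\) in \(g\), because \(n^k\) and \(n\) always have the same parity. The reading of \(\chi_m(n)\) as "\(n^k = m\)" is my assumption.

```python
from math import log, factorial, exp

X = 3.0      # a real point well inside absolute convergence
N = 100_000  # common truncation for both sides

# left: sum_{n>=2} (-1)^(n+1) (n^(n^-X) - 1); subtracting 1 removes the
# constant k = 0 term of the expansion n^(n^-X) = sum_k log(n)^k n^(-kX)/k!
lhs = sum((-1) ** (n + 1) * (exp(log(n) * n ** (-X)) - 1.0)
          for n in range(2, N + 1))

# right: Dirichlet coefficients |g(m)| sieved over pairs (n, k) with n^k = m
g = [0.0] * (N + 1)
for n in range(2, N + 1):
    k, p = 1, n
    while p <= N:
        g[p] += log(n) ** k / factorial(k)
        k += 1
        p *= n
rhs = sum((-1) ** (m + 1) * g[m] * m ** (-X) for m in range(2, N + 1))

assert abs(lhs - rhs) < 1e-6  # the two truncations agree to high accuracy
```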
We can bound, for \(\Re(x) = \sigma\):
\[
|T(x,2)| \le \sum_{m=1}^\infty m\Pi(m)m^{-\sigma} = |T|(\sigma)\\
\]
Now I suggest to anyone who hasn't seen this master-class result by Euler to go look at it instantly! https://en.wikipedia.org/wiki/Proof_of_t...a_function This is a great result to learn the history of, because it exemplifies just how bananas Euler was. This is largely considered the first Analytic Number Theory result. He used it to derive the product expansion of \(\frac{\pi^2}{6}\) over the primes!
I'm going to go quick, but we begin by writing:
\[
|T|(\sigma) = \prod_{p \,\text{prime}} \left(1+\sum_{j=1}^\infty p^j \Pi(p^j) p^{-j\sigma}\right)\\
\]
Where the product is over all prime numbers. To explain this, consider how the product expands:
\[
\left(1 + a^1 + a^2 +\dots\right)\left(1 + b^1 + b^2 +\dots\right) = \sum_{j=0}^\infty \sum_{k=0}^\infty a^j b^k\\
\]
We are doing the same thing, but with \(a\) and \(b\) ranging over the primes; by the fundamental theorem of arithmetic, every number is represented exactly once. The function \(\Pi\) satisfies \(\Pi(mn) = \Pi(m) \Pi(n)\) when \((m,n)=1\), and on prime powers it is just the divisor function...
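To see the mechanism on the simplest example, here is Euler's identity numerically: the product of \((1-p^{-s})^{-1}\) over primes reconstructs \(\sum_m m^{-s}\), because unique factorization pairs every \(m\) with exactly one prime-power term from each factor. A minimal sketch, using \(\zeta(2) = \pi^2/6\), the result mentioned above:

```python
from math import pi

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# each factor (1 + p^-s + p^-2s + ...) = 1/(1 - p^-s); multiplying over
# primes reconstructs sum_m m^-s by unique factorization
s = 2.0
product = 1.0
for p in primes_up_to(10_000):
    product *= 1.0 / (1.0 - p ** (-s))

assert abs(product - pi ** 2 / 6) < 1e-3
```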
Also, I would like to note, as I haven't at this point, that this is an analytic continuation of \(T(x,2)\) as I've constructed it, since it necessarily converges in a half plane. I never actually proved these things converged until the bound I just gave. I do believe we can analytically continue this even further, and I am optimistic there exists some kind of reflection formula. I'm not up to date on most advanced zeta function stuff, but I believe we can analytically continue \(T(x,2)\) to a bigger domain than \(\Re(x)>1\). I'd put my money on at least \(\Re(x) > 1/2\). We may have trouble in the critical strip, simply because \(g(m)\) looks a lot like a product of divisor functions--and this encodes too much info about primes....
But this is really important to note, because the function \(T(x,2)\) is holomorphic in weird domains for \(\Re(x) < 0\)--where, again, I did the variable change \(x \mapsto -x\) at the beginning of this discussion. So the function Gottfried is detailing in his work is the left half plane of the function I am detailing, and we're almost there with an analytic continuation. You would call Gottfried's work the "zeta regularization"--which is just a fancy way of saying the analytic continuation to the left of the half-plane where the zeta sum converges.
But this successfully analytically continues Gottfried's function to at least \(\Re(x) > 1\), where for \(\Re(x) > 2\) we have a cool prime bound. Gottfried's concerns were with \(\Re(x) < 0\) as far as I can tell. This leaves us, lo and behold, with the troublesome "critical strip" \( 0 < \Re(x) < 1\). This is solvable. I'll post the analytic continuation soon. This is a result using Abel's interpretation of zeta functions, and Abel sums and integrals acting on it. It shouldn't be hard at all, because we have such nice Abel sums with plain bounds...
So far, mostly, I have shown that Gottfried's sum is formally equal to the sum I wrote. I have shown my sum converges, and additionally that Gottfried's sum converges in some of the areas mine does. So it is a viable analytic continuation... I'm very interested to see if we can extend this more cleanly into one function...
Posts: 404
Threads: 33
Joined: May 2013
02/05/2023, 06:53 PM
(This post was last modified: 02/05/2023, 08:53 PM by MphLee.)
(01/31/2023, 11:46 PM)JmsNxn Wrote: EDIT: Just want to say I know what a characteristic function is. And I know what a radical is. But I didn't think this was a radical, so that's super cool, that this is a radical... lol I only know radicals from Analytic Number Theory, and I def wouldn't have called my function that. Still super cool! Thanks for your comments!
Excuse me... my answer was triggered by your comment
Quote:The function \(\chi_m(n)\) is called something. I can't remember right now. It's on the tip of my tongue...
Btw, I can't claim that this brings more than an equivalent way to formulate the problem. I really don't have the energy to work on it, as much as I would like. I conjecture that it might, based on the observation that many number theory problems were solved by translating them into ideal-theoretic language and then straight into algebraic geometry.
The story goes like this. Fix a base ring \(A\) for the rest of our discussion. On one hand we have ideals, special subsets of rings; on the other hand we have algebraic sets, subsets of some ambient space. Think of them as geometric figures whose points can be isolated as solutions of some equations, loci of equations in the language of the fixed ring.
Given a subset \(S\subseteq A[x] \) of the ring of polynomial functions, we look at the figure it defines, the locus over which all its elements vanish, in the ambient space: \({\mathcal V}(S)\subseteq A\).
In the other direction, given a set of points, a figure \(\Phi \subseteq A\) in the space, we ask for the ideal of polynomial functions vanishing over it, the one that makes it algebraic... think of being Descartes, finding a system of equations that has \(\Phi\) as its solution set, i.e. that describes that geometric figure: call that ideal \({\mathcal I}(\Phi)\). We get a correspondence going back and forth between ideals of the ring \(A[x]\) and geometric figures, affine sets, of the ambient space \(A\).
\[{\mathcal V}:{\rm ideals}(A[x]) \;\rightleftarrows\; {\rm affine\,sets}(A):{\mathcal I}
\]
Those arrows aren't mutually inverse, but they stand in some correspondence, a Galois correspondence if I recall correctly... I'm rusty on this, but it is something like an adjunction.
If you apply \(\mathcal V\) to an ideal, followed by \(\mathcal I\), you get the radical operation \(\sqrt{\,\cdot\,}\colon {\rm ideals}(A[x])\to {\rm ideals}(A[x])\), which is idempotent and a closure operator. But by Grothendieck, we can extend this from polynomial rings to every ring, by considering every element of a ring as a kind of function over the spectrum (the Zariski prime spectrum) of that ring, thus defining curves over it.
\[{\mathcal V}:{\rm ideals}(A) \;\rightleftarrows\; {\rm affine\,sets}({\rm spec}(A)):{\mathcal I}
\]
This is very sketchy, I know, and not rigorous on the fine details, but that's mostly the big picture.
The point is... our case is much more complex than this, because we are not working over a ring but over the monoid of natural numbers under multiplication. So in a sense those sets and your formulas belong to some sort of arithmetic geometry over the natural numbers. When we ask for the radical of a number, and we want to interpret it ideal-theoretically, we are implicitly thinking of numbers as if they were polynomial functions over a mysterious underlying space... something filling this gap
\[{\mathcal V}:{\rm ideals}(\mathbb N) \;\rightleftarrows\; {\rm affine\,sets}(???):{\mathcal I}
\]
I'm very ignorant... and I don't know exactly how to use all of this. But I know that this is deep, and that it is related to the passage from the integers to the rational numbers... and, for an arbitrary monoid, e.g. a monoid of functions, from integer powers to rational iteration, and thus to the discussion we were having in the last thread. If you add that integer iterates carry information on the periodic points of a map, you have the link to the Artin-Mazur zeta function.
But man.... I think this is as hard as doing non-commutative geometry... or algebraic geometry over the field with one element, motivic cohomology, and that scary stuff at the Grothendieck/Kontsevich/Connes level.
But I'll stop here, since I do not have anything of value to offer to Gottfried and his problem, again I'm sorry.
Mother Law \(\sigma^+\circ 0=\sigma \circ \sigma^+ \)
\({\rm Grp}_{\rm pt} ({\rm RK}J,G)\cong \mathbb N{\rm Set}_{\rm pt} (J, \Sigma^G)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/06/2023, 05:49 AM
(This post was last modified: 02/06/2023, 07:20 AM by JmsNxn.)
Yes, I've definitely only seen radicals in the sense of ideals; or in the sense of Ring Theory. I never really graduated far enough into Algebraic Geometry--but a lot of the results are generalizations of things from Ring Theory, and the like. I've only ever studied these things in context of zeta functions--as they can be used to create crazy zeta functions, and then prove results about the natural numbers/primes. And that's about the amount I've used it.
I honestly don't think we will need all the fancy stuff to analytically continue Gottfried's function. A lot of "analytically continuing zeta function" talk is just getting decently behaved asymptotics on the coefficients of the zeta function. But then, the actual details and discussions of the object, would be trying to prove where the zeroes are, to get more accurate/better asymptotic information on the coefficients.
I don't think Gottfried really cares too much about that--more so, it seems he's just interested in showing the zeta function is analytic. Which I've gotten for \(\Re(s) > 1\)--getting to \(\Re(s) > 0\) should be pretty easy, as we are just speeding up the sum to get it to converge for \(0 < \Re(s)\). Getting it for all \(s \in \mathbb{C}\) is much more difficult though--that is usually accomplished with a reflection formula. Reflection formulas are egregiously hard, and found on a case-by-case basis--but they all tend to follow the same pattern Riemann/Dirichlet laid out.
We might get lucky though, and find something about submultiplicative arithmetic functions, where \(\tau(ab) \le \tau(a) \tau(b)\) when \(a\) and \(b\) are coprime, but I'm not sure. Either way, I'm confident we can get this to \(\Re(s) > 0\); this analytic continuation is pretty standard, and we have a good bound of \(|g(m)| \le \log(m) |\sqrt{\mathcal{I}_m}|\), which has pretty modest growth...
I think I'll ask a question on mathoverflow--because this looks very similar to some things I've seen before. But I don't remember the names of these things (it's been 3 years since I've been in an analytic number theory class or read a book on it). But I'm going to go through some of my number theory books, to see if something sparks a memory...
I'm going to do the run through for a sanity check...
write:
\[
\begin{align}
A(m) & =\sum_{1\le n \le m} g(n)\\
\sum_{1 \le m \le t} g(m)m^{-x} &= A(t)t^{-x} +x\int_1^t A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\end{align}
\]
Taken straight from:
https://en.wikipedia.org/wiki/Abel%27s_s...on_formula
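The summation-by-parts identity can be checked exactly, since \(A(\lfloor y \rfloor)\) is a step function and \(x\int_m^{m+1} y^{-x-1}\,dy = m^{-x}-(m+1)^{-x}\) in closed form; note the plus sign this produces on the integral term. A sketch with a stand-in \(g\) (any coefficients work; the identity is just summation by parts):

```python
from math import log

def check_abel(x, t):
    # stand-in coefficients; the identity holds for any g
    g = lambda m: (-1) ** (m + 1) * log(m + 1)
    A = [0.0] * (t + 1)
    for m in range(1, t + 1):
        A[m] = A[m - 1] + g(m)
    lhs = sum(g(m) * m ** (-x) for m in range(1, t + 1))
    # x * integral_1^t A(floor(y)) y^(-x-1) dy, evaluated exactly piecewise:
    # x * integral_m^{m+1} y^(-x-1) dy = m^(-x) - (m+1)^(-x)
    integral_part = sum(A[m] * (m ** (-x) - (m + 1) ** (-x)) for m in range(1, t))
    rhs = A[t] * t ** (-x) + integral_part
    return lhs, rhs

lhs, rhs = check_abel(1.5, 200)
assert abs(lhs - rhs) < 1e-9
```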
So, we know the growth of \(A(t)\) is probably about \(t\log(t)\), so for \(\Re(x)>1\) the boundary term \(A(t)t^{-x}\) vanishes as \(t\to\infty\), and we have the expression:
\[
T(x,2) = x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\]
Now this is only good for \(\Re(x) > 1\), but it gives us a new expression to work with, which can be analytically continued. I remember how to do this with the zeta function, but it's escaping me at the moment how to do this with Dirichlet series (which our function more closely resembles)...
Just read this to refresh my memory... https://math.colorado.edu/~rohi1040/expo...alytic.pdf
Okay, so bit of a curve ball; we need to do a few more steps, and I think we have to work with \(\chi_m(n)\) a bit more vigilantly. For the moment, I'm going to assume that:
\[
A(\lfloor y \rfloor) \le \lfloor y \rfloor \log \lfloor y \rfloor\\
\]
So, then:
\[
|T(x,2)| \le \left|\frac{x}{(x-1)^2} - x \int_1^\infty (y \log y - \lfloor y \rfloor \log \lfloor y \rfloor) y^{-x-1}\,dy\right|
\]
The expression on the right converges for \(\Re(x) > 0\), but I'm not remembering how to make \(T(x,2)\) bounded by this for more than \(\Re(x) >1\). I know there's a way, I'm just not remembering. FFS. I'll keep reading, lmao.
Edit: Nvm, I'm an idiot. The equation I conjecture to analytically continue for \(\Re(x) > 0\) is pretty simple. I'll walk you through it.
We start with:
\[
T(x,2) = x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\]
We assume \(A(\lfloor y \rfloor ) \le y \log(y)\). The problem is that we need a bound tight enough that:
\[
A(\lfloor y \rfloor) - y \log(y) = O(1)\\
\]
If this bound doesn't work, I wouldn't be surprised if it's something like \(y^{3/2}\log(y)\), or something similar, which will still be tight enough. We have a bit of leeway here. I'm just going to assume we get the best bound possible, but in the worst case scenario, it'll still look "something" like this.
Note to self:
\[
x\int_1^\infty \log(y)y^{-x}\,dy = \frac{x}{(x-1)^2}\\
\]
Which you can check for yourself (differentiate \(\int_1^\infty y^{-x} \,dy = \frac{1}{x-1}\) with respect to \(x\): the left side picks up a factor of \(-\log(y)\), and the right side differentiates to \(-\frac{1}{(x-1)^2}\)). Then we are left with:
\[
\begin{align}
T(x,2) &= x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
&= x\int_1^\infty \left(A(\lfloor y \rfloor) - y\log(y) + y\log(y)\right)y^{-x-1}\,dy\\
&= x\int_1^\infty \left(A(\lfloor y \rfloor) - y\log(y)\right) y^{-x-1}\,dy + \frac{x}{(x-1)^2}\\
\end{align}
\]
This expression is holomorphic for \(\Re(x) > 0\)... given our assumptions.
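The note-to-self integral can be checked numerically via the substitution \(u = \log(y)\), which turns it into \(\int_0^\infty u e^{(1-x)u}\,du = \frac{1}{(x-1)^2}\); a plain trapezoid-rule sketch, no external libraries:

```python
from math import exp

def integral_log_pow(x, upper=60.0, steps=600_000):
    # trapezoid rule for integral_1^inf log(y) y^(-x) dy after the
    # substitution u = log(y): integral_0^inf u e^((1-x)u) du
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * u * exp((1.0 - x) * u)
    return h * total

x = 3.0
approx = x * integral_log_pow(x)
exact = x / (x - 1.0) ** 2   # = 3/4 at x = 3
assert abs(approx - exact) < 1e-6
```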
We maybe have to do better than this, and I'm starting to remember more of this shit. I think the big problem is figuring out how fast \(A(m)\) grows, and getting a tight bound on its growth. It'll look something like \(y\log(y)\), but it could be an order bigger when using a tight bound. So we might have to use something less clean--which ultimately makes the \(x/(x-1)^2\) a more complicated rational function. I'll keep thinking about this, I'm sure there's something stupid I'm missing, lmaooo.
Almost certainly, it'll at least be \(C y \log(y)\) for some \(C\), and not \(C=1\). We'll have to do better. A good way to test this is to check:
\[
\lim_{m\to\infty} \frac{A(m)}{m\log(m)} = C\\
\]
Which gives us our \(C\). And then we have to check the rate of convergence to this constant, which is a whole rigmarole. The rate it approaches this constant essentially tells us whether we get a \(\Re(x) > 0\) analytic continuation, or only \(\Re(x) > \delta\) for some \(0 < \delta < 1\). This is famously a bitch to do. I'll see if I can dig up some reasonable bounds on \(g(m)\) and \(A(m)\). Might post a reference request on mathoverflow....
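For what it's worth, the limit above can be probed numerically. Here's a rough sketch; I'm reading \(\chi_m(n)\) as the indicator that \(n^k = m\) for some integer \(k \ge 1\), and applying the \((-1)^{m+1}\) sign from the earlier definition when accumulating, both of which are my assumptions about the notation:

```python
from math import log, factorial

N = 50_000
# |g(m)| sieved over pairs (n, k) with n^k = m
g = [0.0] * (N + 1)
for n in range(2, N + 1):
    k, p = 1, n
    while p <= N:
        g[p] += log(n) ** k / factorial(k)
        k += 1
        p *= n

# A(m) = sum_{n<=m} g(n), with the sign (-1)^(m+1) applied per term
A = [0.0] * (N + 1)
for m in range(2, N + 1):
    A[m] = A[m - 1] + (-1) ** (m + 1) * g[m]

# empirical ratios A(m) / (m log m) at a few checkpoints
ratios = {m: A[m] / (m * log(m)) for m in (100, 1_000, 10_000, 50_000)}
```

In this range the alternating signs cause heavy cancellation, so the ratios come out tiny; that at least suggests there is plenty of leeway in the \(y\log(y)\) assumption.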
I'm going to change gears for a moment, before ending, and define the following function:
\[
\upsilon(x) = \sum_{m=1}^\infty \sum_{1\le n \le m} \chi_m(n) m^{-x}\\
\]
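Concretely, under my reading of \(\chi_m(n)\) (the indicator that \(n^k = m\) for some \(k\ge 1\)), the inner sum here just counts the representations of \(m\) as a perfect power, trivial one included. A quick sketch:

```python
from math import log

def chi_count(m):
    # assumed reading of chi_m(n): indicator that log(m)/log(n) is a
    # positive integer; n = 1 contributes nothing under this reading
    count = 0
    for n in range(2, m + 1):
        k = round(log(m) / log(n))
        if k >= 1 and n ** k == m:
            count += 1
    return count

assert chi_count(12) == 1   # only 12 = 12^1
assert chi_count(16) == 3   # 16 = 16^1 = 4^2 = 2^4
assert chi_count(64) == 4   # 64 = 64^1 = 8^2 = 4^3 = 2^6

# a partial sum of upsilon(x) at a real point, as a quick numeric handle
upsilon_partial = sum(chi_count(m) * m ** (-2.0) for m in range(2, 2000))
```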
I think we should work with this, and get \(\Re(x) > 0\) for this first, before tackling the much harder:
\[
g(m) = \sum_{1 \le n \le m} \frac{\log(n)^{\log(m)/\log(n)}}{(\log(m)/\log(n))!} \chi_m(n)\\
\]
Even though, Gottfried's \(g(m)\) will be relatively similarly behaved...
EDIT:
Here's the mathoverflow question: https://mathoverflow.net/questions/44023...nting-zeta
I'm also thinking we should start with the zeta function:
\[
\pi(x) = \sum_{m=1}^\infty (-1)^m \Pi(m) m^{-x}\\
\]
I believe that \(q(m) = \Pi(m) + O(\log(m))\), and that \(\Pi(m)\) is actually a really good estimate. And then we'll be a step closer to getting a tight bound \(g(m) \approx \log(m)\Pi(m)\). I think this bound is much better than I originally guessed. I'm going to have to crack my knuckles and get working. This is super interesting! God, nice mileage on this problem! Gottfried's got awesome questions!
Posts: 51
Threads: 6
Joined: Feb 2023
(02/06/2023, 05:49 AM)JmsNxn Wrote: Yes, I've definitely only seen radicals in the sense of ideals; or in the sense of Ring Theory. [...]
I have not looked in great detail at all the work you have done on this problem, but it seems really quite interesting. Excuse me for the simple question, but I am not well versed in number theory--is the general idea for analytic continuation to obtain sufficiently simple and tight bounds on \( A(x) \), so that \( A(x) \) minus the bound is small enough to be integrated, and then the integral over the subtracted bound can be analytically continued as well?
Also, I should add that from my own look into this series, it appears to have a pretty nasty natural boundary at \( \mathfrak{R}(s) =1 \). In my own eyes, it looks almost modular-form-like, except that the poles on the boundary have an extra 'tail' at some fixed angle that contains more poles. Anyway, if this type of argument is correct, it might be more natural to look at the function after applying the map \( z \to -i(z-1) \). Then you can study the function on the upper half plane, which may make it more likely you run into some sort of familiar number theory object. Anyway, I look forward to what you come up with--I'd love to see the number-theoretic significance of this function, since the methods I tend to use don't usually reveal those structures.
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/08/2023, 04:32 AM
(This post was last modified: 02/08/2023, 04:52 AM by JmsNxn.)
(02/06/2023, 09:44 PM)Caleb Wrote: I have not looked in great detail at all the work you have done on this problem, but it seems really quite interesting. Excuse me for the simple question, but I am not well versed in number theory--is the general idea for analytic continuation to obtain sufficiently simple and tight bounds on \( A(x) \), so that \( A(x) \) minus the bound is small enough to be integrated, and then the integral over the subtracted bound can be analytically continued as well?
Also, I should add that from my own look into this series, it appears to have a pretty nasty natural boundary at \( \mathfrak{R}(s) =1 \). In my own eyes, it looks almost modular-form-like, except that the poles on the boundary have an extra 'tail' at some fixed angle that contains more poles. Anyway, if this type of argument is correct, it might be more natural to look at the function after applying the map \( z \to -i(z-1) \). Then you can study the function on the upper half plane, which may make it more likely you run into some sort of familiar number theory object. Anyway, I look forward to what you come up with--I'd love to see the number-theoretic significance of this function, since the methods I tend to use don't usually reveal those structures.
Well, that is a difficult question to answer, Caleb! But I will say that tight bounds on your function \(A(x)\) will translate to tight bounds on the zeta function; that's a given. The manner I employed in that post is just one of many ways to analytically continue zeta functions. It dates to Riemann and Dirichlet, most notably. There are definitely more sophisticated ways that people use now--but similar ideas are used. Usually more clever ideas. I'm kind of just throwing shit at the wall and seeing what sticks.
And yes, I absolutely believe that this function is meromorphic in a neighborhood of \(\Re(s) = 1\), and that it's going to be a very ugly meromorphic function. If it's a boundary at \(\Re(s) = 1\), then that's cool too! A boundary, to me, means that there is a dense set of singularities on \(1 + i\mathbb{R}\). And that would mean something super cool!
It would mean the tight bound is something like \(x\)--which would be absolutely bananas! I'm going to run you through it:
\[
\nu(s) = s \int_1^\infty A(x)x^{-s-1}\,dx\\
\]
Let's assume there is a WALL of singularities at \(\Re(s) = 1\) (as opposed to me just thinking there are a bunch of singularities). Now let's pull out the oldest trick in the book in analytic number theory! This is known as Perron's formula https://en.wikipedia.org/wiki/Perron%27s_formula, which helps us analytically express \(A(x)\) from \(\nu\).
Let \(c>1\), then
\[
A(x) = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \frac{\nu(s)x^{s}}{s}\,ds\\
\]
So if we can let \(c \to 1\) with no poles, then for all \(\epsilon > 0\):
\[
A(x) = O(x^{1+\epsilon})
\]
So far everything is good here--and I can prove this. The thing is, if there is a WALL of singularities at \(1 + i\mathbb{R}\), then this is what's known as a TIGHT bound, because you cannot do any better. Which would mean the tight bound on \(A(x)\) is \(O(x)\). This could definitely help us a lot!!! (Remember \(\nu(s)\) isn't the function we care about, it's the lame version of Gottfried's function!). But this would mean that \(\nu(s)\) is landlocked to \(\Re(s) > 1\).
Normally, to get a better bound, say by moving \(\sigma < c < 1\), we get:
\[
A(x) = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \frac{\nu(s)x^{s}}{s}\,ds - \sum \text{Res}\\
\]
And we pray the residues have nice asymptotics in \(x\)...
Fun fact! This is how you prove that the Mertens conjecture implies the Riemann hypothesis https://en.wikipedia.org/wiki/Mertens_function. If the function:
\[
M(x) = \sum_{n \le x} \mu(n) = O(\sqrt{x})\\
\]
Where:
\[
\begin{align}
\frac{1}{\zeta(s)} &= \sum_{n=1}^\infty \mu(n) n^{-s}\\
&= s\int_1^\infty M(x)x^{-s-1}\,dx\\
\end{align}
\]
Then Riemann's hypothesis is true. Because:
\[
M(x) = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \frac{x^{s}}{\zeta(s)s}\,ds
\]
If we can move \(c \to 1/2\) without hitting any poles of \(1/\zeta(s)\)--then \(M(x)\) is bounded by \(\sqrt{x}\). But this just means \(\zeta\) has no zeroes for \(1/2 < \Re(s) < 1\)--which is the Riemann Hypothesis, lmao.
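Here's a minimal sketch of \(M(x)\) itself, if you want to play with this; the Möbius values come from a standard linear sieve, and the last line just eyeballs the Mertens-type bound in a small range (which of course proves nothing about RH):

```python
def mertens_values(limit):
    # Moebius function via a standard linear sieve
    mu = [0] * (limit + 1)
    mu[1] = 1
    primes = []
    is_comp = [False] * (limit + 1)
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    # cumulative M(x) = sum_{n<=x} mu(n)
    M = [0] * (limit + 1)
    for n in range(1, limit + 1):
        M[n] = M[n - 1] + mu[n]
    return M

M = mertens_values(10_000)
assert M[10] == -1 and M[100] == 1 and M[1000] == 2
# |M(x)| <= sqrt(x) throughout this small range
assert all(abs(M[x]) <= x ** 0.5 for x in range(1, 10_001))
```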
A lot of this stuff can be very counterintuitive, and confusing as fuck. The complex analysis is usually what trips people up, because it's very non-conventional, and tends to look like magic. But a lot of it follows from somewhat basic building blocks that add together in very, very weird ways (kind of like quantum physics: basic building blocks, but once you add it all up, it adds up in a really weird way). It can get difficult to remember the rules.
Also, I'm out of practice. I've been doing so much complex dynamics, that I'm having to refresh my memories on so many things (I know it looks something like this, but I don't know the fine details, lol).
Thanks for your comments! Excited that you are excited! I should probably run some numerical trials myself, just haven't bothered, lmao!
Regards, James
EDIT:
Just realized you may be referring to Gottfried's function when you talked about the wall of singularities, and not \(\nu\). Do you mind sharing more details? Are you using my change of variables \(x \mapsto -x\)? That is, when you say wall of singularities at \(\Re(s) = 1\), are you talking about \(\sum n^{n^s}\) or \(\sum n^{n^{-s}}\)? If there's a wall of singularities at \(\Re(s) =1\) in the latter case, the best result I can pull out is:
\[
\sum_{m\le x} \sum_{n\le m} \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\left(\frac{\log(m)}{\log(n)}\right)!}\chi_m(n) = O(x)\\
\]
And we may be shit out of luck for a further analytic continuation... But that's still a fairly non-trivial result... To be honest, I don't think I've ever heard of zeta functions having walls of singularities, or natural boundaries in this manner. More often than not, the sums and numerical procedures start to spaz out, and what looks like chaos is just chaos in your calculator, not the actual platonic form of the function. So I doubt it's a wall; probably just mad singularities (i.e. a lot of singularities, versus a genuinely dense set of them).
EDIT:
I'm pretty sure that zeta functions cannot have "walls of singularities" in the same way we see them in dynamics, where a natural boundary forms. At least, not if we ask that the coefficients \(f(n)\) of the zeta function satisfy \(\sum_{n\le x} f(n) = O(x)\)--what's far more likely is that it's just A LOT of singularities, and that this makes the calculator spaz out and look more chaotic than it is. I will write my own calculator for Gottfried's function and report back.
Posts: 51
Threads: 6
Joined: Feb 2023
I haven't read the whole post yet in detail (I'd like to take some time to think about the ideas you bring up), but as a quick note, let me say a few things. First, the natural boundary I was talking about is in the dense-set-of-singularities sense: a line across which the function cannot be analytically continued, since the singularities are so plentiful there is no path through them (i.e. every open interval on the line, of any size, contains a singularity).
I haven't gone through and proved the function is not meromorphic, but, in case you didn't already know-- "most" functions have a natural boundary! For instance, if you pick a random Taylor series, it will have a natural boundary at the unit circle (with probability 1). A similar result holds for Dirichlet series: if you pick one with random coefficients, it will have a natural boundary on some line. In fact, there is a theorem (Pólya–Carlson) that has changed my view forever: a power series with integer coefficients and radius of convergence 1 is either a rational function or has the unit circle as a natural boundary! This is pretty absurd-- my own 'corollary' is that "A power series with integer coefficients is either interesting or doesn't have a natural boundary." Nowadays, whenever I see a function with lots of singularities, I pretty much just automatically assume it's got a natural boundary.
However, I do actually have some reasons to believe Gottfried's function has a natural boundary. Consider instead the series
\[F_x(z)= \sum (-1)^n z^{n^x}\]
I write the series with x as a subscript to emphasize that I fix x and think of z as varying. For integer x>1 this function actually has a natural boundary on the unit circle, by something called the Fabry gap theorem, which says that when the exponents \(\lambda_n\) grow faster than linearly (i.e. \(\lambda_n/n \to \infty\)), the circle of convergence is a natural boundary. Notice that at x=2, \( F_2(z)\) is basically the theta function, so it's pretty much a modular form. One also has that
\[G_x(z)= \sum (-1)^n z^{\lfloor (n^x) \rfloor}\]
has a natural boundary for all real numbers x>1. Even without the floor being applied, I'd say it's a safe bet that it should still have a natural boundary. Basically, if the exponents grow too fast, you should expect a natural boundary. Now, Gottfried's function
\[ \sum (-1)^n n^{n^x}\]
grows even faster than the other functions, so it probably should have a natural boundary. In fact, the line \( \Re(x)=1 \) is right where the growth of the coefficients graduates beyond even factorial growth-- it becomes too fast-growing for Borel transformations to work. I've never heard of a function with such a fast growth rate that doesn't have a natural boundary-- I don't know if such a thing is even possible. So anyway, Gottfried's function probably has a natural boundary, or at least, I would be highly surprised if it didn't have a boundary there.
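As an aside, the gap condition behind the Fabry argument is easy to check numerically. The following Python sketch (my own illustration, not part of the thread) verifies that the exponents \(\lambda_n = n^2\) of \(F_2\) satisfy \(\lambda_n/n \to \infty\), and that strictly inside the unit disc the lacunary series still converges extremely fast:

```python
# Sketch (my own, not from the thread): the exponents of
# F_2(z) = sum_{n>=1} (-1)^n z^(n^2) are lambda_n = n^2.
# Fabry's gap theorem applies when lambda_n / n -> infinity,
# which here is simply lambda_n / n = n.

def F2_partial(z, N):
    """Partial sum sum_{n=1}^{N} (-1)^n z^(n^2)."""
    return sum((-1) ** n * z ** (n * n) for n in range(1, N + 1))

# The gap ratios grow without bound: 1, 2, 3, 4, 5, ...
ratios = [(n * n) / n for n in range(1, 6)]

# Inside the disc, convergence is very fast: at z = 0.5 the tail
# beyond n = 8 is below 0.5^64, so two truncations agree closely.
a = F2_partial(0.5, 50)
b = F2_partial(0.5, 100)
```

The natural boundary only shows itself as \(|z| \to 1\); for \(|z|\) bounded away from 1, the gaps make the partial sums settle faster than a geometric series.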
Posts: 1,214
Threads: 126
Joined: Dec 2010
(02/08/2023, 05:28 AM)Caleb Wrote: [...] In fact, the line \( \mathfrak{R}(x)>1 \) is right where the growth of the coefficents graduates beyond even factorial growth [...] So anyway, Gottfried's function probably has a natural boundary, or at least, I would be highly surprised if it didn't have a boundary there.
AHA!
You did not apply my change of variables, as I suspected!!!!!!
I am looking at:
\[
\sum n^{n^{-s}}\\
\]
Which is where my "zeta" function talk started! Gottfried's summation formula does not hold for this function; my function is simply a holomorphic function which formally satisfies the same infinite-series manipulations that Gottfried's would if it were entire, or if we were in a perfect world.
So my function is written as:
\[
P(s) = \sum_{m=1}^\infty g(m) m^{-s} = T(-s,2)\\
\]
The convergence of my zeta function is for \(\Re(s) > 1\)--but in your/Gottfried's language, this means it converges for \(\Re(s) < -1\). My main goal has been to analytically continue Gottfried's function--and I started by observing the partial sums:
\[
P_N(s) = \sum_{m=1}^N g(m) m^{-s}\\
\]
Which look A LOT like:
\[
\sum_{n=1}^N n^{n^{-s}}\\
\]
I fully suspect that Gottfried's function diverges on the boundary--but numerically analysing that is not the best way, in my opinion. And as much as I love a lot of Polya's work, it's also meant to be taken with a grain of salt. My hope is to write a continuation to the right half-plane of the function which converges for \(\Re(s) < -1\); and these coefficients grow like \(O(\log(m))\) (I just ran some code to double-check my assumptions, and I was right about how we should expect the zeta coefficients to grow).
I think our mix-up is that you are using Gottfried's function, while I am changing the variables \(s \mapsto -s\)--and then I am finding that this is actually a zeta function. And then I am wondering if we can analytically continue the zeta function. Just because it "looks" like a natural boundary does not mean it is one. But that'd be super cool if it is. That would mean we have TWO distinct functions, one holomorphic for \(\Re(s) > 1\) and one holomorphic for \(\Re(s) < -1\); and there's a bunch of mystery there. Despite both "formally" equaling Gottfried's series.
Hope that makes sense!
I've caved and started running some code and am currently making graphs as we speak. But graphing in PariGP using Mike3's program is very time consuming. It is super accurate, but very time consuming!
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/08/2023, 09:21 AM
(This post was last modified: 02/08/2023, 01:02 PM by JmsNxn.)
As I'm waiting for the detailed graphs of \(\nu(s)\) and \(T(-s,2) = P(s)\)... I'll post some graphs on growth.
We recall that:
\[
\begin{align}
q(m) &= (-1)^m\sum_{n \le m} \chi_{m}(n)\\
Q(x) &= \sum_{m \le \lfloor x \rfloor} q(m)\\
\nu(s) &= \sum_{m=1}^\infty q(m) m^{-s}\\
&= -s\int_1^\infty Q(x)x^{-s-1}\,dx\\
\end{align}
\]
Then we're going to graph \(Q\) in the meantime:
This is a graph of \(Q(x)\) for \(x=1\) up to \(x=1000\). Notably, we never see \(|Q(x)| > 8\). This function has at most a growth of \(\log(x)\), as I predicted. But it could even be \(O(1)\). A growth of \(\log(x)\), or something close to it, means that \(\nu(s)\) is holomorphic for \(\Re(s) > 0\).
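For anyone wanting to reproduce the \(Q(x)\) experiment without PARI/GP, here is a minimal Python sketch of the same computation (the names are mine, not taken from the attached zeta2.gp). It assumes \(\chi_m(n) = 1\) iff \(n^k = m\) for some \(k \in \mathbb{N}\), as defined later in the thread:

```python
# Minimal sketch (my own naming, not from zeta2.gp) of the Q(x)
# experiment, assuming chi_m(n) = 1 iff n^k = m for some k >= 1.

def chi(m, n):
    """Indicator: 1 if n^k = m for some natural k, else 0."""
    if n == 1:
        return 1 if m == 1 else 0
    p = n
    while p < m:
        p *= n
    return 1 if p == m else 0

def q(m):
    """q(m) = (-1)^m * sum_{n <= m} chi_m(n)."""
    return (-1) ** m * sum(chi(m, n) for n in range(1, m + 1))

# Partial sums Q(x) for x = 1 .. 1000
Q, total = [], 0
for m in range(1, 1001):
    total += q(m)
    Q.append(total)
```

Running this reproduces the boundedness seen in the graph: the maximum of \(|Q(x)|\) over \(x \le 1000\) stays in the single digits.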
The second function is Gottfried's zeta function, and the growth of its coefficients.
\[
\begin{align}
g(m) &= (-1)^m\sum_{n \le m} \frac{\log(n)^{\frac{\log(m)}{\log(n)}}}{\frac{\log(m)}{\log(n)}!}\chi_{m}(n)\\
A(x) &= \sum_{m \le \lfloor x \rfloor} g(m)\\
P(s) &= \sum_{m=1}^\infty g(m) m^{-s}\\
&= -s\int_1^\infty A(x)x^{-s-1}\,dx\\
\end{align}
\]
The next graph is \(A(x)\) over \(x=1\) up to \(x=1000\), and you will again notice that it is about \(\log(x)\) in growth!
[To be posted when this shit finishes compiling. It is taking wayyyyyyyyy longer than I thought it would. And I don't want to lose this post. Either way, it's going to look like \(Q(x)\), as my calculator is saying. The value \(Q(x)\) is only a little off from \(A(x)\). But the run time to get \(A(x)\) is much longer...]
EDIT: As soon as I wrote that disclaimer, the program compiled 30 mins later:
This looks exactly like \(\log(x)\). It never breaks \(|A(x)| > 8\)--it's just a tad more chaotic.
Here is the basic code I am using:
zeta2.gp (Size: 6.4 KB / Downloads: 444)
Open it in a text editor before compiling and running. The readme is in the comments of the source.
Please observe, what I mean is that:
\[
\sup_{0 \le x \le X} |A(x)| \le C\log(X)\\
\]
And, the example I used is:
\[
\sup_{0 \le x \le 1000} |A(x)| \le 8\\
\]
Where, at worst, we grow just as slowly.
Posts: 51
Threads: 6
Joined: Feb 2023
(02/08/2023, 05:52 AM)JmsNxn Wrote: I think our mix up is you are using Gottfried's function. I am changing the variables \(s \mapsto -s\)--and then I am finding that this is actually a zeta function. And then I am wondering if we can analytically continue the zeta function. Just because it "looks" like a natural boundary, does not mean it is one. But that'd be super cool if it is . That would mean we have TWO distinct functions, one holomorphic for \(\Re(s) > 1\) and one holomorphic for \(\Re(s) < -1\); and there's a bunch of mystery there. Despite both "formally" equaling Gottfried's series.
Are you proposing that the change of variables is actually doing something to affect the convergence? I had not been making a distinction about changing variables, since I was under the impression that to go from Gottfried's function to your function is a simple matter of replacing s with -s. Is Gottfried's function not simply a zeta function in the variable -s? I'm aware it's possible that sometimes formally changing a variable can end up leading to a different function (for instance, maybe it ends up on a different branch); is that what you're saying is happening here? Also, the extension I have for Gottfried's function \( \sum (-1)^n n^{n^x} \) goes up to \( \Re(s) < 1 \), not \( \Re(s) < -1 \). This should correspond to \( \Re(s) > -1 \) for your function. I suspect you should find much more chaos as you get towards that line.
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/10/2023, 02:47 AM
(This post was last modified: 02/10/2023, 03:18 AM by JmsNxn.)
(02/08/2023, 08:40 PM)Caleb Wrote: [...] Also, the extension I have for Gottfrieds function \( \sum (-1)^n n^{n^x} \) goes up to \( \Re(s) < 1 \) not \( \Re(s) < -1 \). This should correspond to \( \Re(s) > -1 \) for your function. I suspect you should find much more chaos as you get towards that line.
AHHH! I see, I apologize. I was confused!
Then yes we are on the exact same page
I've made some graphs I'll post in a bit, but the zeta series appears to converge for \(\Re(s) > 0\), where I presume we can analytically continue using Gottfried's expansion. I apologize!
If you are saying that:
\[
\sum (-1)^n n^{n^{-s}}
\]
converges for \(\Re(s) > -1\), we are on the same page. I actually haven't looked at this sum; I just instantly wrote it as a zeta function and only looked at that--so I wasn't sure where the original series converged.
It also appears the zeroes are on \(\Re(s) = 1/2\) in the critical strip... I'm having Riemann Hypothesis Vietnam flashbacks.
So I accidentally closed one of the graphs mid compile, but the other one finished. So here is:
\[
\nu(s) = \sum_{m=1}^\infty q(m) m^{-s}\\
\]
Where:
\[
q(m) = (-1)^m \sum_{n \le m} \chi_m(n)\\
\]
And \(\chi_m(n) =1\) if and only if \(n^k = m\) for some \(k \in \mathbb{N}\).
Then here is a graph from:
\(0.5 \le \Re(s) \le 5.5\) and \(|\Im(s)| \le 2.5\):
The graph for Gottfried's function didn't finish--it only got through most of the top half--so I'm going to recompile. I am also graphing the critical strip for both, to see what that could look like.