Yes, I've definitely only seen radicals in the sense of ideals, or in the sense of Ring Theory. I never really graduated far enough into Algebraic Geometry--but a lot of those results are generalizations of things from Ring Theory, and the like. I've only ever studied these things in the context of zeta functions--as they can be used to create crazy zeta functions, and then prove results about the natural numbers/primes. And that's about the extent to which I've used it.
I honestly don't think we will need all the fancy stuff to analytically continue Gottfried's function. A lot of "analytically continuing the zeta function" talk is just getting decently behaved asymptotics on the coefficients. The actual detailed study of the object would then be trying to prove where the zeroes are, to get more accurate asymptotic information on the coefficients.
I don't think Gottfried really cares too much about that--it seems he's more just interested in showing the zeta function is analytic. Which I've gotten for \(\Re(s) > 1\)--getting to \(\Re(s) > 0\) should be pretty easy, as we are just speeding up the sum to get it to converge there. Getting it for all \(s \in \mathbb{C}\) is much more difficult though--that's usually accomplished with a reflection formula. Reflection formulas are egregiously hard, and found on a case-by-case basis--but they all tend to follow the same pattern Riemann/Dirichlet laid out.
We might get lucky though, and find something about submultiplicative arithmetic functions, where \(\tau(ab) \le \tau(a) \tau(b)\) for coprime \(a\) and \(b\)--but I'm not sure. Either way, I'm confident we can get this to \(\Re(s) > 0\); this kind of analytic continuation is pretty standard, and we have a good bound \(|g(m)| \le \log(m) \sqrt{|\mathcal{I}_m|}\), which has pretty modest growth...
I think I'll ask a question on MathOverflow--because this looks very similar to some things I've seen before. But I don't remember the names of these things (it's been 3 years since I've taken an analytic number theory class or read a book on it). I'm going to go through some of my number theory books, to see if something sparks a memory...
I'm going to do the run-through for a sanity check...
write:
\[
\begin{align}
A(m) & =\sum_{1\le n \le m} g(n)\\
\sum_{1 \le m \le t} g(m)m^{-x} &= A(t)t^{-x} +x\int_1^t A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\end{align}
\]
Taken straight from:
https://en.wikipedia.org/wiki/Abel%27s_s...on_formula
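As a quick numerical sanity check of the formula, here's a sketch with a toy coefficient sequence \(g(n) = \log(n)\) (a hypothetical stand-in; the real coefficients depend on \(\chi_m\)). Note that for \(\varphi(y) = y^{-x}\), the \(-\int A\varphi'\) term in Abel summation comes out with a plus sign:

```python
import math

# Sanity check of Abel's summation formula with a toy coefficient
# sequence g(n) = log(n) (hypothetical stand-in; the real g depends
# on chi_m).  For phi(y) = y^{-x} the formula reads
#   sum_{m<=t} g(m) m^{-x} = A(t) t^{-x} + x * int_1^t A(floor(y)) y^{-x-1} dy,
# since -int A(y) phi'(y) dy picks up a plus sign.

def g(n):
    return math.log(n)

def A(m):
    return sum(g(n) for n in range(1, m + 1))

def lhs(t, x):
    return sum(g(m) * m ** (-x) for m in range(1, t + 1))

def rhs(t, x):
    # A(floor(y)) is constant on each [k, k+1), so the integral is an
    # exact finite sum:  int_k^{k+1} y^{-x-1} dy = (k^{-x} - (k+1)^{-x}) / x.
    integral = sum(A(k) * (k ** (-x) - (k + 1) ** (-x)) / x
                   for k in range(1, t))
    return A(t) * t ** (-x) + x * integral

t, x = 40, 2.0
err = abs(lhs(t, x) - rhs(t, x))
```

The two sides agree to machine precision here, since \(A(\lfloor y \rfloor)\) is constant on each \([k, k+1)\) and the integral can be evaluated exactly piecewise.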
So, we expect the growth of \(A(t)\) to be about \(t\log(t)\); then \(A(t)t^{-x} \to 0\) as \(t \to \infty\), and for \(\Re(x)>1\) we have the expression:
\[
T(x,2) = x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\]
Now this is only good for \(\Re(x) > 1\), but it gives us a new expression to work with, which can be analytically continued. I remember how to do this with the zeta function, but how to do it for general Dirichlet series (which our function more closely resembles) is escaping me at the moment...
Just read this to refresh my memory... https://math.colorado.edu/~rohi1040/expo...alytic.pdf
Okay, so bit of a curveball: we need to do a few more steps, and I think we have to treat \(\chi_m(n)\) a bit more carefully. For the moment, I'm going to assume that:
\[
A(\lfloor y \rfloor) \le \lfloor y \rfloor \log \lfloor y \rfloor\\
\]
So, then:
\[
|T(x,2)| \le |x \int_1^\infty (y \log y - \lfloor y \rfloor \log \lfloor y \rfloor) y^{-x-1}\,dy - \frac{x}{(x-1)^2}|
\]
The expression on the right converges for \(\Re(x) > 0\), but I'm not remembering how to show that it bounds \(T(x,2)\) beyond \(\Re(x) > 1\). I know there's a way, I'm just not remembering. FFS. I'll keep reading, lmao.
Edit: Nvm, I'm an idiot. The equation I conjecture analytically continues to \(\Re(x) > 0\) is pretty simple. I'll walk you through it.
We start with:
\[
T(x,2) = x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
\]
We assume \(A(\lfloor y \rfloor ) \le y \log(y)\). The problem is that we need a tight bound here, so that:
\[
A(\lfloor y \rfloor) - y \log(y) = O(1)\\
\]
If this bound doesn't work, I wouldn't be surprised if it's something like \(y^{3/2}\log(y)\), or something similar that's still tight enough. We have a bit of leeway here; I'm just going to assume we get the best bound possible, but in the worst-case scenario it'll still look "something" like this.
Note to self:
\[
x\int_1^\infty \log(y)y^{-x}\,dy = \frac{x}{(x-1)^2}\\
\]
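This identity is worth checking numerically: substituting \(y = e^t\) turns it into \(x\int_0^\infty t e^{-(x-1)t}\,dt = x/(x-1)^2\), which in particular is positive for real \(x > 1\). A minimal sketch:

```python
import math

# Check the note-to-self numerically.  Substituting y = e^t gives
#   x * int_1^inf log(y) y^{-x} dy = x * int_0^inf t e^{-(x-1)t} dt
#                                  = x / (x-1)^2,
# which is positive for real x > 1.

def lhs(x, T=60.0, n=200_000):
    # plain midpoint rule on [0, T]; the tail past T is negligible for x = 3
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t * math.exp(-(x - 1.0) * t)
    return x * total * h

x = 3.0
approx = lhs(x)
exact = x / (x - 1.0) ** 2   # = 0.75 for x = 3
err = abs(approx - exact)
```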
Which you can check for yourself (differentiate \(\int_1^\infty y^{-x} \,dy = \frac{1}{x-1}\) with respect to \(x\)). Then we are left with:
\[
\begin{align}
T(x,2) &= x\int_1^\infty A(\lfloor y \rfloor)y^{-x-1}\,dy\\
&= x\int_1^\infty \left(A(\lfloor y \rfloor) - y\log(y) + y\log(y)\right)y^{-x-1}\,dy\\
&= x\int_1^\infty \left(A(\lfloor y \rfloor) - y\log(y)\right) y^{-x-1}\,dy + \frac{x}{(x-1)^2}\\
\end{align}
\]
This expression is holomorphic for \(\Re(x) > 0\)... given our assumptions.
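To see why the subtraction buys us \(\Re(x) > 0\): in the hypothetical best case where \(A(\lfloor y \rfloor) = \lfloor y \rfloor \log\lfloor y \rfloor\) exactly, the numerator \(y\log(y) - \lfloor y \rfloor\log\lfloor y \rfloor\) is \(O(\log y)\) (the mean value theorem over an interval of length \(< 1\)), so the integrand is \(O(\log(y)\,y^{-\Re(x)-1})\), integrable at infinity for any \(\Re(x) > 0\). A numerical check of that bound:

```python
import math

# Why the subtracted integral converges for Re(x) > 0: if
# A(floor(y)) = floor(y)*log(floor(y)) exactly (a hypothetical best case),
# the numerator  y*log(y) - floor(y)*log(floor(y))  is O(log y), since
# d/dy [y log y] = log(y) + 1 and the difference spans an interval of
# length < 1.  The integrand is then O(log(y) * y^{-x-1}), integrable
# near infinity for any Re(x) > 0.

def gap(y):
    fy = math.floor(y)
    return y * math.log(y) - fy * math.log(fy)

# check the bound  0 <= gap(y) <= log(y) + 1  at a range of sample points
ok = all(0.0 <= gap(k + 0.5) <= math.log(k + 0.5) + 1.0
         for k in range(1, 100_000))
```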
We may have to do better than this, and I'm starting to remember more of this shit. I think the big problem is figuring out how fast \(A(m)\) grows, and getting a tight bound on its growth. It'll look something like \(y\log(y)\), but the tight bound could be an order bigger. So we might have to use something less clean--which would ultimately make the \(x/(x-1)^2\) a more complicated rational function. I'll keep thinking about this; I'm sure there's something stupid I'm missing, lmaooo.
Almost certainly, it'll at least be \(C y \log(y)\) for some \(C\), and not \(C=1\). We'll have to do better. A good way to test this is to check:
\[
\lim_{m\to\infty} \frac{A(m)}{m\log(m)} = C\\
\]
Which gives us our \(C\). And then we have to check the rate of convergence to this constant, which is a whole rigmarole. The rate it approaches this constant essentially tells us whether we get an analytic continuation to \(\Re(x) > 0\), or only to \(\Re(x) > \delta\) for some \(0 < \delta < 1\). This is famously a bitch to do. I'll see if I can dig up some reasonable bounds on \(g(m)\) and \(A(m)\). Might post a reference request on MathOverflow....
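To illustrate the \(C\)-estimation with a loudly hypothetical stand-in: take \(\chi_m(n) = 1\) iff \(n \mid m\), so that \(\sum_n \chi_m(n) = d(m)\) (the divisor function) and \(A(m) = \sum_{n \le m} d(n)\). Dirichlet's divisor estimate gives \(A(m) = m\log(m) + (2\gamma - 1)m + O(\sqrt{m})\), so here \(C = 1\), and the slow convergence of the ratio is exactly the rate-of-convergence headache:

```python
import math

# Estimating C with a *hypothetical* choice of chi:
# chi_m(n) = 1 iff n divides m, so sum_n chi_m(n) = d(m) and
# A(m) = sum_{n<=m} d(n).  Dirichlet's result gives
#   A(m) = m log(m) + (2*gamma - 1)m + O(sqrt(m)),
# so C = 1 here, and the rate of convergence is governed by the
# secondary terms.

def A(m):
    # sum_{n<=m} d(n) = sum_{k<=m} floor(m/k)  (count multiples of each k)
    return sum(m // k for k in range(1, m + 1))

m = 200_000
ratio = A(m) / (m * math.log(m))
```

Even at \(m = 200000\) the ratio is still around \(1.01\): the \((2\gamma - 1)m\) secondary term only dies off like \(1/\log(m)\) relative to the main term.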
I'm going to change gears for a moment before ending, and define the following function:
\[
\upsilon(x) = \sum_{m=1}^\infty \sum_{1\le n \le m} \chi_m(n) m^{-x}\\
\]
I think we should work with this, and get \(\Re(x) > 0\) for it first, before tackling the much harder:
\[
g(m) = \sum_{1 \le n \le m} \frac{\log(n)^{\log(m)/\log(n)}}{(\log(m)/\log(n))!} \chi_m(n)\\
\]
Even though Gottfried's \(g(m)\) should behave relatively similarly...
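The weight in \(g(m)\) can at least be evaluated numerically, with two loudly flagged assumptions: I read \((\log(m)/\log(n))!\) as \(\Gamma(\log(m)/\log(n) + 1)\), and I use the divisor indicator \(\chi_m(n) = [n \mid m]\) purely as a hypothetical stand-in. The \(n = 1\) term is skipped since \(\log(1) = 0\) makes the weight degenerate:

```python
import math

# Sketch of evaluating the Poisson-like weight numerically, with two
# flagged assumptions: (log(m)/log(n))! is read as Gamma(log(m)/log(n) + 1),
# and chi_m(n) = 1 iff n divides m is a hypothetical stand-in.  The n = 1
# term is skipped since log(1) = 0 makes the weight degenerate.

def weight(m, n):
    r = math.log(m) / math.log(n)
    # log(n)^r / Gamma(r+1), computed in log space for stability
    return math.exp(r * math.log(math.log(n)) - math.lgamma(r + 1.0))

def g(m):
    return sum(weight(m, n) for n in range(2, m + 1) if m % n == 0)

vals = [g(m) for m in (2, 12, 64)]   # g(2) = log(2)^1 / 1! = log(2)
```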
EDIT:
Here's the mathoverflow question: https://mathoverflow.net/questions/44023...nting-zeta
I'm also thinking we should start with the zeta function:
\[
\pi(x) = \sum_{m=1}^\infty (-1)^m \Pi(m) m^{-x}\\
\]
I believe that \(q(m) = \Pi(m) + O(\log(m))\), and that \(\Pi(m)\) is actually a really good estimate. Then we'll be a step closer to getting a tight bound of the form \(g(m) \approx \log(m)\Pi(m)\). I think this bound is much better than I originally guessed. I'm going to have to crack my knuckles and get working. This is super interesting! God, nice mileage on this problem! Gottfried's got awesome questions!
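Assuming \(\Pi\) here is Riemann's weighted prime-power count \(\Pi(m) = \sum_{p^k \le m} 1/k\) (an assumption--the thread may define it differently), the partial sums of this alternating series are easy to compute:

```python
import math

# Partial sums of the proposed series, assuming Pi(m) is Riemann's
# prime-power counting function  Pi(m) = sum_{p^k <= m} 1/k  (an
# assumption -- the thread may define it differently).

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(math.isqrt(n)) + 1):
        if n % d == 0:
            return False
    return True

def Pi(m):
    total = 0.0
    k = 1
    while 2 ** k <= m:
        p = 2
        while p ** k <= m:       # each prime power p^k <= m weighs 1/k
            if is_prime(p):
                total += 1.0 / k
            p += 1
        k += 1
    return total

def partial(x, M):
    return sum((-1) ** m * Pi(m) * m ** (-x) for m in range(1, M + 1))

val = partial(3.0, 400)
```

For \(\Re(x) > 1\) the alternating partial sums settle quickly, since \(\Pi(m)m^{-x} \sim m^{1-x}/\log(m)\) decreases.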