Divergent Series and Analytical Continuation (LONG post)
#1
I'm going to try to ask an actually interesting and well-researched question here, so this will be a long post. 

Also, I recommend reading this MO post first, since it contains lots of background and context for this question.

Since this is a long post, I assume most people won't want to read the whole thing. If you're willing to grant that the method works fairly generally and is useful, I'd recommend the following:
  1. Skim up to 'Some thoughts' on https://mathoverflow.net/questions/43869...tical-cont 
  2. Skim through one example in section (1) and (2) and read the bolded stuff
  3. Read the final thoughts at the end of section (1) and (2)



Background

An enormously useful trick in analyzing an infinite series of the form \( \sum_{n=1}^\infty f(n) \) is to expand \( f(n) \) as its own infinite series, swap the order of summation, and then sum the inner series by analytical continuation. If you read the linked MO post, then you saw my example where the naive approach of simply replacing the inner series by its analytical continuation fails. My solution to this problem was to pick up the residues of the analytical continuation of the inner series. In doing some recent calculations, I've come to realize that my solution doesn't quite work-- but I think it's definitely on the right track. Thus, to start, let me illustrate some cases where my theory produces some useful results.

Case 1: Euler-Maclaurin Formula
Let us assume that \( f(n) \) can be represented by a power series at 0. Then \(f(n) = \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} n^k \). Now, let's try to analyze the series \( \sum_{n=1}^\infty f(n)\). We have that
\[\sum_{n=1}^\infty f(n)=\sum_{n=1}^\infty \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!}n^k=\sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} \sum_{n=1}^\infty n^k\]
So far, the only thing I've done that is illegal is swapping the summation order of k and n (but I personally feel that swapping summation is always valid when viewed in the right light). Now, the next step will be doing something very illegal-- replacing the divergent sum \( \sum_{n=1}^\infty n^k \) with \( \zeta(-k) \). Now, if you read the MO post, you know that making this replacement requires picking up the residue of \( \zeta(-k) \). Before we do that, let's look at what the sum has become. We now have (using the relationship between the zeta function and Bernoulli numbers)
\[\sum_{n=1}^\infty f(n) = \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} \zeta(-k) = -\sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} \frac{B_{k+1}}{k+1} = -\frac{f(0)}{2}-\sum_{k=1}^\infty \frac{f^{(2k-1)}(0)}{(2k)!} B_{2k}\]
WOW! This is exactly the Euler-Maclaurin formula-- except with one missing term, the integral. This missing term is precisely what is picked up by the residue of the \( \zeta \) function. In particular, we can rewrite our sum as the contour integral
\[\int_{c-i \infty}^{c + i \infty} \frac{1}{e^{2 \pi i k}-1}\frac{f^{(k)}(0)}{k!} \zeta(-k) dk\]
Lo and behold, there is an extra residue at \( k=-1 \), with value exactly equal to \( f^{(-1)}(0) = \int_0^\infty f(x)dx \). Therefore, my approach allows us to easily obtain the missing term that can be missed in a usual divergent-series approach to the E-M formula. I won't go over this case in too much detail, since I've already written about it HERE.
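To make this concrete, here is a minimal numeric sketch (assuming Python with mpmath) using the test function \( f(x) = e^{-x} \), for which \( f^{(k)}(0) = (-1)^k \); the integral term plus the \( \zeta(-k) \) series reproduces \( \sum_{n \ge 1} e^{-n} = \frac{1}{e-1} \):

```python
# Sketch: check  sum_{n>=1} f(n) = int_0^inf f(x) dx + sum_k zeta(-k) f^(k)(0)/k!
# for the test function f(x) = exp(-x), where f^(k)(0) = (-1)^k.  Assumes mpmath.
import mpmath as mp

mp.mp.dps = 30

lhs = mp.nsum(lambda n: mp.exp(-n), [1, mp.inf])        # = 1/(e-1)

integral = mp.quad(lambda x: mp.exp(-x), [0, mp.inf])   # the term from the residue at k = -1
tail = sum(mp.zeta(-k) * (-1)**k / mp.factorial(k) for k in range(60))
# zeta(-k) vanishes at even k >= 2, so only k = 0 and odd k contribute

print(lhs)              # 0.581976706869...
print(integral + tail)  # agrees to working precision
```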

Links to some other cases:
I've used this trick in a few other instances to easily obtain results about analytical continuation. For instance, it provides an easy way to see that tommy's zeta function has a branch cut (see here). Another similar example can be shown to have a branch cut using the same approach (see here).

Case 2: The main study: \( \sum (-1)^n x^{2^n} \)
This is a very tricky series to work with (if you want more info about other approaches, see here). However, using my approach, we get a mind-numbingly easy way to sum this series. First, perform the following simplification:
\[ \sum (-1)^n e^{\ln(x) 2^n} = \sum_n (-1)^n \sum_k \frac{\ln(x)^k 2^{nk}}{k!} = \sum_k \frac{\ln(x)^k}{k!}\sum_n (-1)^n 2^{nk}\] 
Now, we do the trick where we replace \(\sum_n (-1)^n 2^{nk}\) by its analytical continuation, which is \(\frac{1}{1+2^k}\). Notice that this introduces a pole at each solution of \( 1+ 2^k = 0 \implies k = \frac{-\pi i + 2 \pi i n}{\ln(2)} \). Also, the residue of \(\frac{1}{1+2^k}\) at each such pole is \(-\frac{1}{\ln(2)}\); combined with the \( 2\pi i \) from the residue theorem, this is the source of the \( \frac{2 \pi i}{\ln(2)} \) factor below (up to the orientation of the contour).
Therefore, the sum becomes the contour integral 
\[ \int_{-1/2 - i \infty}^{-1/2 + i \infty} \frac{\ln(x)^k }{k!}\frac{1}{1+2^k}\frac{1}{e^{2 \pi i k}-1} dk\]
or, written as a sum, and letting \( \overline n = \frac{-\pi i + 2 \pi i n}{\ln(2)} \) then
\[ \sum\frac{\ln(x)^k}{k!}  \frac{1}{1+2^k} + \sum_{n=-\infty}^\infty  \frac{2 \pi i}{\ln(2)} \frac{\ln(x)^{\overline{n}}}{(\overline{n})!} \frac{1}{e^{2 \pi i \overline{n}}-1} \]

There's also another interesting aspect of this approach. Take the sum \( \sum (-1)^n x^{a^n} \). If \( |a|<1 \), then the inner sum \( \sum_n (-1)^n a^{nk}\) doesn't diverge. That means we shouldn't pick up the residues-- and in fact, we don't need to! We have the interesting relationship that
\[\sum_{n=0}^\infty (-1)^n \left(\frac{1}{2}\right)^{z^n}-\sum_{k=0}^\infty \frac{\ln(\frac{1}{2})^k}{k!(1+z^k)} =
\begin{cases}
0 & |z|<1 \\
\sum \text{Res}(f,a_k) & |z|>1 \\
\end{cases}\] 
where the \(\text{Res}\) term is the sum over the extra residues that get created (it's a bit tedious to write out, but hopefully you get the idea).
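Here is a minimal numeric sketch of this two-regime behaviour (assuming Python with mpmath; the Cesàro sum is approximated by averaging partial sums, which suffices here since the even/odd partial sums converge geometrically):

```python
# Sketch: sum_n (-1)^n (1/2)^(z^n)  vs  sum_k ln(1/2)^k / (k! (1 + z^k)).
# The difference should be ~0 for |z| < 1 and nonzero (the residues) for |z| > 1.
import mpmath as mp
mp.mp.dps = 25

def lhs(z, N=200):
    # Cesàro-regularized sum: average the partial sums
    s, partials = mp.mpf(0), []
    for n in range(N):
        s += (-1)**n * mp.mpf(2)**(-(mp.mpf(z)**n))
        partials.append(s)
    return sum(partials[N//2:]) / (N - N//2)

def rhs(z):
    return mp.nsum(lambda k: mp.log(mp.mpf('0.5'))**k
                   / (mp.factorial(k) * (1 + mp.mpf(z)**k)), [0, mp.inf])

print(lhs(0.5) - rhs(0.5))   # ~ 0        : no residues for |z| < 1
print(lhs(2, 60) - rhs(2))   # ~ -0.00275 : the residue contribution for |z| > 1
```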

Problems and Analysis of the General Case
I've been using this approach for a while, and it has tended to only give correct answers, so I assume it works in a pretty wide array of situations, but I think I'm still missing some theory on when I'm supposed to pick up residues. Unfortunately, I don't possess a great deal of mathematical skill, so I will proceed to study the general case by looking at a large number of examples. In particular, I will look at studying the relationship between the following pairs of series. (Also, I would be grateful to see anyone's ideas on how to unify these ideas in a mathematically rigorous way, so I don't have to resort to lots of examples to illustrate my points.)

Also, a quick note on notation: I write \( F^P(x) = \sum_{n=0}^\infty f(n)x^n \) and \( F^D(x) = \sum_{n=1}^\infty \frac{f(n)}{n^x} \), where the P is for 'Power Series' and the D for 'Dirichlet Series.'
  1. \[ \sum_{k=1}^\infty \frac{f(k)}{1+k^z} \sim \sum_{n=0}^\infty (F^D(-zn)) \]
  2. \[ \sum_{k=1}^\infty \frac{f(k)}{1-z^k} \sim \sum_{n=0}^\infty (F^P(z^n)-f(0))\]
  3. \[ \sum_{k=0}^\infty \frac{f(k)}{z-k} \sim \sum_{n=0}^\infty \frac{F^D(-n)}{z^{n+1}} \]
  4. \[ \sum_{k=0}^\infty f(k) \zeta(-zk) \sim \sum_{n=1}^\infty F^P(n^z) \]
(1)
Let's consider the simplest possible example, where \( f(k) =1 \). Then \( F^D(-zn) = \zeta(-zn) \). Unfortunately, the two series never converge at the same time. Also, note that I think the LHS series actually has a natural boundary on the line \( \mathfrak{Re}(z) = 0 \). So, in order to study this in a reasonable way, we will need to add in a factor so that \( \sum_{k=1}^\infty \frac{f(k)}{1+k^z} \) converges everywhere. When this sum converges everywhere, I will refer to it as the canonical extension of \( \sum_{k=1}^\infty \frac{f(k)}{1+k^z} \). I call it an extension because if I am given \( \sum_{k=1}^\infty \frac{f(k)}{1+k^z} \) for \( \mathfrak{Re}(z) > 0 \), then I can't analytically continue this function to \( \mathfrak{Re}(z) < 0 \). However, the most natural extension is to simply evaluate the sum on the other side of the natural boundary, and so I call this natural-seeming extension the canonical extension. I'll come back to \( f(k) = 1 \) later, but for now, I'll study the easier case of \( f(k) =\frac{1}{k^2} \), so that the series \( \sum_{k=1}^\infty \frac{f(k)}{1+k^z} \) converges absolutely and everywhere.

So, let's do some algebra. We have that
\[ (\star_1) \quad \sum_{k=1}^\infty \frac{1}{k^2}\frac{1}{1+k^z} = \sum_{k=1}^\infty \frac{1}{k^2} \sum_{n=0}^\infty (-1)^n k^{zn} \]
\[\sum_{k=1}^\infty  \frac{1}{k^2} \sum_{n=0}^\infty (-1)^n  k^{zn} =   \sum_{n=0}^\infty(-1)^n  \sum_{k=1}^\infty  k^{zn-2}\]
\[(\star_2) \quad \sum_{n=0}^\infty (-1)^n \sum_{k=1}^\infty  k^{zn-2} = \sum_{n=0}^\infty (-1)^n \zeta(-zn+2)\]
In both \(\star_1\) and \(\star_2\) I perform an illegal step where I replace a divergent series by its analytical continuation. If my theory were correct, then in both steps we would need to pick up residues. However, it appears only \( \star_2 \) creates any new residues. Note that we only need to pick up a residue when \( \mathfrak{Re}(z)> 0 \), since otherwise the Dirichlet series is not divergent. Doing some computations, we get that
\[\sum_{k=1}^\infty \frac{1}{k^2}\frac{1}{1+k^z} - \sum_{n=0}^\infty (-1)^n \zeta(-zn+2) = 0\]
when \( \mathfrak{Re}(z) < 0 \). Note that the series on the RHS doesn't actually quite converge, since its terms alternate between values near 1 and -1, but if we apply Cesàro summation (or anything else of the same sort) we get that the sums are equal. However, when \( \mathfrak{Re}(z) > 0 \), their difference is exactly equal to the residues introduced by \( \zeta(-zn+2)\). Note that there is a residue at \( n = 1/z\) and at \( n = + \infty \). As an aside, the residue at \( +\infty \) is a bit harder to compute directly than a regular residue, but there's an easy way to see that it's there (and to compute its contribution). If we take z to be an even integer \( z=2k \), then \( \zeta(-(2k)n + 2) \) vanishes at the integer poles of the cosecant (these are trivial zeros of \( \zeta \)). So, the contour integral
\[ \int_{c - i \infty}^{c + i \infty} \zeta(-(2k)n + 2) \csc(\pi n) dn \] 
doesn't enclose any residues (for c sufficiently large), so it should be zero-- right? Well actually, the contour integral doesn't evaluate to zero, and that's because it picks up the residue at \( +\infty\), which is non-zero.

Thus, for \( \mathfrak{Re}(z) > 0 \), we actually have 
\[ \sum_{k=1}^\infty \frac{1}{k^2}\frac{1}{1+k^z} - \sum_{n=0}^\infty (-1)^n \zeta(-zn+2) \neq 0\]
And instead we have 
\[\sum_{k=1}^\infty \frac{1}{k^2}\frac{1}{1+k^z} = \int_C \zeta(-zn+2) \frac{\csc(\pi n)}{2i} dn\]
where C is a contour that encloses the residue of \(\zeta\) at \( n = \frac{1}{z} \) and also picks up the residue at infinity. Since \(\mathfrak{Re}(\frac{1}{z}) > 0\), it's very easy to make such a contour-- just choose something like
\[\int_{c - i \infty}^{c + i \infty} \zeta(-zn+2) \frac{\csc(\pi n)}{2i}dn \]
for \(-1<c< 0\).  
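As a numeric sanity check of the two claims above (a sketch assuming Python with mpmath; Cesàro again approximated by averaging partial sums):

```python
# Sketch: sum_k 1/(k^2 (1+k^z))  vs  Cesàro sum of sum_n (-1)^n zeta(2 - z n).
import mpmath as mp
mp.mp.dps = 20

def lhs(z):
    return mp.nsum(lambda k: 1 / (k**2 * (1 + mp.mpf(k)**z)), [1, mp.inf])

def rhs_cesaro(z, N=200):
    s, partials = mp.mpf(0), []
    for n in range(N):
        s += (-1)**n * mp.zeta(2 - z*n)
        partials.append(s)
    return sum(partials[N//2:]) / (N - N//2)

print(lhs(-1) - rhs_cesaro(-1))   # ~ 0 for Re(z) < 0 (both sides equal 1)
print(lhs(2) - rhs_cesaro(2))     # ~ -1.5767...: the residues for Re(z) > 0
```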

Another example is given by choosing \( f(k) = \frac{\mu(k)}{k^2} \). Now, performing the same steps as before, we get that
\[ (\star_1) \quad \sum_{k=1}^\infty \frac{\mu(k)}{k^2}\frac{1}{1+k^z} = \sum_{k=1}^\infty \frac{\mu(k)}{k^2} \sum_{n=0}^\infty (-1)^n k^{zn} \]
\[\sum_{k=1}^\infty  \frac{\mu(k)}{k^2} \sum_{n=0}^\infty (-1)^n  k^{zn} =   \sum_{n=0}^\infty(-1)^n  \sum_{k=1}^\infty \mu(k) k^{zn-2}\]
\[(\star_2) \quad \sum_{n=0}^\infty (-1)^n \sum_{k=1}^\infty  \mu(k)k^{zn-2} = \sum_{n=0}^\infty (-1)^n \frac{1}{\zeta(-zn+2)}\]
Again, we see that \( \star_1 \) doesn't contribute any residues, but \( \star_2 \) does. In this case, there is no residue at \(\infty\), which is nice. This means we can more easily compute the difference in a 'closed form' kind of way. We have that
\[\sum_{k=1}^\infty \frac{\mu(k)}{k^2}\frac{1}{1+k^z} - \sum_{n=0}^\infty (-1)^n \frac{1}{\zeta(-zn+2)} = 0\]
when \( \mathfrak{Re}(z) < 0 \). (Note that we again have to apply Cesàro summation to get the RHS to converge.) However, when \( \mathfrak{Re}(z) > 0 \) we need to pick up all the zeros of the \(\zeta\) function. We have a residue at each \(-zn +2 = \rho \implies n=\frac{2-\rho}{z}\). Thus, the difference becomes
\[\sum_{k=1}^\infty \frac{\mu(k)}{k^2}\frac{1}{1+k^z} - \sum_{n=0}^\infty (-1)^n \frac{1}{\zeta(-zn+2)} = \sum_{p = \frac{2-\rho}{z}} \text{Res}\left( \frac{\csc(\pi p)}{2i \zeta(-zp+2)}\right)\]
An easier way to compute this is with a contour integral, which will pick up everything for us, so we have
\[\sum_{k=1}^\infty \frac{\mu(k)}{k^2}\frac{1}{1+k^z} - \sum_{n=0}^\infty (-1)^n \frac{1}{\zeta(-zn+2)} = \int_{-1/2 - i \infty}^{-1/2 + i \infty} \frac{\csc(\pi n)}{2i \zeta(-zn+2)} dn  \]
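The same kind of numeric sketch works here, at least on the \( \mathfrak{Re}(z) < 0 \) side where no residues should appear (assuming Python with mpmath; the Möbius function is computed by trial division, and the k-sum is truncated, so the match is only up to truncation error):

```python
# Sketch: sum_k mu(k)/(k^2 (1+k^z))  vs  Cesàro sum of sum_n (-1)^n / zeta(2 - z n),
# tested in the region Re(z) < 0 where no residues should appear.
import mpmath as mp
mp.mp.dps = 15

def mu(n):
    # Moebius function by trial division
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            res = -res
        p += 1
    return -res if n > 1 else res

def lhs(z, K=5000):
    return sum(mu(k) / (k**2 * (1 + mp.mpf(k)**z)) for k in range(1, K))

def rhs_cesaro(z, N=200):
    s, partials = mp.mpf(0), []
    for n in range(N):
        s += (-1)**n / mp.zeta(2 - z*n)
        partials.append(s)
    return sum(partials[N//2:]) / (N - N//2)

print(lhs(-1) - rhs_cesaro(-1))   # ~ 0, up to the truncation of the k-sum
```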

I've also tested out this method for \( f(k) = \frac{\lambda(k)}{k^2} \), and I obtain the same sort of results. 

The result seems to be that \( \star_1 \) doesn't contribute any residues, but \( \star_2 \) generally still does. Thus, apparently, \( \frac{1}{1+k^z}\) doesn't appear to create extra residues.

The upshot is that we can study the sum of the poles of \( F^D(-zn) \) by studying 
\( \sum_{k=1}^\infty \frac{f(k)}{1+k^z} - \sum_{n=0}^\infty (F^D(-zn)) \). I imagine that with a suitable choice of functions, we could probably reduce the Riemann hypothesis to some statement about the difference between the sums-- which is interesting, since we are getting number-theoretic statements out of studying functions beyond their natural boundaries (of course, one might expect that this just makes the statement harder than it was originally).

(2)
To get directly to the punch line: if we look at \( \frac{1}{1+z^k}\), it appears to contribute residues always. This is, of course, in strong contrast to the last example. Now, let's look at some examples.

We have already explored the case where \( f(k) = \frac{1}{k!} \), and seen that in this case residues get picked up along the imaginary axis when \( |z| > 1 \). 

Let us consider a closely related sum with \( f(k) = \frac{\sin(\frac{\pi}{2} k)}{k!} \); then \( F^P(x) = \sin(x) \). Let's run roughly the same argument we gave in part (1).
\[ (\star_1) \quad \sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+z^k} =\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\sum_{n=0}^\infty (-1)^n z^{kn}\]
\[\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\sum_{n=0}^\infty (-1)^n z^{kn} = \sum_{n=0}^\infty (-1)^n \sin(z^n) \]
Now, notice that in the second step the sum was convergent, and describes a holomorphic function, so the only possible source of extra residues is from \( \star_1 \). We obtain that 
\[\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+z^k} = \sum_{n=0}^\infty (-1)^n \sin(z^n)\]
For \( |z|<1 \). For \( |z|>1 \), as we expect, there are residues that are picked up. Using a contour integral again, we have that 
\[\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+z^k} =\int_{-1/2 - i \infty}^{-1/2 + i \infty} \frac{\csc(\pi n)}{2i} \sin(z^n) dn\]
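A quick numeric sketch of the \( |z|<1 \) equality (assuming Python with mpmath; for \( |z|>1 \) the right-hand series no longer converges and must be replaced by the contour integral above):

```python
# Sketch: sum_k sin(pi k/2)/(k! (1+z^k))  vs  sum_n (-1)^n sin(z^n), for |z| < 1.
import mpmath as mp
mp.mp.dps = 20

def lhs(z, K=40):
    return sum(mp.sin(mp.pi*k/2) / (mp.factorial(k) * (1 + mp.mpf(z)**k))
               for k in range(K))

def rhs(z, N=80):
    return sum((-1)**n * mp.sin(mp.mpf(z)**n) for n in range(N))

print(lhs(0.5))   # 0.52641...
print(rhs(0.5))   # agrees inside the boundary
```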
Now, let's look at a very closely related series; we instead consider
\[\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+\frac{z^k}{2}} \sim \sum_{n=0}^\infty (-1)^n \frac{\sin(z^n)}{2^n}\]
The RHS creates a pretty weird function (I wrote a low-quality reddit post about it here). But with this change both sides now converge, so we can talk about
\[\sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+\frac{z^k}{2}}-  \sum_{n=0}^\infty (-1)^n \frac{\sin(z^n)}{2^n}\]
which of course is zero for \( |z|<1 \). We could do the same contour integral approach to find the difference. However, there is a better, and far more interesting, way. At the \( \star_1 \) step, we could choose either to pick up the residues of the function on the RHS, or to pick up the residues of the LHS, which occur where \(\frac{1}{1+\frac{z^k}{2}} = \infty\). Doing this, the extra residues are at \( 1+\frac{z^k}{2} = 0  \implies \overline{k} = \frac{\ln(2) + \pi i + 2\pi i m}{\ln(z)}, m \in \mathbb{Z} \).

Now, the important part is this: the residues are located at complex values of \( k \) with nonzero imaginary part (they lie along a vertical line in the plane)!!! This tells us that summation beyond natural boundaries is deeply connected to extensions of coefficients into the complex plane.

In particular, we obtain the relationship that 
\[ \sum_{k=0}^\infty \frac{\sin(\frac{\pi}{2} k)}{k!}\frac{1}{1+\frac{z^k}{2}} + \sum_{\overline{k}}\frac{f(\overline{k})}{e^{2 \pi i \overline{k}}}\text{Res}(\frac{1}{1+z^k/2}, k = \overline{k}) = \sum_{n=0}^\infty (-1)^n \frac{\sin(z^n)}{2^n} \]
for \(|z|>1 \). Well, actually, as written the second sum won't converge, so instead we need to sum it in a slightly different way; the following rewriting of it is Cesàro summable:
\[\sum_{\overline{k}} \frac{\sin(\pi/2 \overline{k}) e^{\pi i \overline{k}}}{e^{2 \pi i \overline{k}}}\frac{1}{\overline{k}!}\frac{1}{e^{2 \pi i \overline{k}}}\text{Res}(\frac{1}{1+z^k/2}, k = \overline{k})\] 
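Since both series now converge on both sides of \( |z|=1 \) (for real \( z>0 \), say), the residue contribution is directly visible numerically. A sketch, assuming Python with mpmath and plain truncated sums:

```python
# Sketch: sum_k sin(pi k/2)/(k! (1+z^k/2))  vs  sum_n (-1)^n sin(z^n)/2^n.
import mpmath as mp
mp.mp.dps = 20

def lhs(z, K=60):
    return sum(mp.sin(mp.pi*k/2) / (mp.factorial(k) * (1 + mp.mpf(z)**k / 2))
               for k in range(K))

def rhs(z, N=80):
    return sum((-1)**n * mp.sin(mp.mpf(z)**n) / mp.mpf(2)**n for n in range(N))

print(lhs(0.5) - rhs(0.5))   # ~ 0 inside the natural boundary
print(lhs(1.5) - rhs(1.5))   # nonzero: the extra residues for |z| > 1
```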

Both of the previous examples were pretty closely connected with the gamma function, so let me next pick an example where there is more than just the \( \Gamma \) function involved. I will use the fact that \( \frac{-z e^{-z}}{e^{-z} - 1} = \sum_{n=0}^\infty (-1)^n \frac{B^{+}_n}{n!} z^n \), where \( B^{+}_n = -n \zeta(1-n) \) (valid for \( n \ge 1 \), with \( B^{+}_0 = 1 \)) are the Bernoulli numbers. Then we have that

\[\sum_{k=0}^\infty \frac{-k \zeta(1-k)(-1)^k }{k!} \frac{1}{1+\frac{z^k}{2}} \sim  \sum_{n=0}^\infty \frac{(-1)^n}{2^n} \frac{-z^n e^{-z^n}}{e^{-z^n}-1} \]

These two series are equal for \( |z|<1 \). The difference when \( |z|>1 \) becomes equal to the extra residues produced by \(\frac{1}{1+z^k/2}\). We can write this difference between the two functions precisely as

\[\sum_{\overline{k}}  \frac{\csc(\pi \overline{k})}{2i} \frac{-\overline{k} \zeta(1-\overline{k})(-1)^\overline{k} }{\overline{k}!}\text{Res}(\frac{1}{1+z^k/2}, k = \overline{k})\] 

So, the difference now depends on the zeta function at complex values. Again, I should emphasize: we started by looking at the Bernoulli numbers-- which only make sense at non-negative integers. But the solution required us to look at those coefficients at complex values.
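A numeric sketch of this last pair (assuming Python with mpmath). Two caveats from the text above: the \(k=0\) coefficient is taken to be \(B^{+}_0=1\) (the formula \(-k\zeta(1-k)\) is only valid for \(k\ge 1\)), and \((-1)^k B^{+}_k/k! = B^{-}_k/k!\), which is exactly what mpmath's bernoulli returns; also \( \frac{-x e^{-x}}{e^{-x}-1} = \frac{x}{e^x-1} \):

```python
# Sketch: sum_k (-1)^k B_k^+ /(k! (1+z^k/2))  vs  sum_n (-1)^n/2^n * z^n/(e^(z^n)-1).
import mpmath as mp
mp.mp.dps = 20

def lhs(z, K=80):
    # mp.bernoulli(k) is B_k^- = (-1)^k B_k^+
    return sum(mp.bernoulli(k) / (mp.factorial(k) * (1 + mp.mpf(z)**k / 2))
               for k in range(K))

def rhs(z, N=60):
    F = lambda x: x / (mp.exp(x) - 1)      # = -x e^(-x)/(e^(-x)-1)
    return sum((-1)**n * F(mp.mpf(z)**n) / mp.mpf(2)**n for n in range(N))

print(lhs(0.5) - rhs(0.5))   # ~ 0 for |z| < 1
print(lhs(1.5) - rhs(1.5))   # nonzero: the residues of 1/(1+z^k/2)
```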

READER'S NOTE: I plan to add in sections (3) and (4) later. My hope is that they might provide some framework for continuing coefficients defined on integers into the imaginary values. However, it will be a bit of time before I can go back and write those sections, so I'm posting what I have for now.  

Some thoughts
I think there is something very fascinating and deep going on here. These are some preliminary thoughts on how I think about it.

First, observe that continuation beyond natural boundaries is not unique; there are many different extensions we could choose. Also, observe that extensions of a coefficient function (I'll denote it \( a_n \)) from the integers into the complex plane are not unique. I think these two facts are linked together in the following way. Section (2) involved taking an extension of an analytic function (call it \( F(z) \)) beyond a natural boundary. For each extension of \( F(z) \) we picked, we essentially induce a definition of \( a_n \) on the complex plane. Therefore, if there exists a canonical extension of \( F(z) \), then we should induce a canonical extension of \( a_n \) into the plane. We can also go the other direction-- by studying what canonical extensions of \( a_n \) look like in general, we could perhaps derive a canonical extension of a given \( a_n \) and use this to induce an extension of \( F(z) \).

In particular, I think finding a canonical extension of the Liouville \( \lambda \) function onto the complex plane would provide a canonical extension of the Jacobi theta function. This opens a number of deeply fascinating directions of study. For instance, can we study modular forms by studying number-theoretic functions extended into the complex plane? One implication of this method would be that we can study the primes by looking at extensions of the Moebius function into the complex plane, since that would allow us to study the prime zeta function beyond its natural boundary. There are lots and lots of interesting number-theoretic identities involving Lambert series, which is basically the form studied in (2). Thus, it's conceivable that deeply understanding what's going on in (2) could open lots of doors for studying number theory. I think a good place to start would be the von Mangoldt function. It has a particularly simple relationship: \( \sum \Lambda(n) \frac{q^n}{1-q^n} = \sum \ln(n) q^n \). The RHS can be analytically continued using the Lerch Phi function, and this continuation might provide a way to go backwards and induce a value for \( \Lambda \). Probably having a specific case like this to study will be helpful.
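That Lambert-series identity is easy to check numerically; a sketch (assuming Python with mpmath, with \( \Lambda \) computed by trial division):

```python
# Sketch: sum_n Lambda(n) q^n/(1-q^n)  =  sum_n log(n) q^n,  checked at q = 0.3.
import mpmath as mp
mp.mp.dps = 15

def mangoldt(n):
    # log(p) if n = p^k is a prime power, else 0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return mp.log(p) if n == 1 else mp.mpf(0)
    return mp.mpf(0)

q, N = mp.mpf('0.3'), 120
lhs = sum(mangoldt(n) * q**n / (1 - q**n) for n in range(2, N))
rhs = sum(mp.log(n) * q**n for n in range(2, N))
print(lhs - rhs)   # ~ 0 (up to the q^N truncation)
```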

Anyway-- thanks for getting all the way down to this point in the post. I'm guessing you probably have lots of questions, and there are probably a few mistakes in my math above-- though I tried to check that everything at least works out numerically-- so please ask for clarification anywhere that seems confusing or wrong so I can improve my post. Also, any thoughts or insights are appreciated.

Thanks for reading!
#2
I haven't absorbed this post fully yet, but I agree with everything. I will point out one thing that perhaps you haven't studied yet.

Any integral of the form:

\[
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} g(s)F(s,z)\,ds\\
\]

is kind of like a "modified" Mellin transform--and these appear frequently in the study of hypergeometric series.

I will explain something pretty cool that you may not have seen before. Let's take the standard sum:

\[
e^{-z} = \sum_{n=0}^\infty \frac{(-z)^n}{n!}\\
\]

Then, the sum of residues:

\[
\sum_{n=0}^\infty \text{Res}\left(\Gamma(s)z^{-s}, s = -n\right) = e^{-z}\\
\]

But this sum of residues is just the integral:

\[
e^{-z} = \frac{1}{2\pi i} \int_{1-i\infty}^{1 + i\infty} \Gamma(s)z^{-s}\,ds\\
\]
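A quick numeric sketch of this integral at \( z=2 \) (assuming Python with mpmath, parametrizing the line as \( s = 1+it \)):

```python
# Sketch check of the Cahen-Mellin integral at z = 2 (assumes mpmath).
import mpmath as mp
mp.mp.dps = 15

z = mp.mpf(2)
# on the line s = 1 + it we have ds = i dt, so (1/(2 pi i)) ds -> (1/(2 pi)) dt
val = mp.quad(lambda t: mp.gamma(1 + 1j*t) * z**(-(1 + 1j*t)),
              [-mp.inf, mp.inf]) / (2 * mp.pi)
print(val)         # ~ 0.1353352832... (+ ~0i)
print(mp.exp(-z))  # e^(-2) = 0.1353352832...
```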

I'm sure you are well aware of this. But what happens if we add in a pole? Let's say \(p(s)\) is a rational function with poles at \(s = s_j\) for \(1 \le j \le K\), all located in \(\Re(s) < 1\). Then:

\[
H(z) = \sum_{n=0}^\infty p(n) \frac{(-z)^n}{n!} + \sum_{j=1}^K \text{Res}\left(\Gamma(s)p(s)z^{-s}, s = s_j\right) = \frac{1}{2\pi i} \int_{1-i\infty}^{1 + i\infty} \Gamma(s)p(s)z^{-s}\,ds\\
\]

Let's, for the sake of argument, assume they are simple poles; so \(p\) is a rational function with only simple poles. (This just saves me from writing a bunch of derivatives and double sums, lol). Then we can reduce our function to:

\[
H(z) = \sum_{n=0}^\infty p(n) \frac{(-z)^n}{n!} + \sum_{j=1}^K a_j \Gamma(s_j) z^{-s_j}\\
\]

Where:

\[
p(s) = \frac{a_j}{s-s_j} + h(s)\\
\]

Where \(h\) is holomorphic in a neighborhood of \(s_j\), and \(a_j\) is the residue of \(p\) at \(s = s_j\). The beauty doesn't end here though. No, no it doesn't. We also get that:

\[
\Gamma(s) p(s) = \int_0^\infty H(x)x^{s-1}\,dx\\
\]

BUT; we only get this in a very restricted strip \(A < \Re(s) < 1\), where \(A = \text{max}_{1\le j \le K}(\Re(s_j))\). What happens if we remove that singularity--so pick \(\Re(s_J) = A\); then:

\[
\Gamma(s) p(s) = \int_0^\infty \left(H(x) - a_J \Gamma(s_J)x^{-s_J}\right)x^{s-1}\,dx\\
\]

But this is only true for \(B < \Re(s) < A\), where \(B = \text{max}_{j \neq J}(\Re(s_j))\).


This result is especially famous for the exponential function, where we get a little less famous result of Euler...

\[
\Gamma(s) = \int_0^\infty \left(e^{-x}-1\right) x^{s-1}\,dx\\
\]

But this is only true for \(-1 < \Re(s) < 0\). This just equates to deleting the first residue of \(\Gamma\); in which:

\[
e^{-z}-1 = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(s)z^{-s}\,ds
\]

For \(-1 < c < 0\); which can be reduced to:

\[
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(s)z^{-s}\,ds = \frac{1}{2\pi i}\int_{1-i\infty}^{1+i\infty} \left(\Gamma(s) - \frac{1}{s}\right)z^{-s}\,ds\\
\]
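A quick numeric sketch of Euler's formula at \( s = -\tfrac{1}{2} \) (assuming mpmath):

```python
# Sketch: Gamma(s) = int_0^inf (e^(-x) - 1) x^(s-1) dx  for -1 < Re(s) < 0.
import mpmath as mp
mp.mp.dps = 15

s = mp.mpf('-0.5')
val = mp.quad(lambda x: (mp.exp(-x) - 1) * x**(s - 1), [0, 1, mp.inf])
print(val)          # -3.5449077018...
print(mp.gamma(s))  # Gamma(-1/2) = -2 sqrt(pi) = -3.5449077018...
```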


In short, I do believe you are rediscovering some things. But I also think this was a great post--and a great line of research. I wish you nothing but luck. But I do think you should look at the Mellin transform. I still believe this is the missing key. You have used like 4 Mellin-type transforms in this post, and if that isn't enough to push you in that direction, I don't know what is! Plus, the idea of "adding residues" to make a divergent sum converge is nothing new. But I do believe you are being novel about it.

Think of it as: all the steps you are doing with your "rearrangement"--which is technically wrong--work because, if you cast them in a Mellin transform light, everything works and the residues pop out!

Either way, beautiful post Caleb. If you say you aren't that mathematically inclined--I'd hate to meet the Caleb who is!
#3
Quote:I'm sure you are well aware of this. But what happens if we add in a pole? Let's say \(p(s)\) is a rational function with poles at \(s = s_j\) for \(1 \le j \le K\), all located in \(\Re(s) < 1\). Then:

\[
H(z) = \sum_{n=0}^\infty p(n) \frac{(-z)^n}{n!} + \sum_{j=1}^K \text{Res}\left(\Gamma(s)p(s)z^{-s}, s = s_j\right) = \frac{1}{2\pi i} \int_{1-i\infty}^{1 + i\infty} \Gamma(s)p(s)z^{-s}\,ds
\]
These earlier examples appear to just be a straightforward application of the residue theorem-- they essentially follow directly from the fact that gamma has residues of \( (-1)^n/n! \) at \( s = -n \). Also, I prefer to write these examples in a slightly more suggestive notation, as
\[ \sum_{n=0}^\infty (-1)^n p(n) \frac{z^n}{n!} +\sum_{j=1}^K \text{Res}\left((-1)^s p(s)\frac{z^s}{s!}, s = s_j \right) = \int_{-1/2-i\infty}^{-1/2 + i\infty} \frac{\csc(\pi n)}{2i} p(n) \frac{z^n}{n!}\,dn\]
Actually, writing it this way suggests a small mistake in your formula, which is that \(p(n) \) should actually be \( p(-n) \), since the residues of \( \Gamma \) appear at negative values. Also, note one can easily check the equivalence of the thing I gave to what you gave using the reflection formula for the Gamma function. Anyway, I prefer to work with the residue theorem rather than the Mellin transform-- I view the residue theorem as more general and the Mellin transform as a special case. However, I don't have a great reason for this choice, so I'd be interested if you have any compelling reasons to view things as being about 'modified Mellin transforms', as you put it.

Quote:The beauty doesn't end here though. No, no it doesn't. We also get that:

\[
\Gamma(s) p(s) = \int_0^\infty H(x)x^{s-1}\,dx
\]

BUT; we only get this in a very restricted strip \(A < \Re(s) < 1\), where \(A = \text{max}_{1\le j \le K}(\Re(s_j))\). What happens if we remove that singularity--so pick \(\Re(s_J) = A\); then:

\[
\Gamma(s) p(s) = \int_0^\infty \left(H(x) - a_J \Gamma(s_J)x^{-s_J}\right)x^{s-1}\,dx
\]
Hmm, maybe now I can start to see why the Mellin transform idea could be useful-- because the Mellin transform is invertible. If we rewrite in more suggestive notation again, we have that
\[2\pi i \frac{\csc(\pi s)}{2i} \frac{1}{s!}p(-s) = \int_0^\infty H(x)x^{-s-1}\,dx\]
Okay, I think I maybe have a sense of how this connects now. Let's take some arbitrary invertible integral transform applied to some series \( H(x) =\sum f(n,x)\), where \( f(n,x) \) is defined only at the non-negative integers \( n\), and \( H(x) \) is the analytical continuation. If we have that
\[ \int_{c - i \infty}^{c + i \infty} f(n,x)F(n) dn =  H(x)\]
Then we can't compute the LHS, since it involves evaluating \( f(n,x) \) at values of n where it is not defined. However, the RHS is hopefully a well-defined complex function. So then, by inverting our integral transformation (denoted \( \mathfrak{I}^{-1}\)) we should obtain that
\[ \mathfrak{I}^{-1} \{\int_{c - i \infty}^{c + i \infty} f(n,x)F(n)\, dn\} =  \mathfrak{I}^{-1}\{H(x)\} \implies\]
\[ f(n,x) =  \mathfrak{I}^{-1}\{H(x)\}\]
But the RHS makes sense, and provides a definition for f at values where it wasn't defined before. This sounds like a promising approach-- one could study how to continue functions defined on the integers by looking at the cases in which it makes sense to talk about the inverse of a given integral transform.

Quote:Think of it as: all the steps you are doing with your "rearrangement"--which is technically wrong--work because, if you cast them in a Mellin transform light, everything works and the residues pop out!
To give some background: a lot of my initial attempts in the past were centered around approaches very similar to the Mellin transform, but they didn't quite work out. The problem is that the Mellin transform doesn't seem to get us any closer to understanding the inconsistency in picking up residues: why do some series (like \( \frac{1}{1+k^z} \) in (1)) not pick up residues, while other similar series (like \( \frac{1}{1+z^k} \) in (2)) do pick up residues? A second trickiness is that if I just take the integral along a certain line, it won't always work; sometimes the residues that need to be picked up can only be properly picked up with a more complicated contour. However, this is just from my naive approaches-- if you have something more sophisticated in mind for how to get the residues to pop out correctly, I'd be interested to know.
#4
(02/23/2023, 09:35 AM)Caleb Wrote: Actually, writing it this way suggests a small mistake in your formula, which is that \(p(n) \) should actually be \( p(-n) \)

Lmao! I always make this typo! Sorry--I'm glad you caught that at least, lol.



(02/23/2023, 09:35 AM)Caleb Wrote: Hmm, maybe now I can start to see why the Mellin transform idea could be useful-- because the Mellin transform is invertible. [...] one could study how to continue functions defined on the integers by looking at the cases in which it makes sense to talk about the inverse of a given integral transform.

In no way do I have the answers, Caleb. But so much of your work smells like Mellin transforms. And what you are doing with \(f(n,x)\), trying to interpolate with an \(f(s,x)\) using sophisticated rules, is precisely what I mean. I'm glad you're getting where I'm coming from. I'm just trying to say: look into it, I think you'll be pleasantly surprised.

Quote:To give some background: a lot of my initial attempts in the past were centered around approaches very similar to the Mellin transform, but they didn't quite work out. [...] sometimes the residues that need to be picked up can only be properly picked up with a more complicated contour.

Caleb, I wish I had the answers. These are very, very deep questions. But I'm going to play through one example, to show you what I mean by Mellin transform tricks which "look" like your tricks:

\[
F(z) = \sum_{k=1}^\infty \frac{1}{1+k^z} = \sum_{j=0}^\infty \sum_{k=1}^\infty (-1)^j k^{jz} = \sum_{j=0}^\infty \sum_{n=0}^\infty \sum_{k=1}^\infty (-1)^j \frac{j^n\log(k)^nz^n}{n!} \\
\]

We can write this as:

\[
F(z) = \frac{1}{2\pi i} \int_{-1/2-i\infty}^{-1/2+i\infty} \pi \csc(\pi s) \frac{e^{\pi i s}}{1+s^{-z}}\,ds
\]

This converges for \(\Re(z) > 1\), and conditionally converges for \(\Re(z) = 1\) (you need some finesse from Fourier analysis to prove this). And \(s^{-z}\) has no zeroes/poles in this domain. Already, we have analytically continued \(F(1+it)\).

Now let's frame it in Mellin transform format. We must write it in powers of \(z\), by which we get:

\[
F(z) = \sum_{n=0}^\infty \sum_{j=0}^\infty \frac{1}{2\pi i} \int_{-1/2-i\infty}^{-1/2+i\infty} \pi \csc(\pi s)e^{\pi i s}(-1)^j\frac{j^n(-\log(s)z)^n}{n!}\,ds
\]

Let \(u = \log(s)z\); by which \(du = \frac{z}{s} ds\); and \(s = e^{u/z}\). Where now we have a contour \(C\); which all it does is envelope the poles of \(\pi \csc(\pi s)\) to the left of \(\Re(s) = -1/2\).

So now we get:

\[
F(z) = \sum_{j=0}^\infty \frac{1}{2\pi i} \int_C \pi \csc(\pi e^{u/z})e^{\pi i e^{u/z}} (-1)^jze^{-ju} \frac{e^{u/z}}{z}\,du
\]

Where \(C\) is a contour enveloping the poles of \(\csc\) on the natural numbers, only to first order, despite the variable change.

This again, only converges for \(\Re(z) \ge 1\). But now we have:

\[
F(z) = \frac{1}{2\pi i} \int_C \pi\csc(\pi e^{u/z})e^{\pi i e^{u/z}}\frac{e^{u/z}}{z(1-e^{-u})}\,du\\
\]

IF WE EXTEND \(F\) TO \(0 < \Re(z) < 1\) WE NEED TO PAY ATTENTION TO THE SINGULARITY WHICH APPEARS AT \(z=0\). WHICH IS THE RESIDUE YOU ARE LOOKING FOR!

Let's write:

\[
H(z) =  \frac{1}{2\pi i} \int_C \pi\csc(\pi e^{u/z})e^{\pi i e^{u/z}}\frac{e^{u/z}}{z(1-e^{-u})} -h(u,z)\,du
\]

Where the integrand is now holomorphic at \(z=0\), rather than blowing up.

Then, what you have shown and are arguing, which is a common argument:

\[
F(z) = H(z) + \sum \text{Res} h\\
\]

You've split the result into two parts, where the RHS is holomorphic for \(\Re(z) > 0\), while the LHS was only holomorphic for \(\Re(z) > 1\).

The ultimate state of what you are doing is analytic continuation, but it's the analytic continuation of sums. There's a reason every analytic continuation of sums uses the Mellin transform. All's I'm saying.

I apologize if this post is a little spotty. I can tend to lose clarity in long posts. I'm just trying to show through example the importance of Mellin transforms!

Sincere Regards, James
#5
ok, just my intuition and 50 cents for now, but

Abel-Plana formula

path integral

Carlson's theorem

come to mind.


Also I noticed you used "my" summability method.


But I need to think more about it.
In a hurry...


regards

tommy1729
#6
Quote:The ultimate state of what you are doing is analytic continuation, but it's the analytic continuation of sums. There's a reason every analytic continuation of sums uses the Mellin transform. All's I'm saying.
No! This isn't what's happening in the post! My pedagogy in the post is terrible, so I apologize for not drawing attention to a crucial fact of what's happening. I'm looking at summations past natural boundaries! By natural boundary I mean there is a dense set of singularities. This means no argument can be made on the basis of analytical continuation-- there is no analytical continuation for these series! This fact explains why the residues are not picked up in some domain, but then all of a sudden get picked up in another domain-- this automatically forces the function to be discontinuous at the point where we decide to pick up the residues. Indeed, the point where the extra residues get picked up is where the natural boundary is located. This is why I fuss about the canonical extension in the post-- because there isn't only one extension of a function beyond its natural boundary. However, I have chosen the series in my post very carefully-- I picked them so that they actually converge on both sides of the natural boundary, so that there is a natural extension of the function beyond the natural boundary. Actually, I think this is kind of a natural extension of the little circle method you had mentioned a few days ago. The little circle method takes arcs inside the circle to try to compute the residues. My approach is, essentially, to take a contour outside the circle.

There's also a second issue with the Mellin transform approach, which is that it only works under certain growth conditions, and I don't limit myself to those growth conditions here. In general, my testing in the past suggests that when we start to consider functions with faster growth, new behaviour starts to emerge that wasn't there with the slower-growing functions. Thus, we can't just straightforwardly extend the Mellin transform approach to work in many of the cases where I'm using the residue theorem in the post.

Also, I should add that ultimately my goal in studying all of this is to produce some theory about (non-analytic) continuation beyond natural boundaries. Ever since I saw the beautiful graphs of modular forms such as the Jacobi theta function two years ago, I've wondered what lies on the other side. Thus, these objects I'm studying are motivated by an early attempt to study the relationship between complex functions inside and outside their natural boundary, so that I might eventually figure out the proper way to gaze upon modular forms in the lower half-plane.
#7
Apart from the idea of reflection formulas (which may or may not be a good idea):

I want to take for example the prime zeta function P(s).

It is well known to have formulas that converge for Re(s) > 1 or Re(s) > 0.

There also exists a formula for Re(s) > 1/2 :

Assuming the RH, then \[P(s) = s \int_2^\infty \pi(x) x^{-s-1}dx = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+  L(s) \\  = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+L(2) - \log(s-1) - \int_2^s \frac{2^{1-u}-1}{u-1}du\] where the latter integrals converge and are analytic for Re(s) > 1/2.

\[ Li(x) = \int_2^x \frac{dt}{\log t}\] \[L(s) = s\int_2^\infty Li(x) x^{-s-1}dx  = \int_2^\infty \frac{x^{-s}}{\log x}dx \] \[ = L(2)+\int_2^s L'(u)du = L(2) - \log(s-1) - \int_2^s \frac{2^{1-u}-1}{u-1}du\]

since \[L'(s) = -\int_2^\infty x^{-s}dx = -\frac{2^{1-s}}{s-1}\]

I guess that is clear to all here.
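A quick numeric sketch of the \( L'(s) \) formula above (assuming Python with mpmath):

```python
# Sketch: L(s) = int_2^inf x^(-s)/log(x) dx  should have  L'(s) = -2^(1-s)/(s-1).
import mpmath as mp
mp.mp.dps = 15

L = lambda s: mp.quad(lambda x: x**(-s) / mp.log(x), [2, mp.inf])
print(mp.diff(L, 3))              # -0.125
print(-mp.mpf(2)**(1-3) / (3-1))  # -2^(-2)/2 = -0.125
```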



Now the natural boundary at Re(s) = 0 is made completely of log singularities getting dense.

Maybe we should make distinctions between the types of natural boundaries we are getting.

I mean, for instance,

\( g(x) = (1 - x)(1 - x^3)(1 - x^5)(1 - x^7)\cdots \)

or \( h(x) = 1 + x^2 + x^{2^2} + x^{2^3} + \cdots \)

have "different" natural boundaries, like accumulations of zeros.

i SAID MAYBE lol



But now,

what is the value of P(s) for Re(s) < 0 ??

OR is this type of natural boundary unsuitable because it has logs instead of poles and zeros ??
AND IF UNSUITABLE, WHAT DOES THAT MEAN ??  no continuation for some but for others we do ?

I will definitely have a talk about that with my friend mick.


I want to point out that the derivative of the prime zeta has an infinite amount of poles instead of logs,

and the inverse of the derivative of the prime zeta has an infinite amount of zeros on Re(s) = 0.


I'm holding back on making conjectures; I'm a bit confused.


regards

tommy1729
#8
(02/24/2023, 12:30 AM)tommy1729 Wrote: But now, what is the value of P(s) for Re(s) < 0 ?? OR is this type of natural boundary unsuitable because it has logs instead of poles and zeros ?? AND IF UNSUITABLE, WHAT DOES THAT MEAN ?? no continuation for some but for others we do ?
This is a good question. Let me try to answer it by sharing my motivation for studying the series I decided to study in the post.

Analytical continuation beyond natural boundaries is hard! I don't think anyone knows how to do it in general. I suspect it can't be done in general-- because I don't think it would be meaningful to analytically continue a series that has purely random coefficients, for instance. Since the problem is so hard, I'm choosing to study the easiest possible examples I could come up with.

The examples I study have a really nice property. For instance, consider the following series 
\[f(x)=\sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\]
This series has a natural boundary, since \(1+x^n\) provides a dense set of poles on \(|x|=1\). So, it cannot be analytically continued to \(|x|>1\). However, the series is still well-defined for \(|x|>1\). In particular, I can compute \(f(2)\) by just plugging into the series. This gives me a very natural candidate for a definition of \( f(x)\) outside the boundary.
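A sketch of that, assuming Python with mpmath-- the same series evaluates happily on both sides of the boundary:

```python
# Sketch: the series converges both inside and outside its natural boundary |x| = 1.
import mpmath as mp
mp.mp.dps = 20

f = lambda x: mp.nsum(lambda n: mp.mpf(x)**n / ((1 + mp.mpf(x)**n) * mp.mpf(2)**n),
                      [0, mp.inf])
print(f(0.5))   # inside the boundary
print(f(2))     # outside: the "canonical" value f(2)
```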

So, I choose to study these much easier functions, and try to analyze what sort of properties they have. My goal looks like this
\[ \text{Get a bunch of very easy examples of functions with natural boundaries } \to \]
\[\text{ Study those examples in depth, and understand the mechanism underlying how those functions behave } \to \]
\[\text{ Try to generalize that mechanism into harder functions }\]

The prime zeta function is definitely in the category of "harder functions." I don't know if logarithmic singularities will cause the continuation to behave differently than regular poles-- that's something I will only find out once I've studied the easy examples in depth!

So, it's not necessarily that the prime zeta function has properties that make it unsuitable for continuation-- it's that I haven't yet figured out the right way to do continuation beyond natural boundaries.
#9
(02/24/2023, 12:55 AM)Caleb Wrote: The examples I study have a really nice property. For instance, consider the following series \[f(x)=\sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\] This series has a natural boundary, since \(1+x^n\) provides a dense set of poles on \(|x|=1\). So, it cannot be analytically continued to \(|x|>1\). However, the series is still well-defined for \(|x|>1\). In particular, I can compute \(f(2)\) by just plugging into the series. This gives me a very natural candidate for a definition of \( f(x)\) outside the boundary.

\[f(x)=\sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\]

Your example is why I talk about reflection formulas.

Notice that as \( x \to \infty \), the \( n = 0 \) term is identically \( \frac{1}{2} \), while every term with \( n \ge 1 \) tends to \( \frac{1}{2^n} \):

\[f(\infty)= \frac{1}{2} + \sum_{n=1}^\infty \frac{1}{2^n}\]

Which means f(oo) = 3/2.

Now think about

f(0) = 1/2

because (taking the limit \( x \to 0 \)) only the \( n = 0 \) term survives:

\[f(0)= \frac{1}{1+1} + \sum_{n=1}^\infty \frac{0^n}{1+0^n} \frac{1}{2^n} = \frac{1}{2}\]

and

\[f(1/x)=\sum_{n=0}^\infty \frac{x^{-n}}{1+x^{-n}} \frac{1}{2^n}\]

which converges for x =/= 0.

But we know f(1/0) = f(oo) = 3/2.

Now, 1/x takes us from inside the natural boundary to outside,

and there is only one boundary.

This makes formulas like

f(1/x) = g( f(x) )

for some g maybe a good idea.

And the idea mainly comes from the shape of the boundary, not so much the function itself.

Although a natural boundary being the unit circle does not prove the existence of a g function, and I guess with little effort we can find examples where g exists and where it does not.

The Riemann mapping theorem is maybe key here, since it can map the region inside any natural boundary to the unit disk;
and then from there we might get to find a valid g function, if it exists.
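For the example above, one can actually write down such a g in closed form: termwise, \( \frac{x^n}{1+x^n} + \frac{1}{1+x^n} = 1 \), so \( f(x) + f(1/x) = \sum_n 2^{-n} = 2 \), i.e. g(y) = 2 - y (consistent with f(0) = 1/2 and f(oo) = 3/2 above). A minimal numeric sketch, assuming Python with mpmath:

```python
# Sketch: for f(x) = sum x^n/((1+x^n) 2^n), we get f(x) + f(1/x) = 2, so g(y) = 2 - y.
import mpmath as mp
mp.mp.dps = 20

f = lambda x: mp.nsum(lambda n: x**n / ((1 + x**n) * mp.mpf(2)**n), [0, mp.inf])
for x in [mp.mpf('0.3'), mp.mpf('0.8')]:
    print(f(1/x) - (2 - f(x)))   # ~ 0 in both cases
```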

However, assume a subset D of the unit disk on which f is univalent.

Now if x is an element of D and f(1/x) is not univalent, we have a problem constructing a function g.

Another idea is

f( conjugate(1/x) ) = g( f(x) )

with similar logic.

I mentioned these before in pieces here and there in other threads.
I wanted to make things more formal and clear.

Now, there is more to the shape of the boundary than meets the eye, I think.

A Dirichlet series converges beyond a straight line,
so the boundary is a line.

Now, this relates to zeta-like functions in particular, and generalized Dirichlet sums-- functions that contain n^s.

This means functions like that ( prime zeta , the function mick posted on MSE, etc. ) are naturally related to Bernoulli numbers !

because basically the Bernoulli numbers ARE the zeta function.

So , what tf am I saying ?

I think it is natural to use Bernoulli numbers when the functions contain n^s.

And so every type of function, or more so every boundary, has its own special numbers.

So when does it depend more on the function, and when more on the boundary ?

Well, that probably depends on how many times you use resummation: double sums, triple sums, etc.--

and on whether your n^s terms occur in the main sum or in one of the inner sums.


***

you used sums of products.

I consider the idea of sums of compositions, which might relate to the Riemann mapping idea.

But more research needs to be done.

Of course one can consider

sum f(n)^2 , sum f(n)^3 etc

and have a Taylor logic.

***

***

you mentioned Lambert series.

I agree, but admit that at the moment I do not have much to say about them.

It is a mystery for now.

***

I want to add more.

f( conjugate(1/x) ) = g( f(x) )

This is like having the boundary as an actual mirror:

the reflection is continuous at the boundary.

Continuous points within natural boundaries are actually a thing.



Now let's look at the boundary being the imaginary line,

and it has a continuous point at 0.

Suppose we also "should" have f(-s) = f(s), like we did in mick's post.

IF f(x) is (real) continuous at x = 0, and for real x > 0 the function is analytic, then

when

f(-s) = f(s) "should" be true,

claiming

f(x) = f(-x)

ACTUALLY works for real x,

but may or may not work for the upper plane.

f(s) = f( - conj s )

then again might work !!

which now relates to the circle idea above.

But "f(s) = f(-s)" does not always imply f(0) = 0 if f( y i ) for real y is not analytic, EVEN IF f(0) IS ANALYTIC.


OK TIME FOR AN EXAMPLE

very similar to the function mick posted on MSE :


\[f(s)=\sum_{n=1}^\infty \frac{1}{n^s +n^{-s}} \frac{1}{n^2}\]

Now " clearly "

f(-s) = f(s) or it should.

But it is not analytic at Re(s) = 0 anywhere.

however it is real-continu at f(0).

But is f(0) = f(-0) = 0 ?

\[f(s)=\sum_{n=1}^\infty \frac{1}{n^s +n^{-s}} \frac{1}{n^2}\]

\[f(0)=\sum_{n=1}^\infty \frac{1}{n^0 +n^{-0}} \frac{1}{n^2}\]

\[f(0)=\sum_{n=1}^\infty \frac{1}{2} \frac{1}{n^2}\]

so \( f(0) = \zeta(2)/2 = \pi^2/12 \),

not zero.

This function converges better than the one mick posted, btw,
which morally explains the real-continuity and the definite value at 0.

(Notice: real-continuous, not complex-continuous, of course.)

But f(0) is not 0, and this leads to issues.
However, adding constants could make it true; that is unlikely to be the "magic solution", of course.

Let's analyze the value f(0), the function f(s), and in particular the resummation and the f(-s) = f(s) idea:


f(s) = zeta(s + 2) - zeta(3s + 2) + zeta(5s + 2) - ... or something like that
( see the MSE post by mick and caleb's answer there )

while 

f(-s) = zeta(-s + 2) - zeta(-3s + 2) + zeta(-5s + 2) - ... or something like that.

The Cesàro sums of those two ( f(s) , f(-s) ) should be equal
( since they agree on 1+1+1+... )

But 

zeta(s + 2) - zeta(-s + 2) is not 0.

and it is unlikely that

  zeta(s + 2) - zeta(3s + 2) + zeta(5s + 2) - ... - ( zeta(-s + 2) - zeta(-3s + 2) + zeta(-5s + 2) - ... ) = 0

even when rearranging terms.

the symmetry f(-s) = f(s) is not within those zeta terms.

In fact, even a generalization where the +2 and /n^2 are replaced by +l and /n^l would probably not make it happen.
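A minimal numeric sketch of the expansion above at s = 1 (assuming Python with mpmath, with the series starting at zeta(s+2) as the expansion \( \frac{1}{n^s+n^{-s}} = \sum_j (-1)^j n^{-(2j+1)s} \) gives; Cesàro mean approximated by averaging partial sums):

```python
# Sketch: f(s) = sum_n 1/((n^s + n^(-s)) n^2)  vs  Cesàro sum of
#         sum_j (-1)^j zeta((2j+1) s + 2),  checked at s = 1.
import mpmath as mp
mp.mp.dps = 15

f = lambda s: mp.nsum(lambda n: 1 / ((mp.mpf(n)**s + mp.mpf(n)**(-s)) * n**2),
                      [1, mp.inf])

def zeta_side(s, N=200):
    t, partials = mp.mpf(0), []
    for j in range(N):
        t += (-1)**j * mp.zeta((2*j + 1)*s + 2)
        partials.append(t)
    return sum(partials[N//2:]) / (N - N//2)

print(f(1) - zeta_side(1))   # ~ 0  (both sides ~ 0.6719)
```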


If however

 zeta(-s + 2) - zeta(-3s + 2) + zeta(-5s + 2) - ...

or the other one EACH equal 0, then it must be true.

But this means f(0) = 0.

aha.

but also

zeta((2k+1)s + 2) at s = 0 equals zeta(2).

then

f(0) = zeta(2) - zeta(2) + zeta(2) - ...

which makes no sense.

Now your reflection formula needs to be consistent with all of this, of course.

So this shows there is probably no reflection formula for this one.

YET,

if we "naively?" plug in -2 + i, or we plug in +2 + i,

we get similar results ( values ) for

f( -2 + i )

and f( +2 + i )

and their conjugates.


something to consider.


Another example to study is

\[f(s)=\sum_{n=2}^\infty \frac{1}{n^s  - n^{-s}} \frac{1}{n^2}\]
(starting at n = 2, since the n = 1 term would divide by zero).

***

The Bernoulli numbers came from summing n^s for s positive !

This is remarkable, because we use them here mainly on the other side, for the standard definition of the functions.

I said before that the type of numbers relates to the boundary shape.

But apparently on the divergent side !


So

studying truncated sums such as n^s that diverge to oo is crucial.

In combination with the Riemann mapping, the analogue of truncated sums, and the shape of the boundary, we should be able to arrive at the related special numbers for a given boundary.

However, all sums are related to Bernoulli numbers-- see the indefinite sum formulas--

which means that Bernoulli is somewhat universal !

and thus zeta-method desummations are somewhat universal, and should give the same results as compared to other methods.


The Riemann mapping and symmetries like -x , 1/x , conjugate

relate the circle to the line, and likewise the

Taylor/Laurent series to Dirichlet or Fourier series.

So I know I mentioned the Dirichlet stuff and not the Taylor stuff (which you did also, as 50 percent of your cases), but I think they are related.

However, an exp does not equal 0; that might be a bit of a distinction.

All of math is related.


Maybe we can find 

f(-s) = f(s)

or 

f(-s) = - f(s)

with natural boundaries.

maybe we should ??


Finally higher exponential levels should probably be considered a resummation.
So the lower ones might be more important.



regards

tommy1729
#10
(02/23/2023, 08:15 PM)Caleb Wrote: No! This isn't what's happening in the post! [...] I'm looking at summations past natural boundaries! By natural boundary I mean there is a dense set of singularities. This means no argument can be made on the basis of analytical continuation-- there is no analytical continuation for these series! [...] Also, I should add that ultimately my goal in studying all of this is to produce some theory about (non-analytic) continuation beyond natural boundaries.

OH!

Okay!

Sorry, I apologize.

If it's a non-analytic continuation, then that is out of my purview. I hate real analysis stuff!

May I then suggest John B. Conway's book (a different John Conway than John H. Conway (game of life) whom everyone knows of):

Functions of One Complex Variable II

The first book in this series deals solely with holomorphy and is one of the best books on the matter. The second book deals entirely with non-holomorphic functions in the complex plane. It manages to explain how to recover Cauchy's integral theorem and all those nice results, but with discontinuous boundaries and the like! It's basically real analysis for complex functions. It discusses "nearness" to holomorphy and the like. Very, very good book, but it just wasn't my style of math.

And I see precisely what you mean now. I guess my point was that, if there is a HOLOMORPHIC continuation, it'll probably be representable through Mellin transform methods. But if you are referring to single-variable analytic/C^k continuations, then yes, the Mellin transform is useless because it forces holomorphy.

I suggest the Conway book though, because it's the only thing I've ever seen related to these ideas!

