02/23/2023, 09:35 AM
Quote:I'm sure you are well aware of this. But what happens if we add in a pole? Let's say that \(p(s)\) is a rational function, with poles at \(s = s_j\) for \(1 \le j \le K\), all located in the half-plane \(\Re(s) < 1\). Then:

These earlier examples appear to just be a straightforward application of the residue theorem--it essentially follows directly from the fact that \(\Gamma\) has residues \((-1)^n/n!\) at \(s = -n\). Also, I prefer to write these examples in a slightly more suggestive notation as
\[
H(z) = \sum_{n=0}^\infty p(n) \frac{(-z)^n}{n!} + \sum_{j=1}^K \text{Res}\left(\Gamma(s)p(s)z^{-s}, s = s_j\right) = \frac{1}{2\pi i} \int_{1-i\infty}^{1 + i\infty} \Gamma(s)p(s)z^{-s}\,ds\\
\]
Let's, for the sake of argument, assume they are simple poles; so \(p\) is a rational function with only simple poles. (This just saves me from writing a bunch of derivatives and double sums, lol.) Then we can reduce our function to:
\[
H(z) = \sum_{n=0}^\infty p(n) \frac{(-z)^n}{n!} + \sum_{j=1}^K a_j \Gamma(s_j) z^{-s_j}\\
\]
Where:
\[
p(s) = \frac{a_j}{s-s_j} + h(s)\\
\]
Where \(h\) is holomorphic in a neighborhood of \(s_j\), and \(a_j\) is the residue of \(p\) at \(s = s_j\). The beauty doesn't end here though. No, no it doesn't. We also get that:
\[
\Gamma(s) p(s) = \int_0^\infty H(x)x^{s-1}\,dx\\
\]
BUT: we only get this in the restricted strip \(A < \Re(s) < 1\), where \(A = \max_{1\le j \le K} \Re(s_j)\). What happens if we remove that singularity--so pick \(\Re(s_J) = A\)? Then:
\[ \sum_{n=0}^\infty (-1)^n p(n) \frac{z^n}{n!} +\sum_{j=1}^K \text{Res}\left((-1)^s p(s)\frac{z^s}{s!}, s = s_j \right) = \frac{1}{2\pi i} \int_{-1-i\infty}^{-1 + i\infty} \frac{\csc(\pi s)}{2i} p(s) \frac{z^s}{s!}\,ds\]
Actually, writing it this way suggests a small mistake in your formula: \(p(n)\) should actually be \(p(-n)\), since the residues of \(\Gamma\) appear at negative integer values. Also, note one can easily check the equivalence of the form I gave to what you gave using the reflection formula for the Gamma function. Anyway, I prefer to work with the residue theorem rather than the Mellin transform--I view the residue theorem as more general and the Mellin transform as a special case. However, I don't have a great reason for this choice, so I'd be interested if you have any compelling reasons to view things as being about 'modified Mellin transforms', as you put it.
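For what it's worth, the corrected identity (with \( p(-n) \) in place of \( p(n) \)) is easy to check numerically. Here is a small sketch in Python with mpmath, under the assumed toy choice \( p(s) = 1/(s + 1/2) \) (one simple pole at \( s_1 = -1/2 \), residue \( 1 \)); the example and all names are mine, not from the thread:

```python
import mpmath as mp

mp.mp.dps = 30  # high working precision for the comparison

# Assumed toy example (not from the thread): p(s) = 1/(s + 1/2),
# a single simple pole at s1 = -1/2 with residue a1 = 1.
s1 = mp.mpf('-0.5')
p = lambda s: 1/(s - s1)

z = mp.mpf(2)

# Residue side: the residue of Gamma(s) p(s) z^(-s) at s = -n is
# p(-n) (-z)^n / n!  (note p(-n), not p(n)); then add the pole of p.
series = mp.nsum(lambda n: p(-n) * (-z)**n / mp.factorial(n), [0, mp.inf])
pole_term = mp.gamma(s1) * z**(-s1)  # a1 * Gamma(s1) * z^(-s1), with a1 = 1
residue_side = series + pole_term

# Integral side: (1/(2 pi i)) * integral of Gamma(s) p(s) z^(-s) along
# the vertical line Re(s) = 1; parametrize s = 1 + i*t, so ds = i dt.
integrand = lambda t: mp.gamma(1 + 1j*t) * p(1 + 1j*t) * z**(-(1 + 1j*t))
integral_side = mp.quad(integrand, [-mp.inf, mp.inf]) / (2*mp.pi)

print(residue_side)
print(integral_side)  # the two sides should agree to high precision
```

This is only a consistency check under these assumptions; moving the line or choosing a different \(p\) needs the same convergence care as above.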
Quote:Where \(h\) is holomorphic in a neighborhood of \(s_j\). It is the residue of \(p\) at \(s = s_j\). The beauty doesn't end here though. No, no it doesn't. We also get that:

Hmm, maybe now I can start to see why the Mellin transform idea could be useful--because the (inverse) Mellin transform is invertible. If we rewrite in more suggestive notation again, we have that
\[
\Gamma(s) p(s) = \int_0^\infty H(x)x^{s-1}\,dx\\
\]
BUT: we only get this in the restricted strip \(A < \Re(s) < 1\), where \(A = \max_{1\le j \le K} \Re(s_j)\). What happens if we remove that singularity--so pick \(\Re(s_J) = A\)? Then:
\[
\Gamma(s) p(s) = \int_0^\infty \left(H(x) - a_J \Gamma(s_J)x^{-s_J}\right)x^{s-1}\,dx\\
\]
\[2\pi i \frac{\csc(\pi s)}{2i} \frac{1}{s!}p(-s) = \int_0^\infty H(x)x^{-s-1}\,dx\]
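As a minimal sanity check of the relation \( \Gamma(s)p(s) = \int_0^\infty H(x)x^{s-1}\,dx \), take the degenerate case \( p \equiv 1 \) (no poles of \(p\) at all), where \( H(x) = \sum (-x)^n/n! = e^{-x} \) and the strip is \( 0 < \Re(s) < 1 \). A quick mpmath sketch (the specific \(s\) and variable names are mine):

```python
import mpmath as mp

mp.mp.dps = 25

# Degenerate case p(s) = 1 (no poles of p): H(x) = exp(-x), so the claim
# Gamma(s) p(s) = integral_0^inf H(x) x^(s-1) dx reduces to Euler's integral,
# valid in the strip 0 < Re(s) < 1 (indeed for all Re(s) > 0 here).
s = mp.mpf('0.7')
mellin = mp.quad(lambda x: mp.exp(-x) * x**(s - 1), [0, mp.inf])

print(mellin)
print(mp.gamma(s))  # should match mellin
```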
Okay, I think I maybe have a sense of how this connects now. Let's take some arbitrary invertible integral transform on some series \( H(x) = \sum_n f(n,x) \), where \( f(n,x) \) is defined only at the non-negative integers \( n \), and \( H(x) \) is the analytic continuation. If we have that
\[ \int_{c - i \infty}^{c + i \infty} f(n,x)F(n) dn = H(x)\]
Then we can't compute the LHS, since it involves evaluating \( f(n,x) \) at values of \( n \) where it is not defined. However, the RHS is hopefully a well-defined complex function. So then, by inverting our integral transform (denoted \( \mathfrak{I}^{-1} \)), we should obtain that
\[ \mathfrak{I}^{-1}\left\{\int_{c - i \infty}^{c + i \infty} f(n,x)F(n)\, dn\right\} = \mathfrak{I}^{-1}\{H(x)\} \implies\]
\[ f(n,x) = \mathfrak{I}^{-1}\{H(x)\}\]
But the RHS makes sense, and provides a definition for \( f \) at values where it wasn't defined before. This sounds like a promising approach--one could study how to continue functions defined on the integers by looking at the cases in which it makes sense to talk about the inverse of a given integral transform.
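For the Mellin transform specifically, this inversion idea is essentially Ramanujan's master theorem: if \( H(x) = \sum_{n \ge 0} f(n) \frac{(-x)^n}{n!} \), then \( \int_0^\infty H(x) x^{s-1}\,dx = \Gamma(s) f(-s) \), so dividing the Mellin transform of \( H \) by \( \Gamma(s) \) continues \( f \) off the integers. A hedged numerical sketch with the assumed toy choice \( f(n) = 1/(n+1) \), for which \( H(x) = (1 - e^{-x})/x \) in closed form:

```python
import mpmath as mp

mp.mp.dps = 25

# Assumed toy choice (mine): f(n) = 1/(n+1) on the non-negative integers, so
# H(x) = sum_{n>=0} f(n) (-x)^n / n! = (1 - exp(-x)) / x in closed form.
# expm1 avoids catastrophic cancellation for small x.
H = lambda x: -mp.expm1(-x) / x

# Ramanujan-master-theorem style continuation, valid here for 0 < Re(s) < 1:
#   integral_0^inf H(x) x^(s-1) dx = Gamma(s) * f(-s).
# Substituting x = exp(t) makes the integrand decay exponentially on both ends.
s = mp.mpf('0.4')
mellin = mp.quad(lambda t: H(mp.exp(t)) * mp.exp(s*t), [-mp.inf, mp.inf])
f_cont = mellin / mp.gamma(s)

print(f_cont)
print(1/(1 - s))  # f(-s) = 1/(-s + 1): the continuation off the integers
```

The division by \( \Gamma(s) \) is exactly the "invert the transform" step: it recovers \( f \) at the non-integer point \( -s \).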
Quote:Think of it, as all the steps you are doing with your "rearrangement"--which is technically wrong--the reason it works is because if you cast it in a Mellin Transform light; everything works; and the Residues pop out

To give some background, a lot of my initial attempts in the past were centered around approaches very similar to the Mellin transform, but they didn't quite work out. The problem is the Mellin transform doesn't seem to get us any closer to understanding the inconsistency in picking up residues. Why do some series (like (1) \( \frac{1}{1+k^z} \)) not pick up residues, while other similar series (like (2) \( \frac{1}{1+z^k} \)) do pick up residues? A second trickiness is that if I just take the integral along a certain line, it won't always work; sometimes the residues that need to be picked up can only be properly picked up with a more complicated contour. However, this is just from my naive approaches--if you have something more sophisticated in mind for how to get the residues to pop out correctly, I'd be interested to know!