02/08/2023, 04:53 AM
(02/08/2023, 03:38 AM)JmsNxn Wrote:
Thank you for your insightful response--I particularly liked the idea that \(\frac{d^{-s}}{dx^{-s}}\Big|_{x=0} \frac{e^{-x}}{1-e^{-x}} = \zeta(s)\), because it should allow me to recast the question in terms of asking when the fractional derivative commutes with other operations (for instance, without actually checking, I imagine that showing the operation can be distributed through the infinite summation would "prove" the result).

(02/07/2023, 09:32 PM)Caleb Wrote:
I've recently posted a question on mathoverflow which some of you who know fractional calculus may be able to answer: https://mathoverflow.net/questions/44033...-sum-fn-nx
I would appreciate any thoughts on the matter.
Thanks,
Caleb
Hey, Caleb!
What you have essentially detailed is the Exponential Differintegral--or the Riemann-Liouville Differintegral. The standard way to write this is via the Mellin transform, but there are many different possible expansions for it.
In essence, the way I like to write it is as:
\[
\frac{d^{-s}}{dx^{-s}} f(x) = \frac{1}{\Gamma(s)} \int_0^\infty f(x-y)y^{s-1}\,dy\\
\]
You will notice instantly that:
\[
\frac{d^{-s}}{dx^{-s}} e^x = e^x\\
\]
Now, this is not the entire definition; the full definition is written using arcs in \(\mathbb{C}\). Assuming \(f\) is integrable on \(\gamma\), where \(\gamma(0) = 0\) and \(\gamma(\infty) = \infty\) on the Riemann sphere, we write:
\[
\frac{d^{-s}}{dx^{-s}} f(x) = \frac{1}{\Gamma(s)} \int_\gamma f(x-y)y^{s-1}\,dy\\
\]
So, for example, if we take \(f(x) = e^{-x}\), we want to integrate across \([-\infty,0]\), upon which we get the formula:
\[
\frac{d^{-s}}{dx^{-s}} e^{-x} = \frac{e^{-\pi i s}}{\Gamma(s)} \int_0^\infty e^{-(x+y)}\,y^{s-1}\,dy = e^{-\pi i s} e^{-x}\\
\]
This generalizes perfectly to:
\[
\frac{d^{-s}}{dx^{-s}} e^{\lambda x} = \lambda^{-s} e^{\lambda x}\\
\]
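As a quick numerical sanity check (a sketch in Python with mpmath; the helper name `differintegral` is mine, not standard notation), one can verify both identities directly for real \(\lambda > 0\), where no arcs are needed:

```python
# Numerical sanity check of d^{-s}/dx^{-s} e^{lambda x} = lambda^{-s} e^{lambda x}
# for real lambda > 0, using the integral definition along [0, oo).
from mpmath import mp, mpf, gamma, exp, quad

mp.dps = 25  # working precision (decimal digits)

def differintegral(f, x, s):
    # (1/Gamma(s)) * int_0^oo f(x - y) * y^(s-1) dy
    return quad(lambda y: f(x - y) * y**(s - 1), [0, mp.inf]) / gamma(s)

x, s = mpf('0.7'), mpf('1.5')

# lambda = 1: the differintegral fixes e^x
print(differintegral(exp, x, s))
print(exp(x))

# lambda = 3: the differintegral picks up the factor lambda^(-s)
lam = mpf(3)
print(differintegral(lambda t: exp(lam * t), x, s))
print(lam**(-s) * exp(lam * x))
```

For \(\lambda < 0\) (as with \(e^{-x}\) above) the integrand blows up along \([0,\infty)\), which is exactly why the arc version of the definition is needed there.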
And this is a rigorous operator on a specific space of entire functions \(f\): those which have some sort of decay at \(\infty\).
The relation between these operators and the zeta function has been around for centuries; we can rewrite Riemann's famous expression for the zeta function as:
\[
\frac{d^{-s}}{dx^{-s}}\Big|_{x=0} \frac{e^{-x}}{1-e^{-x}} = \frac{d^{-s}}{dx^{-s}}\Big|_{x=0} \sum_{n=1}^\infty e^{-nx} = \zeta(s)\\
\]
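Evaluated at \(x=0\), this is just Riemann's classical integral \(\Gamma(s)\zeta(s) = \int_0^\infty \frac{y^{s-1}}{e^y - 1}\,dy\) for \(\Re(s) > 1\), which is easy to check numerically (Python/mpmath sketch; the function name is mine):

```python
# Riemann's classical integral representation:
# Gamma(s) * zeta(s) = int_0^oo y^(s-1) / (e^y - 1) dy,  for Re(s) > 1.
from mpmath import mp, mpf, gamma, zeta, exp, quad

mp.dps = 25

def zeta_via_integral(s):
    return quad(lambda y: y**(s - 1) / (exp(y) - 1), [0, mp.inf]) / gamma(s)

for s in [mpf(2), mpf(3)]:
    print(zeta_via_integral(s), zeta(s))
# s = 2 gives pi^2/6 = 1.6449...
```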
As to your question, I am a tad confused, but I believe this does exist, though you would prove it differently. I believe you are absolutely correct--but if memory serves me right, this is a rewording of a known result.
But just for fun, let's prove your result.
Okay, so take the function:
\[
\frac{d^{s}}{dx^{s}}\Big{|}_{x=0} f(x)= F(s)\\
\]
And let's take your function:
\[
G(s) = f^{(s)}(0)\\
\]
Now set \(H(s) = F(s) - G(s)\), so that \(H(n) = 0\) for all \(n\ge 0\). The function \(F(s)\) is exponentially bounded, in the sense that \(|F(s)| = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) for some \(\tau \in (0,\pi/2)\) and \(\rho \in \mathbb{R}^+\). Your function \(G(s)\) also lives in this space--I'm a little too lazy to prove it right now, but trust me, it does. Therefore \(H(s)\) is also in this space.
The thing is: if \(H(s) = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) and \(H\) is holomorphic for \(\Re(s) > 0\), then \(H(n) = 0\) for all \(n \ge 0\) forces \(H = 0\). This is what I call the Ramanujan Identity Theorem, as it's a direct corollary of Ramanujan's master theorem.
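For reference, one convenient form of the classical uniqueness statement underlying this (Carlson's theorem; the constant \(\pi\) here is the standard one) is:
\[
H \text{ holomorphic on } \Re(s) \ge 0,\quad |H(s)| \le C e^{\tau |s|} \text{ for some } \tau < \pi,\quad H(n) = 0 \ \forall n \in \mathbb{N} \;\Longrightarrow\; H \equiv 0.
\]
The function \(\sin(\pi s)\), of exponential type exactly \(\pi\), shows the constant is sharp.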
I can add more details if you like!
In short, from my brief analysis you are absolutely correct, and this is an expression for the Exponential Differintegral!
Great job!
EDIT: Also, please note this is just a rough walkthrough. You do need more elbow grease to iron everything out. That's why I didn't answer the MO question: MO asks for a higher standard, and I'm too lazy to work out all the details right now!
Your thoughts about \( H(s) = 0 \) are quite similar to my own thoughts--the bound \(H(s) = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) is interesting. My own go-to bound for uniqueness is Carlson's: functions of exponential type less than \( \pi \) (i.e. \( |H(z)| \le C e^{\tau |z|} \) for some \( \tau < \pi \)) are uniquely determined by their values on the integers. It's interesting to see that the bound you use is slightly less powerful on the whole complex plane but allows stronger growth on the real line. That's pretty cool--I had always suspected there was some way to trade off growth in the complex plane in exchange for some extra growth on the real line, and it's nice to see that such a thing can actually be done.
Actually, I will tell you how this question came about. In passing, I saw one of your posts about the Riemann-Liouville Differintegral, and how its values are uniquely determined under sufficient growth conditions. You had claimed in that post that this means the fractional integral is basically unique (with some extra conditions, like the derivative of exp being exp). However, the growth conditions are crucial for uniqueness--for instance, Carlson's theorem, for uniqueness of a complex function that agrees with given values on the integers, requires exponential type less than \( \pi \); otherwise \( \sin(\pi z) \), which vanishes on all the integers, could be added to any solution. So, without growth conditions there isn't true uniqueness in interpolation problems; there are typically many different solutions.
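A minimal numeric illustration of this failure mode (Python/mpmath sketch): \(\Gamma(z+1)\) and \(\Gamma(z+1) + \sin(\pi z)\) both interpolate \(n!\), yet disagree everywhere off the integers.

```python
# Two interpolants of n!: they agree at every nonnegative integer,
# but sin(pi*z) -- exponential type exactly pi, Carlson's edge case --
# makes them differ off the integers.
from mpmath import mp, mpf, gamma, sin, pi, factorial

mp.dps = 25

f = lambda z: gamma(z + 1)
g = lambda z: gamma(z + 1) + sin(pi * z)

for n in range(6):
    print(n, f(n), g(n), factorial(n))  # all three columns agree

z = mpf('0.5')
print(f(z), g(z))  # differ by sin(pi/2) = 1
```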
HOWEVER, in working with divergent series and analytic continuations, I run into sequences which don't satisfy the required growth conditions; nevertheless, their analytic continuations depend on the interpolating sequence. In these cases, math itself is forced to pick which interpolation is correct. For instance, one case I considered before is \( \sum (-1)^n n! x^n \). Such a sequence grows far too fast to be unique, but if we choose the most natural option and replace \( (-1)^n \) with \( e^{\pi i z} \) and \( n! \) with \( \Gamma(z+1) \), and then do a contour integration in the complex plane, it gives the correct answer. Why does math "pick" \( \Gamma(z+1) \) over \( \Gamma(z+1) + \sin(\pi z)\)? Or, why doesn't it pick one of the other extensions of the gamma function (perhaps one that isn't log convex, for instance)? I'd like to know how it makes its decision.
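The \( \sum (-1)^n n! x^n \) example can be made concrete (Python/mpmath sketch; the name `euler_borel` is mine): its Borel sum is \(\int_0^\infty \frac{e^{-t}}{1+xt}\,dt\), which for \(x>0\) equals \(\frac{1}{x} e^{1/x} E_1(1/x)\), and the divergent series' partial sums hug this value before blowing up.

```python
# Borel summation of Euler's divergent series sum_{n>=0} (-1)^n n! x^n:
# replacing n! by its integral representation and swapping sum and integral
# gives int_0^oo e^(-t)/(1 + x*t) dt = (1/x) e^(1/x) E_1(1/x) for x > 0.
from mpmath import mp, mpf, exp, quad, expint, factorial

mp.dps = 25

def euler_borel(x):
    return quad(lambda t: exp(-t) / (1 + x * t), [0, mp.inf])

x = mpf('0.1')
closed = exp(1 / x) * expint(1, 1 / x) / x
print(euler_borel(x))
print(closed)

# Partial sums of the divergent series approach the Borel value
# to within roughly the first omitted term, then diverge.
partial = sum((-1)**n * factorial(n) * x**n for n in range(10))
print(partial)
```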
Thus, my secret motivation for asking the question on mathoverflow is to study how math makes its choice. The thing is, if we pick the right choice of functions, then
\[\sum_{n=1}^\infty f(n)n^x- \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} \zeta(-x-k)\]
will converge to some specific value, but \(f(n)\) will grow fast enough that uniqueness fails. In this case, math is forced to make a choice, and we can directly compute what choice it has made. Put another way, the difference of these two series induces a definition of fractional integration even past where uniqueness fails. Does it always pick something related to the Riemann-Liouville Differintegral? Are there conditions where it switches to one of the other definitions of fractional integral? Is the Riemann-Liouville Differintegral a special case of something more general that always converges? Or maybe, does non-uniqueness induce branch cuts on the original functions, with different non-unique choices corresponding to different branches? Really, I think many of the cool and fascinating functions have fast growth rates, so such questions are necessary to grapple with if one wants to work with those objects in a serious way.



