Fractional Integration
#1
I've recently posted a question on mathoverflow which some of you who know fractional calculus may be able to answer: https://mathoverflow.net/questions/44033...-sum-fn-nx
I would appreciate any thoughts on the matter

Thanks, 
Caleb
#2
(02/07/2023, 09:32 PM)Caleb Wrote:

Hey, Caleb!

What you have essentially detailed is the Exponential Differintegral, or the Riemann-Liouville Differintegral. The standard way to write this is to use the Mellin transform, but there are many different possible expansions for it.

In essence, the way I like to write it is as:

\[
\frac{d^{-s}}{dx^{-s}} f(x) = \frac{1}{\Gamma(s)} \int_0^\infty f(x-y)y^{s-1}\,dy\\
\]

You will notice instantly that:

\[
\frac{d^{-s}}{dx^{-s}} e^x = e^x\\
\]

Now, this is not the entire definition; the full definition would be written using arcs in \(\mathbb{C}\). Assuming \(f\) is integrable on \(\gamma\), where \(\gamma(0) = 0\) and \(\gamma(\infty) = \infty\) on the Riemann sphere, we write:

\[
\frac{d^{-s}}{dx^{-s}} f(x) = \frac{1}{\Gamma(s)} \int_\gamma f(x-y)y^{s-1}\,dy\\
\]

So, for example, if we take \(f(x) = e^{-x}\), we want to integrate across \((-\infty,0]\), which gives the formula:

\[
\frac{d^{-s}}{dx^{-s}} e^{-x} = \frac{e^{-\pi i s}}{\Gamma(s)} \int_0^\infty f(x+y)y^{s-1}\,dy = e^{-\pi i s} e^{-x}\\
\]

This generalizes perfectly to:

\[
\frac{d^{-s}}{dx^{-s}} e^{\lambda x} = \lambda^{-s} e^{\lambda x}\\
\]
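This eigenvalue identity is easy to sanity-check numerically. Below is a small sketch (not part of the original argument): composite Simpson quadrature on a truncated interval, with the truncation point `upper` and step count `n` as assumed parameters.

```python
import math

def differintegral(f, x, s, upper=40.0, n=4000):
    """Approximate (1/Gamma(s)) * int_0^inf f(x - y) y^(s-1) dy by
    composite Simpson's rule on [0, upper]; assumes f(x - y) decays
    fast enough in y that the tail beyond `upper` is negligible."""
    h = upper / n
    total = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        m = 0.5 * (a + b)
        # for s > 1 the integrand tends to 0 at y = 0, so the endpoint is safe
        fa = f(x - a) * a ** (s - 1) if a > 0 else 0.0
        total += (b - a) / 6 * (fa + 4 * f(x - m) * m ** (s - 1)
                                + f(x - b) * b ** (s - 1))
    return total / math.gamma(s)

s, x = 1.5, 0.3
# d^{-s}/dx^{-s} e^x = e^x
print(abs(differintegral(math.exp, x, s) - math.exp(x)))
# d^{-s}/dx^{-s} e^{2x} = 2^{-s} e^{2x}
print(abs(differintegral(lambda t: math.exp(2 * t), x, s)
          - 2 ** (-s) * math.exp(2 * x)))
```

Both differences come out tiny, matching \(\frac{d^{-s}}{dx^{-s}} e^{\lambda x} = \lambda^{-s} e^{\lambda x}\) for \(\lambda = 1, 2\).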

And this is a rigorous operator on a specific space of entire functions \(f\): those which have some sort of decay at \(\infty\).


The relation between these operators has been known for centuries; we can rewrite Riemann's famous expression for the zeta function as:

\[
\frac{d^{-s}}{dx^{-s}}\Big{|}_{x=0} \frac{e^{-x}}{1-e^{-x}} = \frac{d^{-s}}{dx^{-s}}\Big{|}_{x=0} \sum_{n=1}^\infty e^{-nx} = \zeta(s)\\
\]
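This representation can also be tested numerically: since \(\sum_{n\ge 1} e^{-ny} = e^{-y}/(1-e^{-y})\), the claim is that \(\frac{1}{\Gamma(s)} \int_0^\infty \frac{e^{-y}}{1-e^{-y}} y^{s-1}\,dy = \zeta(s)\) for \(\Re(s) > 1\). A sketch (quadrature parameters are assumed; `-expm1(-y)` is used for \(1-e^{-y}\) to avoid cancellation):

```python
import math

def zeta_via_mellin(s, upper=60.0, n=6000):
    """(1/Gamma(s)) * int_0^inf y^(s-1) e^(-y)/(1 - e^(-y)) dy,
    by composite Simpson on a truncated interval; valid for Re(s) > 1."""
    def g(y):
        if y == 0.0:
            y = 1e-12  # dodge the removable 0/0 at y = 0
        return y ** (s - 1) * math.exp(-y) / (-math.expm1(-y))
    h = upper / n
    total = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        total += (b - a) / 6 * (g(a) + 4 * g(0.5 * (a + b)) + g(b))
    return total / math.gamma(s)

# zeta(2) = pi^2 / 6
print(abs(zeta_via_mellin(2.0) - math.pi ** 2 / 6))
```

The same routine at \(s = 3\) lands on Apéry's constant \(\zeta(3) \approx 1.2020569\), which is a good second data point.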

As to your question, I am a tad confused, but I believe this does exist, though you would prove it differently. I believe you are absolutely correct; but if memory serves me right, this is a rewording of a known result.

But just for fun, let's prove your result.

Okay, so take the function:

\[
\frac{d^{s}}{dx^{s}}\Big{|}_{x=0} f(x)= F(s)\\
\]

And let's take your function:

\[
G(s) = f^{(s)}(0)\\
\]

Now set \(H(s) = F(s) - G(s)\), so that \(H(n) = 0\) for all \(n\ge 0\). The function \(F(s)\) is exponentially bounded, in the sense that \(|F(s)| = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) for some \(\tau \in (0,\pi/2)\) and \(\rho \in \mathbb{R}^+\). Your function \(G(s)\) also lives in this space; I'm a little too lazy to prove it right now, but trust me, it does. Therefore \(H(s)\) is also in this space.

The thing is: if \(H(s) = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) and \(H\) is holomorphic for \(\Re(s) > 0\), then \(H(n) = 0\) for all \(n \ge 0\) forces \(H=0\). This is what I call the Ramanujan Identity Theorem, as it's a direct corollary of Ramanujan's master theorem.
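For reference, Ramanujan's Master Theorem in its usual form states that, under suitable growth conditions on \(\varphi\) (essentially the exponential bound above),

\[
\int_0^\infty x^{s-1}\left(\sum_{k=0}^\infty \frac{\varphi(k)}{k!}(-x)^k\right)dx = \Gamma(s)\,\varphi(-s)\\
\]

Applied to \(H\): since \(H(n) = 0\) for all \(n \ge 0\), the series inside vanishes identically, and the growth bound makes the interpolation unique, which is what forces \(H \equiv 0\).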

I can add more details if you like.

In short, from my brief analysis you are absolutely correct, and this is an expression for the Exponential Differintegral!

Great job!

EDIT: Also, please note this is just a rough walkthrough. You do need more elbow grease to iron everything out. As for why I didn't answer the MO question: MO asks for a higher standard, and I'm too lazy to work out all the details right now!
#3
(02/08/2023, 03:38 AM)JmsNxn Wrote:
Thank you for your insightful response. I particularly liked the idea that \(\frac{d^{-s}}{dx^{-s}}\Big{|}_{x=0} \frac{e^{-x}}{1-e^{-x}} = \zeta(s) \), because it should allow me to recast the question in terms of asking when the fractional derivative commutes with other operations (for instance, without actually looking at it, I imagine that granting that the operation can be undistributed from the infinite summation would "prove" the result).

Your thoughts about \( H(s) = 0 \) are quite similar to my own; the bound \(H(s) = O(e^{\rho \Re(s) + \tau |\Im(s)|})\) is interesting. My own go-to bound for uniqueness is Carlson's: functions of exponential type less than \( \pi \) (i.e. \( |H(z)| < e^{\pi |z|} \)) are uniquely determined by their values on the integers. It's interesting to see that the bound you use is slightly less powerful on the whole complex plane but allows stronger growth on the real line. That's pretty cool; I had always suspected there was some way to trade off growth in the complex plane in exchange for some extra growth on the real line, and it's nice to see that such a thing can actually be done.

Actually, I will tell you how this question came about. In passing, I saw one of your posts about the Riemann-Liouville Differintegral, and how its values are uniquely determined under sufficient growth conditions. You had claimed in that post that this means the fractional integral is basically unique (with some extra conditions, like the derivative of exp being exp). However, the growth conditions are crucial for uniqueness; for instance, Carlson's theorem on the uniqueness of a complex function agreeing on an integer sequence requires exponential type less than \( \pi \), since otherwise \( \sin(\pi z) = 0 \) on the integers is a counterexample. So, without growth conditions, there isn't true uniqueness in interpolation problems; there are typically many different solutions.

However, in working with divergent series and analytic continuations, I run into sequences which don't satisfy the required growth conditions, and yet their analytic continuations depend on the interpolating sequence. In these cases, math itself is forced to pick which interpolation is correct. For instance, one case I considered before is \( \sum (-1)^n n! x^n \). Such a sequence grows far too fast for uniqueness, but if we choose the most natural option, replace \( (-1)^n \) with \( e^{\pi i z} \) and \( n! \) with \( \Gamma(z+1) \), and then do a contour integration in the complex plane, it gives the correct answer. Why does math "pick" \( \Gamma(z+1) \) over \( \Gamma(z+1) + \sin(\pi z)\)? Or, why doesn't it pick one of the other extensions of the gamma function (perhaps one that isn't log-convex, for instance)? I'd like to know how it makes its decision.
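The \( \sum (-1)^n n! x^n \) example can be made concrete: the \( \Gamma(z+1) \) replacement plus contour integration is equivalent to Euler's classical Borel-style resummation \( \int_0^\infty e^{-t}/(1+xt)\,dt \) (expand the geometric series under the integral and integrate term by term to recover \( (-1)^n n! x^n \)). A small numerical sketch comparing that integral with the optimally truncated series, with the quadrature truncation and step count as assumed parameters:

```python
import math

def euler_integral(x, upper=80.0, n=8000):
    """Resummation of sum (-1)^k k! x^k as int_0^inf e^(-t)/(1 + x t) dt,
    by composite Simpson's rule on a truncated interval."""
    g = lambda t: math.exp(-t) / (1.0 + x * t)
    h = upper / n
    total = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        total += (b - a) / 6 * (g(a) + 4 * g(0.5 * (a + b)) + g(b))
    return total

x = 0.1
# optimal truncation of the divergent series is around 1/x terms
partial = sum((-1) ** k * math.factorial(k) * x ** k for k in range(10))
print(abs(euler_integral(x) - partial))  # small: the series is asymptotic to the integral
```

For \( x = 0.1 \) the truncated series and the integral agree to a few parts in \(10^4\), which is the typical "exponentially small" accuracy of an optimally truncated asymptotic series.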

Thus my secret motivation for asking the question on mathoverflow is to study how math makes its choice. The thing is, if we pick the right choice of functions, then
\[\sum_{n=1}^\infty f(n)n^x- \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} \zeta(-x-k)\] 
will converge to some specific value, but \(f(n)\) will grow fast enough that uniqueness fails. In this case, math is forced to make a choice, and we can directly compute what choice math has made in these cases. Put another way, the difference of these two series induces a definition of fractional integration even past where uniqueness fails. Does it always pick something related to the Riemann-Liouville Differintegral? Are there conditions where it switches to one of the other definitions of the fractional integral? Is the Riemann-Liouville Differintegral a special case of something more general that always converges? Or maybe, does non-uniqueness induce branch cuts on the original functions, and do different non-unique choices correspond to different branches? Really, I think many of the cool and fascinating functions have fast growth rates, so such questions are necessary to grapple with if one wants to work with those objects in a serious way.

 
#4
(02/08/2023, 04:53 AM)Caleb Wrote:

These are questions I have asked myself for 12 years.

STUDY RAMANUJAN'S MASTER THEOREM AND THE MELLIN TRANSFORM!

The answer is so banal that you may not like it, but I'll give it a shot. I'll start with why math chooses \(\Gamma(z+1)\) from \(n!\).

Because:

\[
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(z) x^{-z}\,dz = e^{-x}\\
\]

And:

\[
\int_0^\infty e^{-x} x^{z-1} \, dx = \Gamma(z)\\
\]

Ramanujan showed, in a rough-handed way, that where \(H\) is a linear operator (or a function):

\[
\int_0^\infty e^{-Hx} x^{z-1}\,dx = \Gamma(z) H(z)\\
\]

Where then:

\[
e^{-Hx} = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(z) H(z) x^{-z}\,dz\\
\]

And the ONLY SOLUTIONS WHICH WORK ARE THE ONES WHICH HAVE CONVERGENT INTEGRALS!

This relates deeply to \(\sin(\pi z)\) and Carlson's theorem. Carlson's theorem is actually, from a historical perspective, a manner of justifying Ramanujan's Master Theorem in a much broader sense. He wanted to justify things Ramanujan just took for granted, lmao. In perfect Ramanujan form.
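Here is a toy instance of the master theorem, with an assumed test function (not from the thread): take \(\varphi(k) = k+1\), so that \(\sum_k \varphi(k)(-x)^k/k! = (1-x)e^{-x}\), and the theorem predicts a Mellin transform of \(\Gamma(s)\,\varphi(-s) = \Gamma(s)(1-s)\). A sketch with assumed quadrature parameters:

```python
import math

def mellin(f, s, upper=60.0, n=6000):
    """int_0^inf x^(s-1) f(x) dx by composite Simpson on [0, upper]
    (assumes f decays fast enough beyond `upper`)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        m = 0.5 * (a + b)
        fa = f(a) * a ** (s - 1) if a > 0 else 0.0  # integrand -> 0 at x = 0 for s > 1
        total += (b - a) / 6 * (fa + 4 * f(m) * m ** (s - 1)
                                + f(b) * b ** (s - 1))
    return total

s = 1.5
phi = lambda z: z + 1
lhs = mellin(lambda x: (1 - x) * math.exp(-x), s)
rhs = math.gamma(s) * phi(-s)  # master theorem: Gamma(s) * phi(-s)
print(abs(lhs - rhs))  # should be tiny
```

The point of the example: the Mellin transform sees only the interpolation \(\varphi\) evaluated at \(-s\), which is exactly the sense in which the convergent integral "selects" one extension of the coefficient sequence.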
#5
(02/08/2023, 06:03 AM)JmsNxn Wrote:
Hmm, I will think about this some more; I need to ruminate on the ideas. Though, to give credence to your claim: gamma is the 'smallest' possible extension of \(n!\), in the sense that it is the unique extension that is bounded in vertical strips of the complex plane. So perhaps the choice is always the smallest extension, which would fit with the idea that it's the one with convergent integrals. Also, I think the Master Theorem idea relates to a question I asked on MO a couple of months ago; you might find some of the answers others provided interesting: https://mathoverflow.net/questions/43549...nd-sum-n-1
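The uniqueness fact alluded to here is Wielandt's theorem; to spell it out for reference (a standard statement, worth double-checking against a text): if \(F\) is holomorphic on the half-plane \(\Re(z) > 0\), bounded on the strip \(1 \le \Re(z) < 2\), and satisfies

\[
F(z+1) = z F(z), \qquad F(1) = 1\\
\]

then \(F = \Gamma\). So boundedness in a vertical strip plus the functional equation already pins gamma down, with no log-convexity needed.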

I'm happy to hear all of your responses, James. With all of your insights, I might be able to answer some questions I've been wondering about for a long time now. Thank you for your time in answering my many questions!
#6
(02/08/2023, 06:19 AM)Caleb Wrote: Also, I think the Master Theorem idea relates to some question I asked on MO a couple months ago-- you might find some of the answers others provided interesting: https://mathoverflow.net/questions/43549...nd-sum-n-1 

Yes, just reading this question, I agree with Tom Copeland. He has answered questions of mine before on MO (I've had like 5 throwaway accounts on MO for asking questions like yours). I've interacted with him a good number of times regarding integral representations, and he's a good person to talk to about anything to do with them. Ramanujan's work is actually a Fourier transform result, which is at the root of all integral representations. Listen to Tom Copeland...
#7
(02/08/2023, 06:35 AM)JmsNxn Wrote:

I'm more into Laplace transforms.

I mentioned a few integral transforms in the fake function thread:

https://math.eretrandre.org/tetrationfor...63&page=21

and other pages there.

Of course we all did, but I wanted to point that out.

***

I think that, in general, for functions meromorphic on \(\mathbb{C}\), the Ramanujan Master Theorem gives you the right values.

regards

tommy1729
#8
(02/08/2023, 12:46 PM)tommy1729 Wrote:

Analytic continuation and symmetry are essential; see this related issue:

https://math.stackexchange.com/questions...0-ns-n-s-1

And likewise for its sum function.

***

I would like to point out that all this probably relates to continuum sums and continuum products.

The continuum product is problematic around zeros of a function, for a logical reason.
By taking logarithms, this implies continuum sums are problematic around logarithmic singularities.

So I assume that if a counterexample to Caleb's question exists, it relates to logarithmic singularities.

Natural boundaries are possible issues too, because then the sum is not defined everywhere (as the function is not analytic everywhere, even where it is defined).

I think essential singularities are issues too.

Basically I am saying that only the non-analytic points matter, which makes sense since the techniques work for polynomials, i.e., truncated Taylor series.

Going around poles is easy in the Riemann surface/analytic continuation sense; after all, there is only one path (not multivalued), which also shows why it works.


regards

tommy1729
Reply
#9
(02/08/2023, 12:55 PM)tommy1729 Wrote:

Yes, I linked the ideas of continuum sums, summability methods, analytic continuation, and sums.

Like the old dying man said in the kung fu movie:

"Remember, Billy, all styles and techniques are related."

I believe this to be true for math too.


regards

tommy1729
Reply
#10
(02/08/2023, 06:03 AM)JmsNxn Wrote: These are questions I have asked myself for 12 years

STUDY RAMANUJAN'S MASTER THEOREM AND THE MELLIN TRANSFORM!

The answer is so banal that you may not like it, but I'll give it a shot. I'll start with why math chooses \(\Gamma(z+1)\) as the continuation of \(n!\).

Because:

\[
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(z) x^{-z}\,dz = e^{-x}\\
\]

And:

\[
\int_0^\infty e^{-x} x^{z-1} \, dx = \Gamma(z)\\
\]
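The Mellin pair above can be checked numerically. A sketch in plain Python (the Lanczos approximation for \(\Gamma\) and the trapezoid rule on the vertical line are my choices for illustration, not a production implementation), recovering \(e^{-x}\) from the contour integral:

```python
import cmath, math

# Lanczos approximation (g = 7) for Gamma(z) at complex z; this is the
# standard published coefficient set.
LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
           771.32342877765313, -176.61502916214059, 12.507343278686905,
           -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = LANCZOS[0]
    for i in range(1, 9):
        x += LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# (1/2pi) * int Gamma(c+it) x^{-(c+it)} dt by the trapezoid rule;
# |Gamma(c+it)| decays like e^{-pi|t|/2}, so truncating at |t| = T is harmless.
def inverse_mellin_gamma(x, c=2.0, h=0.05, T=40.0):
    n = int(T / h)
    total = 0j
    for k in range(-n, n + 1):
        s = complex(c, k * h)
        total += cgamma(s) * x ** (-s)
    return (total * h / (2 * math.pi)).real

x0 = 1.7
recovered = inverse_mellin_gamma(x0)      # should match e^{-1.7} = 0.18268...
```

Because the integrand decays exponentially along \(\Re(s)=c\), the plain trapezoid rule is already spectrally accurate here.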

Ramanujan showed, in a rough-handed way, that where \(H\) is a linear operator (or a function):

\[
\int_0^\infty e^{-Hx} x^{z-1}\,dx = \Gamma(z) H(z)\\
\]

Then, by Mellin inversion:

\[
e^{-Hx} = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(z) H(z) x^{-z}\,dz\\
\]

And the ONLY SOLUTIONS WHICH WORK ARE THE ONES WHICH HAVE CONVERGENT INTEGRALS!
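As a concrete instance of the pair above: in the classical statement, for \(f(x)=\sum_{k\ge 0}\phi(k)\frac{(-x)^k}{k!}\) one has \(\int_0^\infty f(x)x^{s-1}\,dx = \Gamma(s)\phi(-s)\) (the sign convention in the argument of \(\phi\) varies between write-ups). A sketch with the illustrative choice \(\phi(k)=\frac{1}{k+1}\), so that \(f(x)=\frac{1-e^{-x}}{x}\):

```python
import math

# Exp-sinh (double-exponential) quadrature on (0, inf):
# substitute x = exp((pi/2) sinh t) and apply the trapezoid rule in t.
def quad_0_inf(f, h=0.05, T=5.0):
    n = int(T / h)
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.exp(0.5 * math.pi * math.sinh(t))
        total += f(x) * 0.5 * math.pi * math.cosh(t) * x
    return total * h

# Illustrative choice phi(k) = 1/(k+1), giving
#   f(x) = sum_k phi(k) (-x)^k / k! = (1 - e^{-x}) / x,
# with predicted Mellin transform Gamma(s) * phi(-s).
phi = lambda k: 1.0 / (k + 1)
f = lambda x: -math.expm1(-x) / x         # (1 - e^{-x})/x, computed stably

s = 0.5
lhs = quad_0_inf(lambda x: f(x) * x ** (s - 1))
rhs = math.gamma(s) * phi(-s)             # = 2*sqrt(pi) ~ 3.5449
```

At \(s=\tfrac12\) the prediction is \(\Gamma(\tfrac12)\,\phi(-\tfrac12)=2\sqrt{\pi}\), which the quadrature reproduces.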

This relates deeply to \(\sin(\pi z)\) and Carlson's theorem. Carlson's theorem is, from a historical perspective, a way of justifying Ramanujan's Master Theorem in a much broader sense. Carlson wanted to justify things Ramanujan just took for granted, lmao. In perfect Ramanujan form

When I think about this some more, I think
Quote:"And the ONLY SOLUTIONS WHICH WORK ARE THE ONES WHICH HAVE CONVERGENT INTEGRALS!"
is not a principle that works in general. If \( \Gamma \) is singled out as the unique continuation with convergent integrals, then \( \frac{1}{n!} \) grows pretty large in the imaginary direction, but \( \frac{1}{n! + 2\sin(\pi z)} \) is quite small, and so it will also lead to convergent integrals. Are there some other conditions you have in mind in general?

To be more particular: if I have a specific sequence \( a_n \) on the integers, what conditions are you proposing determine the unique analytic continuation? Or, if I have a sequence \( a_n \) and an analytic continuation \( A(z) \), what specific conditions determine whether this is the right analytic continuation? In the case of \( a_n = \frac{1}{n!} \), what do your conditions say is the right answer? (Or is more information needed, besides just the sequence, to determine the right analytic continuation?)
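The two candidate continuations of \(\frac{1}{n!}\) can be compared numerically. A sketch in plain Python (hand-rolled Lanczos \(\Gamma\) for illustration; I'm reading \(n!\) as \(\Gamma(z+1)\), so the second interpolant is \(\frac{1}{\Gamma(z+1)+2\sin(\pi z)}\)):

```python
import cmath, math

# Lanczos approximation (g = 7) for Gamma(z) at complex z; this is the
# standard published coefficient set.
LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
           771.32342877765313, -176.61502916214059, 12.507343278686905,
           -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = LANCZOS[0]
    for i in range(1, 9):
        x += LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# Two interpolants of a_n = 1/n!; they agree at the nonnegative integers
# because sin(pi*n) = 0 there.
f1 = lambda z: 1 / cgamma(z + 1)
f2 = lambda z: 1 / (cgamma(z + 1) + 2 * cmath.sin(math.pi * z))

agree = all(abs(f1(n) - f2(n)) < 1e-9 for n in range(6))

# On a vertical line they behave oppositely: |Gamma(c+it)| ~ e^{-pi|t|/2},
# so f1 blows up, while |sin(pi*z)| ~ e^{pi|t|}/2 makes f2 tiny.
z = complex(1, 20)
big, small = abs(f1(z)), abs(f2(z))
```

So both interpolate the same data at the integers, but \(1/\Gamma(z+1)\) blows up on vertical lines while the perturbed version decays: convergence of the integrals alone does not single out \(1/\Gamma(z+1)\), which is exactly the point of the question.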
Reply

