Posts: 279
Threads: 94
Joined: Aug 2007
I am looking at writing a follow up to
R. Aldrovandi and L. P. Freitas,
Continuous iteration of dynamical maps.
I want to understand the connection between Bell matrices and partial Bell polynomials. I think I am ready to work on the issue of reconciling Gottfried's work with mine.
Daniel
Posts: 901
Threads: 130
Joined: Aug 2007
11/22/2022, 04:19 PM
(This post was last modified: 11/22/2022, 04:21 PM by Gottfried.)
(11/22/2022, 03:06 PM)Daniel Wrote: I am looking at writing a follow up to
R. Aldrovandi and L. P. Freitas,
Continuous iteration of dynamical maps.
I want to understand the connection between Bell matrices and partial Bell polynomials. I think I am ready to work on the issue of reconciling Gottfried's work with mine.
Nice! If I can, I'd like to help. As far as I have seen, my ansatz simply uses Aldrovandi's ideas, resp. Bell matrices / partial Bell polynomials, but because of the jungle of indexes and notations I gave up on constructing an explicit 1:1 correspondence between my (Carleman) matrices and the representations in Wikipedia or in Aldrovandi's article(s). For instance, my matrices (and their notation) are never meant for multivariate polynomials in \(x_1,x_2,x_3,...,x_n\), but for series in one variable \(x\) only, together with an (infinite) set of constant coefficients, sloppily indicated as \( [a,b,c,...] \) or a bit more precisely as \( a_0,a_1,a_2,...\) (For the fractional iteration, which includes a second parameter \( h \) for the iteration height, the coefficients are mostly polynomials in \( h \) whose degree depends on the exponent of \( x \), instead of constants \( [a,b,c,...] \), but this is not important at the moment.)
Only very rarely did I look at polynomials in \( x \) (having only finitely many coefficients); mostly I looked at series. For the latter, the numerical examinations cannot simply assume truncation to matrices of finite size, but must explicitly check for effects which might occur when the Carleman matrix assumes infinite size.
A simple example of such effects is the problem of diagonalizing the Pascal/binomial matrix, say \(P\), which provides the most basic operation on a "Vandermonde" vector: \([1,x,x^2,x^3,...] \to [1,x+1,(x+1)^2,(x+1)^3,...] \). For iterates, we can simply use the powers of \(P\). We can even implement fractional iteration of this, since fractional powers of \( P \) can be expressed easily. But while the finite version of \(P\) can *not* be diagonalized, the infinite version can, and this might imply properties which go undiscussed when we only extrapolate from the finite version.
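For concreteness, here is a minimal numerical sketch (Python/NumPy -- my choice of tools here, and the truncation size \(N\) is arbitrary): the truncated \(P\) sends \([1,x,x^2,...]\) to \([1,x+1,(x+1)^2,...]\), and its fractional power \(P^t\), computed via the matrix logarithm, shifts \(x\) by \(t\) instead. Because \(P\) is lower triangular, this particular operation is exact under truncation.
Code:
import numpy as np
from scipy.linalg import expm, logm
from math import comb

N = 8                                    # truncation size (arbitrary)
P = np.array([[comb(i, j) for j in range(N)] for i in range(N)], float)

x, t = 0.5, 0.25
v = np.array([x**k for k in range(N)])   # truncated "Vandermonde" vector

Pt = expm(t * logm(P))                   # fractional power P^t = exp(t log P)
w = Pt @ v                               # should equal [(x+t)^k]
print(np.allclose(w, [(x + t)**k for k in range(N)]))   # True up to rounding
Note that \(\log P\) is strictly lower triangular, hence nilpotent, in every finite truncation, so the entries of \(P^t\) are polynomials in \(t\) -- this is why fractional iterates of the shift come out so easily here.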
I think it would be impossible (or at least unreadable for the human eye) to express these latter, say "advanced", options in the index-rich language of WP or Aldrovandi.
So, if I can, I'd like to help, but I likely shall not be able to navigate the mentioned notational conventions of WP or Aldrovandi...
Kind regards -
Gottfried
Gottfried Helms, Kassel
Posts: 1,214
Threads: 126
Joined: Dec 2010
11/25/2022, 01:50 AM
(This post was last modified: 11/25/2022, 03:10 AM by JmsNxn.)
I'd just like to add a point, as I have read this paper, though a while ago.
This is the prime example, I'd say, of modeling using Heisenberg versus modeling using Schrödinger.
We are modeling a space of coefficients \(\{a_j\}_{j=0}^\infty \subset \mathbb{C}\) as we apply flow matrices \(H^t A\), where the object \(A = (a_0,a_1,...,a_j,...)\) and \(H^t\) is a matrix semigroup, \(H^t A = (b_0(t),b_1(t),...,b_j(t),...)\).
I am not trying to say any of Daniel's or Gottfried's deliberations are wrong. I'm trying to say that we can do the same thing with integrals. Schrödinger's approach is the same thing as Heisenberg's--it's just a matter of language.
To begin we project this space:
\[
\begin{align}
\sigma(A) &= \sum_{j=0}^\infty a_j z^j\\
\sigma(H^tA) &= \sum_{j=0}^\infty b_j(t)z^j\\
\end{align}
\]
The above matrix solution works perfectly, and what both of you guys are talking about is sound. The difference, I would say, is that much of this can be represented with Fourier transforms/integral transforms. And ultimately, we are saying the same thing.
This is seen most obviously by going back about a hundred years, to the Great War. Ramanujan had written down this odd equation:
\[
\Gamma(t)H^{-t} = \int_0^\infty \left(\sum_{n=0}^\infty H^n \frac{(-x)^n}{n!}\right)x^{t-1}\,dx\\
\]
This holds for square matrices/infinite matrices/general linear operators \(H\). Not to mention, it holds for holomorphic functions \(H(t) : \mathbb{C}_{\Re(t) > 0} \to \mathbb{C}\) which are appropriately bounded (Carlson went on to prove this rigorously, with no exceptions--what is now called Carlson's theorem).
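As a quick sanity check of the displayed identity -- a sketch only, in Python/SciPy, where the small positive definite \(H\) and the value of \(t\) are arbitrary choices of mine so that the integral converges -- note that the inner sum is just \(e^{-Hx}\):
Code:
import numpy as np
from scipy.linalg import expm, fractional_matrix_power
from scipy.integrate import quad
from scipy.special import gamma

H = np.array([[2.0, 0.5],
              [0.5, 3.0]])    # arbitrary positive definite H
t = 1.5                       # arbitrary, Re(t) > 0

# integrate each entry of e^{-Hx} x^{t-1} over (0, infinity)
lhs = np.array([[quad(lambda x, i=i, j=j: expm(-H * x)[i, j] * x**(t - 1),
                      0, np.inf)[0] for j in range(2)] for i in range(2)])
rhs = gamma(t) * fractional_matrix_power(H, -t)
print(np.allclose(lhs, rhs))  # True, up to quadrature error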
This relates perfectly to fractional calculus, because the Riemann-Liouville differintegral/the exponential differintegral is written as:
\[
\frac{d^{-z}}{dx^{-z}}\Big{|}_{x=0} \vartheta(x) = \frac{1}{\Gamma(z)}\int_0^\infty \vartheta(-x)x^{z-1}\,dx\\
\]
It owes its name to:
\[
\frac{d^{-z}}{dx^{-z}} e^x = e^x\\
\]
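(With the integral based at \(-\infty\), i.e. the Liouville form of the same operator, this fixed-point property reads \(\frac{1}{\Gamma(z)}\int_0^\infty e^{x_0-t}\,t^{z-1}\,dt = e^{x_0}\). A quick numerical check -- Python/SciPy, with arbitrary sample values of \(z\) and \(x_0\):)
Code:
from math import exp
from scipy.integrate import quad
from scipy.special import gamma

z, x0 = 1.3, 0.7     # arbitrary sample values, Re(z) > 0
val = quad(lambda t: exp(x0 - t) * t**(z - 1), 0, float('inf'))[0] / gamma(z)
print(val, exp(x0))  # both ~ 2.0138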
So now set:
\[
\vartheta(x) = \sum_{n=0}^\infty H^n\frac{x^n}{n!} = e^{Hx}\\
\]
Now when we apply the differintegral:
\[
\frac{d^{-z}}{dx^{-z}} e^{Hx} = H^{-z}e^{Hx}\\
\]
The majority of this appears in Ramanujan's notebooks, and it is additionally used in a lot of mathematics--primarily number theory. I just want to play devil's advocate and rejustify what you guys are already seeing. Please remember that \(H\) is an operator on a Hilbert space, or a linear operator on a vector space--the \(n\times n\) or \(\infty \times \infty\) scenarios.
So \(H\) acts on something, let's say the sequence \(A = (a_0,a_1,...,a_j,...)\). If \(H\) is well enough behaved, then \(H^{-z}\) is recoverable through this integral transform. And what's more, it's exactly what you guys are talking about--just integrals instead of infinite matrices. Heisenberg vs. Schrödinger.
We can write:
\[
H^{-z} \textbf{v} = H^{-z} (v_0,v_1,...,v_j,...) = (u_0(-z),u_1(-z),...,u_j(-z),...)\\
\]
Or we can write:
\[
\frac{1}{\Gamma(z)}\int_0^\infty e^{-Hx}x^{z-1}\,dx\; \textbf{v}\\
\]
Where both produce the same result!
Either way, love you guys. Thanks for reminding me of this paper, Daniel
Regards, James.
EDIT:
As you guys might not get what I mean by "Heisenberg vs. Schrodinger" I'll give a little history lesson.
The idea of infinite square matrices was invented by Heisenberg, where these infinite square matrices acted on an infinite vector space (this is all countable). Heisenberg talked about how there were eigenvalues to these well-developed matrices. The matrices were observables of momentum/position--discrete measurements you could apply to a vector. Hence Heisenberg's commutation relation \([Q,P] = QP - PQ = i\hbar\), and the uncertainty bound \(\Delta P\,\Delta Q \ge \hbar/2\) that follows from it. This is Heisenberg's construction of quantum physics.
Schrödinger is all about waves--functions which look like waves. These waves oscillate, and carry the same information as the infinite vector space. So we take \(\mathbf{v} = (v_0,v_1,...,v_j,...)\) and translate it into a wave \(\nu(x)\). Now, in Schrödinger's language, we can just take integrals of \(\nu\) which do the same thing as Heisenberg's matrices, just in a different language.
There were feuds about this for a good while; people couldn't believe both mathematicians/physicists were saying the same thing--until von Neumann coined the term "Hilbert space", which showed that both constructions are equivalent if we view infinite matrices like wave functions. This was based primarily on Hilbert's study of waves, and of integral operators on waves.
I'm just trying to streamline everything, boys.
Posts: 279
Threads: 94
Joined: Aug 2007
Thanks Gottfried. It turns out that Aldrovandi and Freitas' paper has a nice section on Bell polynomials before it moves on to Bell matrices. So I already have a reference for the connection between partial Bell polynomials and Bell matrices.
JmsNxn, thanks for the clarification of the connection between integrals and operators. As a habit, I try to look at new ideas and ask whether I am seeing something that can easily be generalized. I am comfortable with operators, but I have no background in integral transforms. I'm aware of the connection between Heisenberg matrices and hyperbolic flows. My first technical job was as a seismologist back in the late Seventies; I ate and drank waveforms, convolution filters and such.
While I have little formal math, I attempt to look at things in the context of Banach and Fréchet spaces. My specialty is working with total partitions, the combinatorial structure of iterated functions. I need to understand umbral calculus better, as I believe there is a neat representation of iterated functions there. I also think Hopf algebras are important, but they are beyond me at the moment. Definitely interested in the mathematics developed for QM and QFT.
I'd write more, but I am in the guts of writing a paper. Once again, thanks and best wishes.
Daniel
Posts: 1,214
Threads: 126
Joined: Dec 2010
(11/28/2022, 12:37 PM)Daniel Wrote: I need to understand umbral calculus better as I believe there is a neat representation of iterated functions there.
Skip umbral calculus and go straight to Ramanujan. I've studied both, and umbral calculus is just a naive, early, less rigorous version of Ramanujan's work. Ramanujan single-handedly showed how right umbral calculus was, but additionally did it with new tools. Upon which, more rigorous mathematicians solidified all of Ramanujan's observations. A lot of umbral calculus is "coincidences we can't explain"--Ramanujan gave the explanation.
Posts: 279
Threads: 94
Joined: Aug 2007
(11/30/2022, 01:13 AM)JmsNxn Wrote: (11/28/2022, 12:37 PM)Daniel Wrote: I need to understand umbral calculus better as I believe there is a neat representation of iterated functions there.
Skip umbral calculus and go straight to Ramanujan. I've studied both, and umbral calculus is just a naive, early, less rigorous version of Ramanujan's work. Ramanujan single-handedly showed how right umbral calculus was, but additionally did it with new tools. Upon which, more rigorous mathematicians solidified all of Ramanujan's observations. A lot of umbral calculus is "coincidences we can't explain"--Ramanujan gave the explanation.
JmsNxn, I'm really up for hearing more about Ramanujan's approach. I have volumes 2 and 5 of his notebooks. As far as umbral calculus goes, my understanding is that Rota put it on solid ground using operator theory and Sheffer sequences. Down with binomial hegemony--liberate all the calculi!
Daniel
Posts: 1,214
Threads: 126
Joined: Dec 2010
(11/30/2022, 02:06 AM)Daniel Wrote: (11/30/2022, 01:13 AM)JmsNxn Wrote: (11/28/2022, 12:37 PM)Daniel Wrote: I need to understand umbral calculus better as I believe there is a neat representation of iterated functions there.
Skip umbral calculus and go straight to Ramanujan. I've studied both, and umbral calculus is just a naive, early, less rigorous version of Ramanujan's work. Ramanujan single-handedly showed how right umbral calculus was, but additionally did it with new tools. Upon which, more rigorous mathematicians solidified all of Ramanujan's observations. A lot of umbral calculus is "coincidences we can't explain"--Ramanujan gave the explanation.
JmsNxn, I'm really up for hearing more about Ramanujan's approach. I have volumes 2 and 5 of his notebooks. As far as umbral calculus goes, my understanding is that Rota put it on solid ground using operator theory and Sheffer sequences. Down with binomial hegemony--liberate all the calculi!
Ramanujan's approach to everything he did was analysing sums as integral transforms. His work on modular forms, for example, involves a complicated transformation of a discrete sequence which, when summed, can be integrated in such a manner that you have 50% of the theory of modular forms. When it comes to umbral calculus, Ramanujan and the people near this idea gave credence to a lot of the classical knowledge. I mean, umbral calculus was always right, but it was never proven (especially the basic ideas).
For example, let's take a holomorphic function \(f(z) : \{\Re(z) > 0\} \to \{\Re(z) > 0\}\). Now, let's take a common expansion in umbral calculus, which does work. Let's write:
\[
\begin{align}
\Delta f(z) &= f(z+1) - f(z)\\
\Delta^n f(z) &= \Delta \Delta^{n-1}f(z)\\
\end{align}
\]
Let's now write \((z)_n\) for the Pochhammer symbol (here, the falling factorial \(z(z-1)\cdots(z-n+1)\)), wherein:
\[
\Delta (z)_n = n(z)_{n-1}\\
\]
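(A tiny symbolic check of this difference rule -- Python/SymPy, my choice of tool; SymPy's ff is the falling factorial, which is the reading of \((z)_n\) that makes the rule hold:)
Code:
import sympy as sp

z = sp.symbols('z')
for n in range(1, 6):
    lhs = sp.ff(z + 1, n) - sp.ff(z, n)   # Delta (z)_n
    rhs = n * sp.ff(z, n - 1)             # n (z)_{n-1}
    print(sp.simplify(lhs - rhs) == 0)    # True for each n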
From here, we write one of the cornerstones of umbral calculus:
\[
f(z) = \sum_{n=0}^\infty \left(\Delta^n f(0)\right)\frac{(z)_n}{n!}\\
\]
This is what's known as a Newton series. It shouldn't be new to many people. But actually proving this thing exists and converges is a difficult matter.
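To see it in action, take the sample choice \(f(z) = 2^z\) (mine, purely for illustration), for which \(\Delta^n f(0) = 1\) for every \(n\), so the series collapses to \(\sum_n (z)_n/n! = \sum_n \binom{z}{n}\). A short Python sketch, which also shows the slow convergence mentioned below:
Code:
z = 2.5                        # arbitrary evaluation point
term, total = 1.0, 0.0
for n in range(200):
    total += term              # term = (z)_n / n! = binom(z, n)
    term *= (z - n) / (n + 1)  # binom(z, n+1) = binom(z, n) (z-n)/(n+1)
print(total, 2**z)             # partial sum vs. the exact value ~5.6569
Even after 200 terms the agreement is only to a handful of digits, since the terms decay like a power of \(n\), not geometrically.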
Now, this is a defining identity of much of umbral calculus (and if you have this, many of the traditional umbral results follow similarly). The trouble is, this Newton series is very hard to sum, and it's unclear when it is summable. For this, we turn to Mellin/Ramanujan--hell, even Riemann used similar ideas. (I think one of the greatest problems with this series, especially throughout history, is that it does converge; but it converges really god damn slowly.)
Ramanujan's identification between the Mellin transform and difference equations continues this in a much better way. We start by assuming \(f(z) = O(e^{\rho|\Re(z)| + \tau|\Im(z)|})\) and we restrict \(|\tau| < \pi/2\). This means we allow exponential growth, but the growth rate in the imaginary direction is less than \(\pi/2\). Then, by Ramanujan's theorem, there exists:
\[
\vartheta(x) = \sum_{n=0}^\infty f(n)\frac{(-x)^n}{n!}\\
\]
This object is holomorphic, and additionally:
\[
\int_0^\infty \vartheta(x)x^{z-1}\,dx = \Gamma(z) f(-z)\\
\]
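A quick numerical check of this (Python/SciPy), with the sample choice \(f(n) = 1/(n+1)\) -- my pick, purely for illustration -- so that \(\vartheta(x) = (1-e^{-x})/x\) and the right-hand side is \(\Gamma(z)/(1-z)\); the integral needs \(0 < \Re(z) < 1\) here:
Code:
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

f = lambda n: 1.0 / (n + 1)
theta = lambda x: (1.0 - np.exp(-x)) / x   # = sum_n f(n) (-x)^n / n!
z = 0.5                                    # arbitrary, inside (0, 1)

g = lambda x: theta(x) * x**(z - 1)
lhs = quad(g, 0, 1)[0] + quad(g, 1, np.inf)[0]   # split at 1 for stability
rhs = gamma(z) * f(-z)
print(lhs, rhs)                            # both ~ 3.5449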
In that Mellin identity, \(\Gamma\) is Euler's Gamma function. From this sole manipulation we can construct Newton's series, and not only that, justify it rigorously. Let's write:
\[
\mathcal{F} \vartheta = \frac{1}{\Gamma(z)}\int_0^\infty e^{-x} \vartheta(x)x^{z-1}\,dx\\
\]
This can be written:
\[
\frac{d^{-z}}{dx^{-z}}\Big{|}_{x=0} e^{-x}\vartheta(x)\\
\]
If I write:
\[
\mathcal{F}\,\frac{d}{dx}\vartheta(x) = \Delta\,\mathcal{F}\vartheta(x)\\
\]
This should be apparent. If I write:
\[
\mathcal{F}\left\{x\,\vartheta(x)\right\}(z) = z\,\mathcal{F}\left\{\vartheta(x)\right\}(z+1)\\
\]
This should equally be apparent.
We get the beautiful identity:
\[
\frac{1}{(n-1)!} \int_0^\infty e^{-x}\vartheta(x)x^{n-1}\,dx = \Delta^{n} f(0)\\
\]
So now, let's do the straightforward transformation:
\[
\begin{align}
f(-z) &= \frac{1}{\Gamma(z)} \int_0^\infty \vartheta(x)x^{z-1}\,dx\\
&= \frac{1}{\Gamma(z)} \int_0^\infty e^xe^{-x}\vartheta(x)x^{z-1}\,dx\\
&= \sum_{n=0}^\infty \frac{1}{n!} \frac{1}{\Gamma(z)}\int_0^\infty e^{-x}\vartheta(x)x^{n+z-1}\,dx\\
&= \sum_{n=0}^\infty \Delta^n f(0) \frac{(-z)_n}{n!}\\
\end{align}
\]
Now, this stuff has been around since the 18th century; the integral transforms are old. But Ramanujan really set it in stone. Quite literally, Daniel, it's just \(e^{x} e^{-x} = 1\), expanded underneath an integral. There is still some work to do here, but once you have an expansion for a Newton series, you can do everything in umbral calculus.
That's all I'm trying to say.
Happy to keep talking. I have not read the references you suggested, but nothing less than love.
Posts: 374
Threads: 30
Joined: May 2013
Posts like this by James are really gold. I'd like to collect them and make a pdf, so that they will be safe from possible problems on the forum. There is a lot to study here.
Right now I can just make a quick, superficial comment: this makes me think of the conversation we had one year ago about the analogy sums : integrals = omega notation : composition integral.
This makes me wonder if we can add the missing columns of the analogy:
\(\displaystyle \sum / \int \,\sim\, \Omega / \int ...\bullet z\)
\(\Delta/ D \,\sim\, ?? / ??\)
\(n! / n! \,\sim\, ?? / ??\)
\(2^x / e^x \,\sim\, ?? / ??\)
\((x)_n / x^n \,\sim\, ??/??\)
\(\text{Newton} / \text{Taylor} \,\sim\, ?? / ??\)
\(\Delta^n / \frac{d^z}{dx^z} \,\sim\, ?? / ??\) fractional calculus?
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 279
Threads: 94
Joined: Aug 2007
Thanks JmsNxn, awesome stuff. I've been over it twice already, and now I'm ready to go through it closely, line by line.
Daniel
Posts: 1,214
Threads: 126
Joined: Dec 2010
12/16/2022, 04:20 AM
(This post was last modified: 12/16/2022, 04:26 AM by JmsNxn.)
(12/10/2022, 10:07 PM)MphLee Wrote: Posts like this by James are really gold. I'd like to collect them and make a pdf, so that they will be safe from possible problems on the forum. There is a lot to study here.
Right now I can just make a quick, superficial comment: this makes me think of the conversation we had one year ago about the analogy sums : integrals = omega notation : composition integral.
This makes me wonder if we can add the missing columns of the analogy:
\(\displaystyle \sum / \int \,\sim\, \Omega / \int ...\bullet z\)
\(\Delta/ D \,\sim\, ?? / ??\)
\(n! / n! \,\sim\, ?? / ??\)
\(2^x / e^x \,\sim\, ?? / ??\)
\((x)_n / x^n \,\sim\, ??/??\)
\(\text{Newton} / \text{Taylor} \,\sim\, ?? / ??\)
\(\Delta^n / \frac{d^z}{dx^z} \,\sim\, ?? / ??\) fractional calculus?
Oh God, MphLee! Don't even get my motor running on this--it just hurts my head thinking about it. There's definitely something like this, but you're trying to build the roof of a house before you have the walls, lmao. Don't distract me with what could be! lmao! I only ever got a rough Fourier transform for the compositional integral; I cannot do this with Ramanujan's theorem! It would definitely be super cool, and there's definitely something like this somewhere in there.
(12/10/2022, 10:11 PM)Daniel Wrote: Thanks JmsNxn, awesome stuff. I've reviewed it twice already and now I'm ready to closely review it line by line. 
Just here to help, Daniel! I may have fucked up some indices or some variable changes, lol--too busy to double check every single number--but the main idea is absolutely true!!!
I managed to iterate:
\[
\Delta^s f(z)\\
\]
That's in this paper from when I was an undergrad:
https://arxiv.org/abs/1503.06211
There are many more details on this Ramanujan \(\frac{d}{dz} \to \Delta\) correspondence.