Posts: 1,214
Threads: 126
Joined: Dec 2010
05/28/2021, 10:34 PM
(This post was last modified: 05/28/2021, 10:35 PM by JmsNxn.)
Yes, I know a good amount of Italians; none of them can speak Italian though, lol.
I added an edit, but you beat me to the punch.
And their main interest wasn't exactly iterating matrices, but rather iterating kernels,
\(
\mathcal{K} f = \int K(x,y)f(y)\,dy\\
\)
Which, once you do the whole eigendecomposition bologna and you're on a Hilbert space, is equivalent to iterating a matrix. Or some blather like that. God, I hate matrices.
Posts: 376
Threads: 30
Joined: May 2013
05/28/2021, 10:58 PM
(This post was last modified: 05/28/2021, 11:03 PM by MphLee.)
Man, I guess I'm supposed to understand that integral formula, but I can't. Can you elaborate a little bit? That K function confuses me.
About iterating kernels... if you read my story about ranks, what I'm doing, in some sense, is iterating a kind of kernel.
And the kernel is a preimage under the matrix, so iterating the kernel is iterating the matrix, even without the eigentheory "bologna" xD..
Edit. To be clear, if \( f\mapsto \Sigma_s(f)=fsf^{-1} \) is the operator, the "kernels" are \( f\mapsto [s,f] \). Do you remember when you were telling me, back in 2014, that that operator "is not as well defined as you think"?
It is like the indefinite integral (antiderivative). It is a preimage, inverse to differentiation, the same way we define preimages and kernels.
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
Hey, Mphlee,
I presume you're confused by the term "kernel." Mathematicians are rather terrible at naming things, so this is easily confused with the hundred other uses of the word kernel in the English language. As I'm kind of bored, I'll take you through the deep history and what I mean.
A "kernel" of a "linear operator" in hilbert spaces/functional analysis/operator theory is loosely related to matrices, but not exactly. Say you have a linear operator,
\(
\mathcal{K} : \mathcal{H} \to \mathcal{H}\\
\)
Where \( \mathcal{H} \) is a linear functional space. Let's just say Hilbert space for simplicity; that means it has a norm, an inner product, is sequentially complete, and a whole bunch of other nice things. Let's also assume this operator is, so to speak, "nice." There are plenty of operators that are "nice" (Hermitian is a good enough restriction, for example; so let's just say it's Hermitian). Then, there are canonically two ways of representing this operator.
The first is the matrix manner (interestingly, this is Heisenberg's interpretation vs. Schrödinger's interpretation, which is resolved by Von Neumann's martian mathematics). Take a sequence of orthonormal eigenvectors \( v_j \) (since this linear operator is Hermitian, we can always find such a basis if we assume the Hilbert space is separable (isomorphic to \( \ell^2 \))). This means, for every \( x \in\mathcal{H} \),
\(
x = \sum_j (x,v_j) v_j\\
\)
Where the brackets are the inner product. Now, we've assumed that,
\(
\mathcal{K} v_j = \lambda_j v_j\\
\)
So the first representation of this operator is,
\(
\mathcal{K}x = \sum_{j} \lambda_j(x,v_j) v_j\\
\)
By just applying the operator termwise. Which is, as I'm sure you're familiar, just changing the coordinate system and diagonalizing the operator \( \mathcal{K} \). This leads to a whole study of diagonalizing linear operators and changing coordinate systems with matrices of infinite dimension; looking for orthonormal vectors and conjugating by matrices lives in this realm.
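For intuition, here's a minimal finite-dimensional sketch of this first representation: a small Hermitian matrix stands in for \( \mathcal{K} \), and applying it termwise over the eigenbasis reproduces the ordinary matrix-vector product. (The matrix and vector here are arbitrary stand-ins of my own, not anything from the thread.)

```python
import numpy as np

# A small Hermitian matrix standing in for the operator K.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Orthonormal eigenvectors v_j (columns of V) and real eigenvalues lambda_j.
lam, V = np.linalg.eigh(A)

x = np.array([1.0, -1.0])

# K x = sum_j lambda_j (x, v_j) v_j -- the operator applied termwise.
Kx = sum(lam[j] * (x @ V[:, j]) * V[:, j] for j in range(len(lam)))

assert np.allclose(Kx, A @ x)
```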
The second realm (I can't remember for the life of me who proved it): there exists a representation of this operator \( \mathcal{K} \) as an integral. This is usually proved with the spectral theorem and a bunch of crazy nonsense I'd need to reread to properly quote, but it certainly exists for Hermitian operators. So permit me to introduce a function \( K(a,b) : S \times S \to \mathbb{C} \), where every element \( x \) of \( \mathcal{H} \) is itself a function \( S \to \mathbb{C} \).
Then, if \( K \) is the kernel of the operator \( \mathcal{K} \), we get,
\(
\mathcal{K}x(a) = \int_{S} K(a,b)x(b)\,db\\
\)
This is known as the integral representation of a linear operator. Particularly, if you ever talk about Hilbert spaces with people, by "kernel of a linear operator" they mean \( K \). This representation is usually the best one. I mean, can you imagine the Fourier transform represented in the above matrix method? Sounds god awful (Heisenberg, what were you thinking!?). So moving forward, what I meant was simple.
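To see the integral representation concretely: discretize \( S \) into grid points and the integral \( \int_S K(a,b)x(b)\,db \) collapses to a weighted matrix-vector product. A rough sketch, with a Gaussian kernel as a purely hypothetical example (my choice, not anything from the discussion):

```python
import numpy as np

# Discretize S = [0,1]; (Kx)(a) = ∫ K(a,b) x(b) db becomes a
# matrix-vector product weighted by the grid spacing db.
n = 200
s = np.linspace(0.0, 1.0, n)
db = s[1] - s[0]

K = np.exp(-(s[:, None] - s[None, :]) ** 2)  # a symmetric ("Hermitian") kernel
x = np.sin(np.pi * s)                        # an element of the function space

Kx = K @ x * db  # quadrature approximation of the integral operator

# The discretized kernel is symmetric, just as the operator is Hermitian.
assert np.allclose(K, K.T)
```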
Take a Hermitian operator, whose eigenvalues \( \lambda_j \) are (thankfully) real. Let's assume they're countable (our Hilbert space is separable, so this is okay). Let's assume that \( \lambda_j > 0 \). And let's let our eigenvectors \( v_j \) be orthonormal (they're already orthogonal because the operator is Hermitian, so normalizing is just multiplying by a constant). So our operator, again, is,
\(
\mathcal{K}x = \sum_j \lambda_j(x,v_j)v_j\\
\)
But additionally,
\(
\mathcal{K}^z x = \sum_j \lambda_j^z(x,v_j)v_j\\
\)
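For a finite Hermitian matrix with positive eigenvalues, this \( \mathcal{K}^z \) semigroup is easy to verify numerically. A sketch (the matrix is again an arbitrary stand-in of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # Hermitian, with positive eigenvalues
lam, V = np.linalg.eigh(A)

def A_pow(z):
    # K^z x = sum_j lambda_j^z (x, v_j) v_j, assembled as a matrix.
    return V @ np.diag(lam ** z) @ V.T

half = A_pow(0.5)
assert np.allclose(half @ half, A)     # semigroup law: K^{1/2} K^{1/2} = K
assert np.allclose(A_pow(2.0), A @ A)  # and integer z recovers plain powers
```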
AHA! We've iterated the operator and turned it into a semigroup (Von Neumann did masterful stuff using the same principle, but he was a master). But how do we do this if \( v_j,\lambda_j \) are non-obvious? How do we do this with minimal understanding of the operator, and only an integral representation...?
Well, what that stupid little matrix thing kind of says (which the U of T people and other professors pointed out; I never really thought it would be that important) is that we can just use the fractional derivative.
And that,
\(
\vartheta(w)x = \sum_{n=0}^\infty \mathcal{K}^n x\frac{w^n}{n!} = e^{\mathcal{K}w}x\\
\)
Satisfies,
\(
\frac{d^z}{dw^z}_{w=0} \vartheta(w)x = \mathcal{K}^z x\\
\)
I mean, it's evident; each \( \lambda_j^z \) is in \( E_\theta \)...
That's more or less what the U of T people and other professors were talking about, last I heard. I don't want to say much more though; I don't really know how priority works when talking about other people's research before publication. I'm not taking credit for any of that; I never really thought of it like that. I'm more interested in the cool kids' operators, like hyperoperators/iteration operators. Screw matrix blather.
Posts: 376
Threads: 30
Joined: May 2013
05/29/2021, 01:13 AM
(This post was last modified: 05/29/2021, 01:24 AM by MphLee.)
Omg... at first I was starting to panic... you were not talking about kernels of operators. Everywhere in algebra, linear algebra, functional analysis, matrix theory, groups, monoids, everywhere, even in Hilb, the category of Hilbert spaces, kernel has a universal meaning: it is the set of vectors on which the operator vanishes. It is the set of zeroes of the operator.
In fact you're talking about the integral kernel, or Hilbert–Schmidt kernel, of an operator!!!
Damn, you are always pushing my knowledge to its limits. I'm seriously shocked by this, actually. I was COMPLETELY unaware of this integral presentation and I wasn't expecting that hit... I mean, I was wondering since day one what an (infinity x infinity) matrix would look like, but damn... this is far worse than your Cauchy black magic.
Heisenberg vs. Schrödinger has to do with some very fundamental duality in mathematics; I read it somewhere multiple times... So if now you are telling me that on one side of the coin there is the matrix formalism for iteration... and on the other side there are integrals... I'm done. I can feel that this links up heavily with the composition integral too...
My head is exploding... I feel like I can see the holy grail... that's heavy stuff for our iteration business.
Those damn quantum mechanics guys were fkn paranormal 0.0
ps:
About priorities... idk either. I know less than you about how this works... but I can't unsee what you hinted at 0.0
But I'm sure I won't be dangerous for the careers of your colleagues at U of T xD hahah, I'll never publish anything worthy.
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
07/07/2021, 07:35 AM
(This post was last modified: 07/07/2021, 07:39 AM by JmsNxn.)
(05/29/2021, 01:13 AM)MphLee Wrote: Omg... at first I was starting to panic... you were not talking about kernels of operators. Everywhere in algebra, linear algebra, functional analysis, matrix theory, groups, monoids, everywhere, even in Hilb, the category of Hilbert spaces, kernel has a universal meaning: it is the set of vectors on which the operator vanishes. It is the set of zeroes of the operator.
In fact you're talking about the integral kernel, or Hilbert–Schmidt kernel, of an operator!!!
Damn, you are always pushing my knowledge to its limits. I'm seriously shocked by this, actually. I was COMPLETELY unaware of this integral presentation and I wasn't expecting that hit... I mean, I was wondering since day one what an (infinity x infinity) matrix would look like, but damn... this is far worse than your Cauchy black magic.
Heisenberg vs. Schrödinger has to do with some very fundamental duality in mathematics; I read it somewhere multiple times... So if now you are telling me that on one side of the coin there is the matrix formalism for iteration... and on the other side there are integrals... I'm done. I can feel that this links up heavily with the composition integral too...
My head is exploding... I feel like I can see the holy grail... that's heavy stuff for our iteration business.
Those damn quantum mechanics guys were fkn paranormal 0.0
ps:
About priorities... idk either. I know less than you about how this works... but I can't unsee what you hinted at 0.0
But I'm sure I won't be dangerous for the careers of your colleagues at U of T xD hahah, I'll never publish anything worthy.
I thought I'd add this, though of course you already know what I just wrote. I just don't want to over-elaborate and spoil someone else's research, so I won't talk about it more. But my paper is an example of a "proof of concept" that we can iterate iterates. Much of this is my research, so idc; but it's good to remember the following.
If \( F(z) = \alpha \uparrow^2 z \) and \( F \in E_\theta \), then,
\(
\alpha \uparrow^2 z= F(z) = \frac{d^z}{dw^z}_{w=0} \sum_{n=0}^\infty F(n)\frac{w^n}{n!}\\
\alpha \uparrow^2 \alpha \uparrow^2 z= F^{\circ 2}(z)= \frac{d^z}{dw^z}_{w=0}\sum_{n=0}^\infty F^{\circ 2}(n)\frac{w^n}{n!}\\
\alpha \uparrow^2 \alpha \uparrow^2 \alpha \uparrow^2 z= F^{\circ 3}(z)= \frac{d^z}{dw^z}_{w=0}\sum_{n=0}^\infty F^{\circ 3}(n)\frac{w^n}{n!}\\
\vdots\\
F^{\circ k}(z) = \frac{d^z}{dw^z}_{w=0}\sum_{n=0}^\infty F^{\circ k}(n)\frac{w^n}{n!}\\
\)
And if we now sum across \( k \),
\(
\text{tet}_\alpha^{s}(z) = \frac{d^s}{du^s}_{u=0}\frac{d^z}{dw^z}_{w=0} \sum_{k=0}^\infty \sum_{n=0}^\infty F^{\circ k}(n)\frac{u^kw^n}{k!n!}\\
\)
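Keep in mind that the integer values \( F(n) = \alpha\uparrow^2 n \) feeding these sums are just finite exponential towers, i.e. plain iteration of \( z \mapsto \alpha^z \). A trivial sanity sketch with the convergent base \( \alpha=\sqrt2 \) (my own choice of base, so the tower has a finite limit, 2, to check against):

```python
# Integer-height tower: iterate F(z) = alpha**z starting from 1,
# so tower(k) = alpha ↑↑ k.  With alpha = sqrt(2) the tower -> 2.
alpha = 2 ** 0.5

def tower(k):
    z = 1.0
    for _ in range(k):
        z = alpha ** z
    return z

assert abs(tower(100) - 2.0) < 1e-8   # the classical sqrt(2) tower limit
assert abs(tower(2) - alpha ** alpha) < 1e-15
```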
And now we can repeat this process to get pentation, then hexation, and so on, because each one belongs in \( E_\theta \). This builds up more of what they mean by a hyperoperation chain; we add variables, not just complexity. And then... we attempt to make an \( \omega \) kind of ordinal move and start doing,
\(
\alpha \uparrow^s z = \frac{d^{s-2}}{du^{s-2}}_{u=0} \frac{d^z}{dw^z}_{w=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{k+2} n \frac{u^kw^n}{k!n!}\\
\)
Now this is a similar construction of a "hyperoperator" chain. And there's a sneaky lemma to make this work. But remember, this is technically a product,
\(
\prod_n \frac{d^z}{dw_n^z}\\
\)
Or something like that... Honestly, Mphlee, it's on the verge of being solved. I'm pissed I'm not the one to do it; but it's there. This is, again, more applicable to Hilbert spaces than I care to admit, because I missed most of that entirely... Worst part about smart people looking at your research is when they elaborate on something right in front of your nose, lol.
But there's a diagonalization process being encoded analytically here; if you can't see it, I'll write out the sums upon sums upon sums which elaborate it. This is an infinite infinite sum; remember that. There are an infinite number of nested infinite sums, lol. I tend to brush that off, but it's really cool to think about.
Regards, James
Btw, I believe the notation they had was,
\(
\alpha \uparrow^n (z_1,z_2,...,z_n) = \alpha \uparrow^n \overline{\bf{z}}\\
\)
Where,
\(
\alpha \uparrow^{n-1}(z_1,z_2,...,z_{n-1}) = \alpha \uparrow^n (z_1,z_2,...,z_{n-1},1)\\
\)
And,
\(
\alpha \uparrow^{n-1} (z_1,...,\alpha \uparrow^{n}(z_1,...,z_{n-1},z_n)) = \alpha \uparrow^{n} (z_1,...,z_{n-1},z_n+1)\\
\)
It was something like that...
So when we make \( \alpha\uparrow^s z \) it's kind of an implicit function of infinitely many variables. It's super cool, Mphlee, just wait a while. It'll rock ur sox.
