Posts: 1,214
Threads: 126
Joined: Dec 2010
05/24/2021, 02:02 AM
(This post was last modified: 06/09/2021, 02:35 AM by JmsNxn.)
Hey, guys.
I finally decided to properly compile my old research on fractional calculus and hyperoperators. This is almost exactly what I wrote here 5 years ago, but it's finalized much better: quick and concise. Some people at U of T told me to just rewrite it from scratch and compile everything with a neat little bow. You guys may be interested.
I removed the attachment and placed the arXiv link. I've made some minor edits: I changed some of the details, included more references, and tried to flesh out the proofs more.
https://arxiv.org/abs/2106.03935
Posts: 374
Threads: 30
Joined: May 2013
Hi, I like this summary! I was already familiar with the outline of your fractional-calculus approach from 2015, but now this seems really tidy and easier to follow. I'm happy to see those two commutative diagrams! xD
About the logical structure, I just skimmed it... I need to study it line by line. But I find some passages unclear or interesting (and I spotted some typos), so I hope you can help me.
1 Introduction
p.2; "As a formal sequence, we can call a hyperoperator chain": that object is indeed more general than a hyperoperation sequence. Regardless of the initial conditions, "chain" seems a nice name. Have you received any comments at U of T on that "chain" equation?
2 Fractional derivative
p.3; after the exp-fixpoint eq.: "to complex values, and [deride]" (a typo for "derive"?)
p.4; \( S_\theta \) is missing the point 0, right? It's the area enclosed by the two rays \( t(\cos(\theta)+i\sin(\theta)) \) and \( t(\cos(\theta)-i\sin(\theta)) \), where \( t>0 \), with the point 0 removed?
Are arcs assumed to be injective, with no winding and no self-crossings? Sure, to be precise you consider an arc as just the subset to integrate over, the image of the parametrization, so that different parametrizations can map \( [0,+\infty) \) to the same arc while running along it at different velocities.
p.4; after you define the set of endofunctions, boldface \( {\mathbb S}_\theta \), you say: "Then there exists a correspondence between functions F(z)". A correspondence between those bounded F and what?
In other words, in that line are you referring to the correspondence between boldface \( {\mathbb S}_\theta \) and boldface \( {\mathbb E}_\theta \) that you define at page 6?
p.5; let's double check my understanding.
Theorem 2.1 (Euler) takes \( f\in {\mathbb S}_\theta \) and maps it to \( {{\mathfrak E}_w[f]}\in {\mathcal Hol}({{\mathbb C}_{\Re<1}},{\mathbb C}) \), i.e.
where \( \Gamma(z){{\mathfrak E}_w[f]}(z)={\sum_{n=0}^\infty}f^{(n)}(w)\frac{(\gamma(1))^n}{n!(n+z)}+\int_{\gamma[1,\infty)}f(wy)y^{z-1}dy \) and \( {{\mathfrak E}_w[f]}:{{\mathbb C}_{\Re (z)<1}}\to{\mathbb C} \).
Theorem 2.2 (Ramanujan) takes \( H\in {\mathbb E}_\theta \)(?) and maps it to \( {{\mathfrak R}[H]}\in {\mathbb C}^{{\mathbb C}} \), i.e.
where \( {{\mathfrak R}[H]}(w)={\sum_{n=0}^\infty}H(n)\frac{w^n}{n!} \) and \( {{\mathfrak R}[H]}:{\mathbb C}\to{\mathbb C} \)
and claims
(a)\( ({\mathfrak E}_0[{\mathfrak R}[H]])(z)=H(z) \) and
(b)\( ({\mathfrak E}_w[{\mathfrak R}[H]])(z)={\mathfrak R}[H\circ S^z](w) \) (S is the successor so S^z(n)=z+n)
Now we know where the transforms take values, but not exactly where they land (can we compose them?):
Thm 2.1 \( {\mathfrak E}_w:{\mathbb S}_\theta\to {\mathcal Hol}({{\mathbb C}_{\Re<1}},{\mathbb C}) \)
Thm 2.2 \( {\mathfrak R}:{\mathbb E}_\theta\to{\mathbb C}^{{\mathbb C}} \) and
(a) \( {\mathfrak E}_0\circ {\mathfrak R}=\mathrm{id} \)
Observation: (b) trivially implies (a), since setting w=0 kills the tail of the series and we keep the leading coefficient H(z). I can't fully parse (b) yet. I can say that (a) is possible iff we can feed Ramanujan into Euler, i.e. \( {\mathfrak R}:{\mathbb E}_\theta\to{\mathbb S}_\theta \): in fact you prove this later.
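As a side note, the classical form of Ramanujan's Master Theorem can be sanity-checked numerically. This is a throwaway sketch of mine, not from the paper, using the classical convention \( f(x)=\sum_k \phi(k)(-x)^k/k! \) with \( \int_0^\infty x^{s-1}f(x)\,dx=\Gamma(s)\phi(-s) \) (the paper's sign conventions may differ). Taking \( \phi\equiv 1 \) gives \( f(x)=e^{-x} \), so the integral should be \( \Gamma(s) \):

```python
import math

def mellin(f, s, upper=60.0, n=200000):
    # trapezoidal approximation of the Mellin integral \int_0^upper f(y) y^{s-1} dy
    h = upper / n
    total = 0.0
    for i in range(1, n + 1):          # the y = 0 endpoint contributes 0 for s > 1
        y = i * h
        weight = 0.5 if i == n else 1.0
        total += weight * f(y) * y ** (s - 1)
    return total * h

# phi(k) = 1, so f(x) = sum_k (-x)^k / k! = e^{-x}; the master theorem then
# predicts that the Mellin integral equals Gamma(s) * phi(-s) = Gamma(s).
s = 1.5
lhs = mellin(lambda y: math.exp(-y), s)
rhs = math.gamma(s)
print(lhs, rhs)  # the two agree to several digits
```

The truncation at 60 and the step count are arbitrary; they just make the quadrature error small next to the identity being tested.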
p.7; By theorem 2.3, \( {\mathfrak E}_0:{\mathbb S}_\theta\to{\mathbb E}_\theta \). When you say "and conversely", you mean that \( {\mathfrak R}:{\mathbb E}_\theta\to{\mathbb S}_\theta \) also holds, right?
Why don't you have to prove theorem 2.3 BEFORE you can even claim, in theorem 2.2, that you can apply \( {\mathfrak E}_0 \) to \( f(w)={{\mathfrak R}[H]}(w) \)?
In fact, if thm 2.3 is true then we can apply thm 2.2 eq. (a), i.e. \( {\mathfrak E}_0\circ {\mathfrak R}=\mathrm{id}_{{\mathbb E}_\theta} \). This equation alone implies that:
 \( {\mathfrak E}_0 \) is surjective (every function in boldface E is the differintegral at w=0 of some boldface S function);
 \( {\mathfrak R} \) is injective (if two boldface E functions define the same auxiliary function then they are the same function).
But thm 2.4 also adds that those two maps are mutually inverse, hence \( {{\mathbb E}_\theta}\simeq {{\mathbb S}_\theta} \): they are in bijection.
Question 1: for which \( \theta \) are those spaces in bijection?
Question 2: Does this bijection preserve some structure? Idk... are those function spaces closed under pointwise sum, scalar multiplication, pointwise multiplication? Do they have a metric or topological structure (a system of open sets), a norm?
Question 3: taking \( \theta<\kappa \) we have \( S_\theta\subseteq S_\kappa \). What is the relationship between \( {\mathbb S}_\theta \) and \( {\mathbb S}_\kappa \), or between \( {\mathbb E}_\theta \) and \( {\mathbb E}_\kappa \)?
I'll go on to the other sections as soon as I can.
Regards
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 1,214
Threads: 126
Joined: Dec 2010
05/25/2021, 10:47 PM
(This post was last modified: 05/25/2021, 10:50 PM by JmsNxn.)
(05/25/2021, 07:32 PM)MphLee Wrote: Question 1: for which \( \theta \) are those spaces in bijection?
Question 2: Does this bijection preserve some structure? Idk... are those function spaces closed under pointwise sum, scalar multiplication, pointwise multiplication? Do they have a metric or topological structure (a system of open sets), a norm?
Question 3: taking \( \theta<\kappa \) we have \( S_\theta\subseteq S_\kappa \). What is the relationship between \( {\mathbb S}_\theta \) and \( {\mathbb S}_\kappa \), or between \( {\mathbb E}_\theta \) and \( {\mathbb E}_\kappa \)?
I'll go on to the other sections as soon as I can.
Regards
Hey, Mphlee, I'll answer these questions to the best of my ability.
The people at U of T called it a hyperoperator chain; that's not my terminology. I know it can be a tad confusing for this forum, but that's what they call it, so I stuck with the terminology.
You don't need to include zero; but go right ahead and include it. As these functions are presumed to be entire, the integral at zero is always defined. We are only worried about the behaviour as \( w \to \infty \) with \( \arg(w) \le \theta \) to ensure the integral converges. As to what kind of arc: they can self-intersect; they can loop; they can do whatever; so long as the initial point is \( 0 \), the end point is \( \infty \), and they are contained in \( S_\theta \). Since these functions are holomorphic and \( S_\theta \) is simply connected, the integral only depends on the initial point and the end point.
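That path-independence is easy to see numerically; here's a quick throwaway sketch (stdlib Python; the integrand e^{-z} and the two polyline paths from 0 to 5 are arbitrary choices of mine):

```python
import cmath

def path_integral(f, pts, n=20000):
    # trapezoidal line integral of f along the polyline through pts
    total = 0.0 + 0.0j
    for a, b in zip(pts, pts[1:]):
        h = (b - a) / n
        for i in range(n):
            z0, z1 = a + i * h, a + (i + 1) * h
            total += 0.5 * (f(z0) + f(z1)) * h
    return total

f = lambda z: cmath.exp(-z)
straight = path_integral(f, [0, 5])        # straight segment 0 -> 5
bent = path_integral(f, [0, 2.5 + 2j, 5])  # detour through 2.5 + 2i
print(abs(straight - bent))  # essentially zero: only the endpoints matter
```

Both values also match the antiderivative difference \( 1 - e^{-5} \), which is exactly the point: for a holomorphic integrand on a simply connected region, the path is irrelevant.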
Yes, by correspondence I meant \( \mathbb{E}_\theta \) is virtually the same as \( \mathbb{S}_\theta \); one takes derivatives, the other shifts the variable.
(a) and (b) are exactly as I intend to say it. So yes, your understanding of these seems correct.
I guess your questions after that are about how I order the theorems. I guess it's just personal preference. You can always feed Ramanujan into Euler; that can be done even more generally than how I do it. I'm restricting the cases where you can do this. Because it garners an isomorphic relationship.
1.)
I'm a little confused by your first question; \( \mathbb{E}_\theta \leftrightarrow \mathbb{S}_\theta \) bijectively. And additionally, \( \mathbb{S}_\theta \subset \mathbb{S}_\kappa \) for \( \theta < \kappa \); just as well with \( \mathbb{E}_\theta \subset \mathbb{E}_\kappa \). They are in bijection only for the same \( \theta \); otherwise it's a different kind of map.
2.)
This is a good question, with a pretty deep answer. First of all, \( f,g \in \mathbb{S}_\theta \) implies that \( f+g \in \mathbb{S}_\theta \), and if \( F , G \in \mathbb{E}_\theta \) then \( F+G \in \mathbb{E}_\theta \); so this is a linear isomorphism. It's actually a linear isomorphism between Hilbert spaces; but it's a little difficult to do this exactly. This would mean there is a norm, and there even is an inner product; but it's spurious to this paper. I'd have to dust off my copy of Linear Operators on Hilbert Spaces to remind myself what exactly these are; can't recall off the top of my head.
Now, \( \int_\gamma f(y)g(y) \,dy < \infty \); which happens for all \( f,g \in \mathbb{S}_\theta \); and therefore if \( f,g \in \mathbb{S}_\theta \) then \( f \cdot g \in \mathbb{S}_\theta \). As to what happens when you apply the mapping to the product; you get a binomial convolution.
\(
\frac{d^{z}}{dw^{z}} f(w)g(w) = \sum_{k=0}^\infty \binom{z}{k} f^{(k)}(w) \frac{d^{z-k}}{dw^{z-k}} g(w) = H(z)\\
\)
I didn't prove this in this paper; and this result is not mine. It's commonly known as the binomial theorem (I think?); you can find it in any textbook on fractional calculus; it's usually one of the first things you prove. It's a little difficult; but in the best scenarios I can prove it pretty quickly because:
\(
H(n) = \sum_{k=0}^n\binom{n}{k} f^{(k)}(w) g^{(n-k)}(w) = \frac{d^{n}}{dw^{n}} f(w)g(w) \\
\)
So if you can show \( H \in \mathbb{E}_\theta \), they're equivalent by the Identity Theorem you get using Ramanujan's master theorem. This depends on how well \( g \) or \( f \) behave, however. This convolution won't work generally for all \( f,g \), because \( \frac{d^{z-k}}{dw^{z-k}} g(w) \) may not exist.
You can then, write this as a convolution,
\(
\frac{d^{z}}{dw^{z}}\Big|_{w=0} f(w)g(w) = F * G\\
\)
Where sometimes this has the above representation; not always though. What you always get though; which again, isn't in the paper; is the indefinite sum representation.
\(
F * G = \sum_{j=0}^z \binom{z}{j}F(j)G(z-j)\\
\)
This representation was more carefully studied in the indefinite sum paper on my arXiv that's referenced in this paper. Though I use a slightly less direct isomorphism there (forgive me, I wrote that paper a long time ago; but it still gets the job done).
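At integer orders that convolution reduces to the ordinary Leibniz rule, which is easy to check numerically; a toy sketch (my own choice of \( f(w)=e^{2w} \), \( g(w)=e^{3w} \), for which \( \frac{d^n}{dw^n}(fg)=5^n e^{5w} \)):

```python
import math

# f(w) = e^{2w}, g(w) = e^{3w}: f^{(k)}(w) = 2^k e^{2w}, g^{(m)}(w) = 3^m e^{3w},
# while (fg)(w) = e^{5w}, so the n-th derivative of the product is 5^n e^{5w}.
def H(n, w):
    return sum(math.comb(n, k) * 2**k * math.exp(2 * w) * 3**(n - k) * math.exp(3 * w)
               for k in range(n + 1))

w = 0.3
for n in range(6):
    direct = 5**n * math.exp(5 * w)  # since (2 + 3)^n = 5^n by the binomial theorem
    print(n, H(n, w), direct)
```

The two columns agree up to rounding; the fractional-order statement is the analytic continuation of exactly this identity.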
Going in the other direction is more difficult. Recall that \( F \in \mathbb{E}_\theta \) implies that \( F(z) \le C e^{(\pi/2 - \theta)|\Im(z)|} \) as \( \Im(z) \to \pm \infty \). So this means, if \( F \in \mathbb{E}_\theta \) and \( G \in \mathbb{E}_\kappa \), then \( F(z)G(z) \le Me^{(\pi - \theta - \kappa)|\Im(z)|} \); which may or may not belong to an \( \mathbb{E}_\tau \) depending on what \( \theta \) and \( \kappa \) are. If they do belong to one, then when you put it in the space \( \mathbb{S}_\tau \); then,
\(
h(w) = \sum_{n=0}^\infty F(n)G(n) \frac{w^n}{n!}\\
\)
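A toy check of that series (my own choice of \( F(n)=2^n \) and \( G(n)=3^n \), ignoring the \( \mathbb{E}_\theta \) bounds; this is purely about the series, where the pointwise product sequence \( 6^n \) gives \( h(w)=e^{6w} \)):

```python
import math

def h(F, G, w, terms=60):
    # h(w) = sum_n F(n) G(n) w^n / n!, truncated
    return sum(F(n) * G(n) * w**n / math.factorial(n) for n in range(terms))

w = 0.7
val = h(lambda n: 2**n, lambda n: 3**n, w)
print(val, math.exp(6 * w))  # the product sequence is 6^n, so h(w) = e^{6w}
```

So on the Ramanujan side, multiplying \( F \) and \( G \) pointwise is a Hadamard-type product of the exponential generating functions.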
3.)
As to the relationship between varying \( \theta \) and \( \kappa \): the best I have is that the maximal sector \( S_\theta \) in which \( f \) converges gives the maximal set \( \mathbb{S}_\theta \) it belongs to. And additionally, the maximal set \( F \) belongs in is \( \mathbb{E}_\theta \): the maximal value \( \theta \) for which \( \Gamma(z)F(z) \le Ce^{-\theta |\Im(z)|} \) gives the maximal set \( \mathbb{E}_\theta \) with \( F \in \mathbb{E}_\theta \). I'm not sure what else you could be asking here..? Am I missing something?
Regards, James
Posts: 374
Threads: 30
Joined: May 2013
05/26/2021, 12:01 AM
(This post was last modified: 05/26/2021, 12:58 AM by MphLee.)
(05/25/2021, 10:47 PM)JmsNxn Wrote: The people at U of T called it a hyperoperator chain; that's not my terminology. I know it can be a tad confusing for this forum, but that's what they call it, so I stuck with the terminology. Yeah, but then I'm curious... how the hell did they know about hyperoperations? I'm pretty sure that serious mathematicians never talk about hyperoperations, and the term "hyperoperations" is very niche and already used by hyperstructure theory (the theory of groups with a multivalued operation).
So if you tell me that they used that terminology for a reason... I'm pretty excited to hear more about that.
You know... 2015... 6 years of googling things, and the only people who write that chain equation are you, Rubtsov and Romerio, 3/4 Tetration Forum users, and myself.
Quote:I guess your questions after that are about how I order the theorems. I guess it's just personal preference. You can always feed Ramanujan into Euler; that can be done even more generally than how I do it.
Mhh idk, I'll study this better. I had the impression that to ensure you could apply Euler to f, you had to show FIRST that f=R[H] was in boldface E. I'm sure I have to read and understand all those conditions better (and probably go back to your old papers).
Quote:1.)
I'm a little confused by your first question;
I apologize... I'm sure I miss something crucial about convergence but I was thinking the following
I was asking if for EVERY \( \theta \in [0,\pi] \) we have \( \mathbb{E}_\theta \simeq \mathbb{S}_\theta \)
The existence of that chain of inclusions is interesting... It should mean that we can extend \( {\mathfrak E}_w \) and \( {\mathfrak R} \) to a bigger domain. To be clear, observe that if the origin of the complex plane is included and \( \theta \le \kappa \) implies \( S_\theta \subseteq S_\kappa \), then \( \displaystyle\bigcup_{\theta\in[0,\pi)}S_\theta ={\mathbb C}\setminus(-\infty,0] \) and \( S_\pi=\displaystyle\bigcup_{\theta\in[0,\pi]}S_\theta ={\mathbb C} \)
From the monotone chain of inclusion also follows that for every \( \theta <\pi \) we have \( \mathbb{E}_\theta \subset \mathbb{E}_\pi \) and \( \mathbb{S}_\theta \subset \mathbb{S}_\pi \)
So you can't possibly mean that every theta is ok... maybe only for \( \theta \in [0,\pi) \)?
So the idea is the following... if \( \theta \le \kappa \), consider the two functions \( {\mathfrak E}^\theta_w:\mathbb{S}_\theta\to \mathbb{E}_\theta \) and \( {\mathfrak E}^\kappa_w:\mathbb{S}_\kappa\to \mathbb{E}_\kappa \); does restricting \( {\mathfrak E}^\kappa_w \) to \( \mathbb{S}_\theta \) give us \( {\mathfrak E}^\theta_w \)?
In symbols, \( ({\mathfrak E}^\kappa_w)|_{{\mathbb S}_\theta}={\mathfrak E}^\theta_w \)
Diagrammatically \( \mathbb{S}_\theta\overset{{\mathfrak E}^\theta_w}{\longrightarrow} \mathbb{E}_\theta\overset{\subseteq}{\longrightarrow} \mathbb{E}_\kappa \) is the same as \( \mathbb{S}_\theta \overset{\subseteq}{\longrightarrow} \mathbb{S}_\kappa\overset{{\mathfrak E}^\kappa_w}{\longrightarrow} \mathbb{E}_\kappa \)
If this condition works we can just work with the spaces \( {\mathbb S}:=\displaystyle\bigcup_{\theta\in[0,\pi)}{\mathbb S}_\theta \) and \( {\mathbb E}:=\displaystyle\bigcup_{\theta\in[0,\pi)}{\mathbb E}_\theta \), because every function in there satisfies your criterion for some \( \theta \), by definition.
Quote:2.)
This is a good question, that has a pretty deep answer.
[...]
\(
h(w) = \sum_{n=0}^\infty F(n)G(n) \frac{w^n}{n!}\\
\)
Woooa... that has to be important. I have some gut feeling that this is very important...
I'll keep it for myself now.... but I guess I have seen this somewhere before...
Posts: 1,214
Threads: 126
Joined: Dec 2010
(05/26/2021, 12:01 AM)MphLee Wrote: Yeah, but then I'm curious... how the hell did they know about hyperoperations? I'm pretty sure that serious mathematicians never talk about hyperoperations, and the term "hyperoperations" is very niche and already used by hyperstructure theory (the theory of groups with a multivalued operation).
So if you tell me that they used that terminology for a reason... I'm pretty excited to hear more about that.
You know... 2015... 6 years of googling things, and the only people who write that chain equation are you, Rubtsov and Romerio, 3/4 Tetration Forum users, and myself.
Quote:I guess your questions after that are about how I order the theorems. I guess it's just personal preference. You can always feed Ramanujan into Euler; that can be done even more generally than how I do it.
Mhh idk, I'll study this better. I had the impression that to ensure you could apply Euler to f, you had to show FIRST that f=R[H] was in boldface E. I'm sure I have to read and understand all those conditions better (and probably go back to your old papers).
Quote:1.)
I'm a little confused by your first question;
I apologize... I'm sure I miss something crucial about convergence but I was thinking the following
I was asking if for EVERY \( \theta \in [0,\pi] \) we have \( \mathbb{E}_\theta \simeq \mathbb{S}_\theta \)
The existence of that chain of inclusions is interesting... It should mean that we can extend \( {\mathfrak E}_w \) and \( {\mathfrak R} \) to a bigger domain. To be clear, observe that if the origin of the complex plane is included and \( \theta \le \kappa \) implies \( S_\theta \subseteq S_\kappa \), then \( \displaystyle\bigcup_{\theta\in[0,\pi)}S_\theta ={\mathbb C}\setminus(-\infty,0] \) and \( S_\pi=\displaystyle\bigcup_{\theta\in[0,\pi]}S_\theta ={\mathbb C} \)
From the monotone chain of inclusion also follows that for every \( \theta <\pi \) we have \( \mathbb{E}_\theta \subset \mathbb{E}_\pi \) and \( \mathbb{S}_\theta \subset \mathbb{S}_\pi \)
So you can't possibly mean that every theta is ok... maybe only for \( \theta \in [0,\pi) \)?
So the idea is the following... if \( \theta \le \kappa \), consider the two functions \( {\mathfrak E}^\theta_w:\mathbb{S}_\theta\to \mathbb{E}_\theta \) and \( {\mathfrak E}^\kappa_w:\mathbb{S}_\kappa\to \mathbb{E}_\kappa \); does restricting \( {\mathfrak E}^\kappa_w \) to \( \mathbb{S}_\theta \) give us \( {\mathfrak E}^\theta_w \)?
In symbols, \( ({\mathfrak E}^\kappa_w)|_{{\mathbb S}_\theta}={\mathfrak E}^\theta_w \)
Diagrammatically \( \mathbb{S}_\theta\overset{{\mathfrak E}^\theta_w}{\longrightarrow} \mathbb{E}_\theta\overset{\subseteq}{\longrightarrow} \mathbb{E}_\kappa \) is the same as \( \mathbb{S}_\theta \overset{\subseteq}{\longrightarrow} \mathbb{S}_\kappa\overset{{\mathfrak E}^\kappa_w}{\longrightarrow} \mathbb{E}_\kappa \)
If this condition works we can just work with the spaces \( {\mathbb S}:=\displaystyle\bigcup_{\theta\in[0,\pi)}{\mathbb S}_\theta \) and \( {\mathbb E}:=\displaystyle\bigcup_{\theta\in[0,\pi)}{\mathbb E}_\theta \), because every function in there satisfies your criterion for some \( \theta \), by definition.
Quote:2.)
This is a good question, that has a pretty deep answer.
[...]
\(
h(w) = \sum_{n=0}^\infty F(n)G(n) \frac{w^n}{n!}\\
\)
Woooa... that has to be important. I have some gut feeling that this is very important...
I'll keep it for myself now.... but I guess I have seen this somewhere before...
As to your first point. I shared a lot of work at U of T; and they started calling these things hyperoperation chains (at least the people I talked to). There isn't anything published as of yet; but they've done quite a few things similarly to me. They never published, I presume because I have priority over these fractional calculus things; at least from their perspective. I kind of left the scene for a while; and they were upset I never published half the things they sort of knew about through me. It's why I've started publishing all over again; sort of like a code dump of everything I've done. Largely because some professors told me to.
They did do some stuff with,
\(
\alpha \uparrow^s z\\
\)
But I rarely see them (especially with covid right now), and I presume it's slow going. But they seemed confident my original formula for it from 5 or 6 years ago is the correct one (though my original proof was incorrect):
\(
\alpha \uparrow^s z = \frac{d^{s-1}}{dw^{s-1}} \frac{d^{z-1}}{du^{z-1}} \Big|_{w=0,\,u=0} \sum_{n=0}^\infty \sum_{k=0}^\infty \alpha \uparrow^{n+1}(k+1) \frac{w^nu^k}{n!k!}\\
\)
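For reference, the integer lattice \( \alpha \uparrow^{n+1}(k+1) \) feeding that double series can be generated by the usual recursion; a minimal sketch (Knuth-arrow convention assumed, integer ranks only; the whole point of the formula is interpolating this lattice):

```python
def arrow(a, s, z):
    # a ^s z for integers s >= 1, z >= 1, with s arrows:
    # a ^1 z = a**z, a ^s 1 = a, and a ^s z = a ^(s-1) (a ^s (z-1))
    if s == 1:
        return a ** z
    if z == 1:
        return a
    return arrow(a, s - 1, arrow(a, s, z - 1))

print(arrow(2, 1, 3), arrow(2, 2, 3), arrow(2, 3, 3))  # 8 16 65536
```

Only tiny inputs terminate in reasonable time, of course; the values explode immediately, which is exactly why the analytic machinery is needed.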
They were also the ones who encouraged me to publish all this infinite composition stuff. They were pretty shocked when I explained the residual theorem to them (just like you were).
As to your second point: I would absolutely avoid talking about \( \theta \in (\pi/2,\pi) \) (the value \( \theta = \pi \) is out of the question too, because then the function is bounded on \( \mathbb{C} \) and so just constant). If you want to include sectors of this length, things can get a bit more complicated. In fact, it's a good amount trickier in these cases. So, I only play with \( \theta \in (0,\pi/2] \). Not that you can't use these cases; but if my memory serves me correctly, the functional properties change a fair amount.
But yes, for every \( \theta \in (0,\pi) \) we have the correspondence \( \mathbb{E}_\theta \simeq \mathbb{S}_\theta \).
We absolutely have the restriction you are asking. Yes, we can view these as operators acting on restricted spaces and they're equivalent. I mean,
\( F \in \mathbb{E}_\theta \)
\(
f(w) = \sum_{n=0}^\infty F(n) \frac{w^n}{n!}\\
\)
Has no dependency on \( \theta \) so the restriction is arbitrary.
Additionally;
\( f \in \mathbb{S}_\theta \)
Then,
\(
\Gamma(z) F(z) = \int_0^\infty f(y)y^{z-1}\,dy\\
\)
Which again, has no mention of \( \theta \). The variable only appears to describe the asymptotics of \( f \) and \( F \). Where for \( f \) it determines the size of the sector of its convergence. And for \( F \) it describes its possible growth type as \( \Im(z) \to \pm \infty \).
And yes, you can absolutely work with,
\(
\mathbb{E} = \bigcup_{\theta \in (0,\pi)} \mathbb{E}_\theta
\)
As I usually only care about \( \theta = \pi/2 \) or \( \theta < \pi/2 \), I don't pay much mind to that. Remember though, we do not want \( \theta = 0 \). This is no good. The most important part is that we have an open sector; when \( \theta = 0 \) we just have a line, and this will produce anomalies. Particularly, it'll screw things up when you want to make the correspondence, because you cannot really "pull out" any asymptotic data.
What you are doing with this union is much more similar to the classical treatment of Ramanujan's Master Theorem. I don't like this treatment, largely because it avoids explicitly stating how the differintegrated function is bounded. And it's very helpful to know how it's bounded. If we consider this union, we're not being explicit about where the integral converges; and we're stuck only with the absolute knowledge that,
\(
\Gamma(z)F(z) = \int_0^\infty f(y)y^{z-1}\,dy\\
\)
And for some sector it works. This will produce problems when you want to do more advanced things in functional analysis with these things (which is moreso needed for the function \( \alpha \uparrow^s z \), or for defining a convolution, or for introducing the indefinite sum). But yes, you are correct.
As to your last point, I could never find any use for this thing. A long time ago I used to try and try to create a convergence factor. So that, for arbitrary \( F \), not necessarily in \( \mathbb{E} \), there exists some \( G \in \mathbb{E} \) such that \( F\cdot G \in \mathbb{E} \). Then we would get,
\(
\Gamma(z)F(z) = \frac{1}{G(z)}\int_0^\infty h(y)y^{z-1}\,dy\\
\)
But I could never find anything meaningful...
I sort of settled that there's no way to force a function to be in \( \mathbb{E} \); it just is or it isn't.
Regards, James
Posts: 374
Threads: 30
Joined: May 2013
05/26/2021, 10:50 AM
(This post was last modified: 05/26/2021, 11:52 AM by MphLee.)
(05/26/2021, 02:34 AM)JmsNxn Wrote: As to your first point. I shared a lot of work at U of T; and they started calling these things hyperoperation chains (at least the people I talked to). There isn't anything published as of yet; but they've done quite a few things similarly to me. They never published, I presume because I have priority over these fractional calculus things; at least from their perspective. I kind of left the scene for a while; and they were upset I never published half the things they sort of knew about through me. It's why I've started publishing all over again; sort of like a code dump of everything I've done. Largely because some professors told me to.
They did do some stuff with,
\(
\alpha \uparrow^s z\\
\)
But I rarely see them (especially with covid right now), and I presume it's slow going. But they seemed confident my original formula for it from 5 or 6 years ago is the correct one (though my original proof was incorrect):
\(
\alpha \uparrow^s z = \frac{d^{s-1}}{dw^{s-1}} \frac{d^{z-1}}{du^{z-1}} \Big|_{w=0,\,u=0} \sum_{n=0}^\infty \sum_{k=0}^\infty \alpha \uparrow^{n+1}(k+1) \frac{w^nu^k}{n!k!}\\
\)
They were also the ones who encouraged me to publish all this infinite composition stuff. They were pretty shocked when I explained the residual theorem to them (just like you were).
Thank you, now I understand all the business with the theta angles.
About your peers at U of T: it is remarkable if they found it interesting, and surprising that "they've done quite a few things similarly to me". I never had many chances to talk with mathematicians... and all the hints told me that this topic is completely unknown and irrelevant. Obviously, as you can imagine from my last draft and recent threads, I can already, at least partially, trace this topic back to the backbone of mathematics; I already begin to see that it touches many mainstream topics.
What do you think about the centrality of hyperoperation chain-like objects and their possible continuous extension to non-discrete chains (paths?)?
What do you think the reception of these ideas was, and what was the atmosphere around those chain objects? Did they treat them as exotic and niche items?
Addendum
I read the last sections of the paper. I really can't find obscure points. All the machinery lies in the first two theorems. I'll give myself time to digest it.
Thank you!
Best regards!
Posts: 1,214
Threads: 126
Joined: Dec 2010
(05/26/2021, 10:50 AM)MphLee Wrote: What do you think about the centrality of hyperoperation chain-like objects and their possible continuous extension to non-discrete chains (paths?)?
What do you think the reception of these ideas was, and what was the atmosphere around those chain objects? Did they treat them as exotic and niche items?
Honestly, their reaction was mostly that this is some really wacky and weird stuff. But they thought it was cool. They were more interested in taking matrices and arbitrary operators and doing things like this:
\(
\frac{d^{z}}{dw^z} e^{Aw} = A^z e^{Aw}\\
\)
Honestly, I don't see what's so cool about that; a fractional power of a matrix seems easy to do; but apparently they like that, go figure.
They were more receptive than you may think. A lot of them had already brushed up on these things, especially the comp sci people. They're more of the opinion that this looks way too hard and there's no way we could ever do it, than that it's unimportant or niche. They especially like the \( \Gamma \) which pops up everywhere, lol.
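For what it's worth, the diagonalizable case really is that easy; a hand-rolled sketch (my own 2x2 example with A = [[2,1],[1,2]], whose eigendecomposition is hard-coded; then A^z = P diag(3^z, 1^z) P^T):

```python
# A = [[2, 1], [1, 2]] is symmetric with eigenvalues 3 and 1 and orthonormal
# eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2); so A^z = P diag(3^z, 1) P^T.
def A_pow(z):
    l1, l2 = 3.0 ** z, 1.0 ** z
    a = 0.5 * (l1 + l2)  # diagonal entries of P diag(l1, l2) P^T
    b = 0.5 * (l1 - l2)  # off-diagonal entries
    return [[a, b], [b, a]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

half = A_pow(0.5)
print(matmul(half, half))  # recovers A = [[2, 1], [1, 2]] up to rounding
```

With positive eigenvalues there are no branch issues, which is why the matrix version feels easy compared to the transcendental one.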
Honestly; I still have no idea how to construct a function like \( \alpha \uparrow^s z \); and it's not so much that you have to prove the thing converges; it's the domain arguments needed to show the functional equation that are a real problem. I gave up a long time ago trying to make that work. But I still believe it to be a very important subject.
Sincere Regards, James
Posts: 374
Threads: 30
Joined: May 2013
05/27/2021, 08:00 PM
(This post was last modified: 05/28/2021, 04:05 PM by MphLee.)
(05/26/2021, 11:17 PM)JmsNxn Wrote: \(
\frac{d^{z}}{dw^z} e^{Aw} = A^z e^{Aw}\\
\)
Honestly, I don't see what's so cool about that; a fractional power of a matrix seems easy to do; but apparently they like that, go figure. I guess I can understand why. Iterating matrices is the key to extending every linear process. Btw... Abel and Schroeder iterate by achieving a linearization of a nonlinear dynamics, so it is understandable.
But to me it is too narrow. Just this summer I wrote a short paper in Italian (*) where I define the continuous extension of the Fibonacci sequence only via linear algebra and eigentheory. The method is pretty standard and it gives (page 8 ) the usual analytic closed form of Fibonacci. The interesting thing is that I did it from scratch, starting from the formal definition of recursion in recursion theory.
Thanks to that paper I was able to fully appreciate that Fibonacci is defined by a kind of recursion that we could call linear, and that linear recursion can be translated to "applying a matrix": in other words, a recursion that IS NOT iteration can be translated into exponentiation of a matrix.
(*) It's short, but just look at the formulas to get a taste. One day I may translate it.
(2020 07 30 3) Successioni ricorsive ed autoteoria.pdf (Size: 421.82 KB / Downloads: 336)
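The eigen-method can be sketched in a few lines (my own toy version, not the paper's notation: diagonalizing the Fibonacci matrix [[1,1],[1,0]] gives the eigenvalues phi and psi and Binet's closed form, which is the analytic form mentioned above):

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # eigenvalues of the Fibonacci matrix [[1,1],[1,0]]
PSI = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # Binet's formula, read off directly from the eigendecomposition
    return (PHI ** n - PSI ** n) / math.sqrt(5)

def fib_rec(n):
    # the defining linear recursion, for comparison
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_rec(n) for n in range(10)])
print([round(fib_closed(n)) for n in range(10)])  # same list
```

The continuous extension then amounts to letting n be a real (or complex) variable in the closed form, handling the sign of PSI appropriately.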
Quote:They especially like the \( \Gamma \) which pops up everywhere, lol.
The gamma popping up everywhere there is curious... but idk if it is an artifact of the method or if it is structural in some sense. What do you think? What I know is that a long time ago I was shown a graph of a linear or cubic approximation of tetration plotted on real arguments, showing the real part and the imaginary part. Before the singularity at -2, the imaginary part did look a lot like the gamma function...
Quote:I gave up a long time ago trying to make that work. But I still believe it to be a very important subject.
Btw, it is a very hard object; I'm not surprised that you were not able to make it work. I strongly believe that there is some hidden structure, some hidden regularities to be discovered, and functional identities on the ranks have to be found, before we could "declare war on the sky". One reason for my belief is the following: we have yet to discover the intrinsic nature of abstract iteration, and that is just level 1 of rank theory. Rank theory is applying abstract iteration to abstract iteration itself. But this could sound empty to many ears.
There is another good reason to expect extraordinary obstacles. Let me illustrate this as a story made up of four layers/moments.
Quote:At the beginning there's nothing, no difference. We have to choose a point and make the first distinction.
0
let rank 0 be conceptually our base function, our unit of measure of linearity (the +1). It is a single point.
1
Then rank 1 is conceptually the totality of our ways of translating things (or iterates), and we should think of it as our base number system and our base geometry. So we've built numbers out of a unit. A kind of geometric object, a "line".
2
At this point the automorphisms of our geometric object (the modes of interacting with itself) are the rank 2 functions. We can think of rank 2 as the arithmetic, or as the scaling operations over our base geometry/number system.
3
So we now have rank 1 (the geometric level) and on top of that we have built a new layer, rank 2 (the arithmetic, multiplicative level). The first is made of lines and linear translations, the second of scalings and rotations. The link between translating and rotating is... yes, exponentiation.
So morphisms from rank 1 to rank 2 give us the world of rank 3.
This seems a kind of metaphysical theogony. An ontogenesis that goes from nothing to complexity.
I hope you can clearly see that there is something very, very deep lurking here, something that "just interpolating" (even analytically) can't solve. I see this as an obstacle to a real non-integer extension, because here we have to first generalize a chain of phenomena of which only the first three account for 60/70% of all existing mathematics.
Sincere regards,
V.C.
Edit: let's provide some beef in addition to the juicy smoke.
At the beginning we have a bunch of composable functions \( (G,\circ, id_G) \) with the usual sets \( [f,g]_G:=\{x\in G \mid xf=gx\} \)
0 Fix a "unit" element \( s\in G \). Define the subset \( {\mathcal E}^0_s\subseteq G \) as \( {\mathcal E}^0_s:=\{s\} \)
1 Define the subgroup \( {\mathcal E}^1_s\subseteq G \) as \( {\mathcal E}^1_s:=[s,s]_G \). Clearly \( s^n\in\mathcal E^1_s \) for every n.
2 Define the set \( {\mathcal E}^2_s\subseteq G \) as \( {\mathcal E}^2_s:=[s,{\mathcal E}^1_s]_G \). Clearly in some cases there exists a \( \mu_n\in{\mathcal E}^2_s \) s.t. \( \mu_n s=s^n\mu_n \). Clearly \( \mu_n \) is a multiplication-like function.
3 Define the set \( {\mathcal E}^3_s\subseteq G \) as \( {\mathcal E}^3_s:=[s,{\mathcal E}^2_s]_G \). Clearly in some cases there exists an \( \varepsilon_n\in{\mathcal E}^3_s \) s.t. \( \varepsilon_n s=\mu_n\varepsilon_n \). Clearly \( \varepsilon_n \) is an exponentiation-like function.
We define \( {\mathcal E}^{\sigma+1}_s:=[s,{\mathcal E}^\sigma_s]_G \) and we trivially have \( {\mathcal E}^{\sigma}_s\subseteq {\mathcal E}^{\sigma+1}_s \)
The union \( \displaystyle \bigcup_{\sigma=0}^\infty{\mathcal E}^{\sigma}_s \) can be seen as the class of primitive recursive elements relative to s.
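A concrete toy model of the first levels (my own choice of witnesses: take G to be maps on the integers with s(x)=x+1; then mu_n(x)=nx and eps_n(x)=n^x satisfy the conjugation equations above, which we can check on samples):

```python
n = 3
s = lambda x: x + 1
mu = lambda x: n * x      # multiplication-like: mu o s = s^n o mu (s^n adds n)
eps = lambda x: n ** x    # exponentiation-like: eps o s = mu o eps

for x in range(6):
    assert mu(s(x)) == mu(x) + n    # n(x+1) = nx + n
    assert eps(s(x)) == mu(eps(x))  # n^(x+1) = n * n^x
print("conjugation identities hold on the sampled points")
```

Of course a finite sample proves nothing by itself; here both identities also hold identically by the algebra in the comments.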
Posts: 1,214
Threads: 126
Joined: Dec 2010
05/28/2021, 07:43 PM
(This post was last modified: 05/28/2021, 10:33 PM by JmsNxn.)
Heh, I'm surprised how well I understood your Italian, lol. I guess the years of French schooling paid off; gotta love the ubiquity of Romance languages. That was an interesting read. It seems like a very good introduction to what you are trying to do.
Posts: 374
Threads: 30
Joined: May 2013
Ah! I saw you dropped some French words here and there in your papers, so I tried to share it (there are large Italian communities in Canada).
Btw, I'm not sure it is good as an intro to hyperoperations. But it could be the first step toward the unification of Gottfried's matrix methods and the compositional methods for iteration.
The paper arose initially as an attempt to answer a real analysis question a friend asked me, a challenge. He asked me when a Fibonacci sequence defined with different initial values was divergent or convergent, and for which initial values we have convergence. In that paper I show that a generalized Fibonacci sequence converges only if the initial condition lies in the eigenspace associated with the silver ratio (the golden and silver ratios being the two eigenvalues of the "Fibonacci matrix").
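That convergence criterion is easy to see numerically; a sketch (toy code of mine: psi = (1-sqrt(5))/2 is the eigenvalue of the Fibonacci matrix with modulus below 1, and an initial vector in its eigenspace decays while a generic one blows up):

```python
import math

PSI = (1 - math.sqrt(5)) / 2   # the contracting eigenvalue, |PSI| < 1

def run(a0, a1, steps=40):
    # iterate the Fibonacci recursion a_{k+1} = a_k + a_{k-1}
    for _ in range(steps):
        a0, a1 = a1, a0 + a1
    return a1

in_eigenspace = run(1.0, PSI)  # (PSI, 1) is an eigenvector of [[1,1],[1,0]]
generic = run(1.0, 1.0)        # any other initial condition diverges
print(abs(in_eigenspace), generic)  # tiny versus huge
```

Note that in floating point the eigenspace orbit eventually picks up a growing golden-ratio component from rounding, so the decay is only visible for moderately many steps.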