(05/26/2021, 11:17 PM)JmsNxx Wrote:
\( \frac{d^{z}}{dw^z} e^{Aw} = A^z e^{Aw} \)

I guess I can understand why. Iterating matrices is the key to extending every linear process. Btw, Abel and Schroeder iterate by achieving a linearization of a non-linear dynamic, so it is understandable.
Honestly, I don't see what's so cool about that; a fractional power of a matrix seems easy to do; but apparently they like that... go figure.
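Computing a fractional matrix power is indeed routine; here is a minimal sketch using SciPy's `fractional_matrix_power` (the example matrix is my own choice, not from the post):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# An arbitrary diagonalizable matrix with positive eigenvalues
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Principal fractional power A^(1/2)
half = fractional_matrix_power(A, 0.5)

# Composing the half power with itself recovers A
print(np.allclose(half @ half, A))  # True
```

For matrices with negative or complex eigenvalues the principal branch is still defined, but the result is generally complex.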
But to me it is too narrow. Just this summer I wrote a short paper in Italian(*) where I define the continuous extension of the Fibonacci sequence using only linear algebra and eigentheory. The method is pretty standard and it gives (p. 8) the usual analytic closed form of Fibonacci. The interesting thing is that I did it from scratch, starting from the formal definition of recursion in recursion theory.
Thanks to that paper I was able to fully appreciate that Fibonacci is defined by a kind of recursion that we could call linear, and that linear recursion can be translated into "applying a matrix": in other words, a recursion that IS NOT iteration can be translated into exponentiation of a matrix.
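The "linear recursion = matrix exponentiation" translation can be sketched in a few lines. This is my own illustration (not the paper's code): extending Fibonacci to real arguments via an eigendecomposition of the step matrix.

```python
import numpy as np

# Fibonacci step matrix: (F(n+1), F(n)) = M @ (F(n), F(n-1))
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Diagonalize: M = V diag(lam) V^(-1); eigenvalues are phi and -1/phi
lam, V = np.linalg.eig(M)
V_inv = np.linalg.inv(V)

def fib(x):
    """Continuous Fibonacci via the fractional matrix power M^x.

    The principal branch of lam**x is complex for the negative
    eigenvalue; taking the real part gives the standard real-valued
    extension (Binet's formula with a cos(pi x) correction term).
    """
    Mx = V @ np.diag(lam.astype(complex) ** x) @ V_inv
    return Mx[1, 0].real   # M^n carries F(n) in its (1, 0) entry

print(round(fib(10)))  # 55
```

At integer arguments this reproduces the ordinary Fibonacci numbers; in between it interpolates them analytically, which is exactly the eigentheory route to the closed form.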
(*) It's short, but just look at the formulas to get a taste. One day I may translate it.
(2020 07 30 3) Successioni ricorsive ed autoteoria.pdf
Quote:They especially like the \( \Gamma \) which pops up everywhere, lol.
The gamma popping up everywhere is curious... but I don't know if it is an artifact of the method or if it is structural in some sense. What do you think? What I know is that a long time ago I was shown a graph of a linear or cubic approximation of tetration plotted at real arguments, showing the real part and the imaginary part. Before the singularity at -2 the imaginary part looked a lot like the gamma function...
Quote:I gave up a long time ago trying to make that work. But I still believe it to be a very important subject.
Btw, it is a very hard object; I'm not surprised that you were not able to make it work. I strongly believe that there is some hidden structure, some hidden regularities to be discovered, and functional identities on the ranks have to be found before we can "declare war on the sky". One reason for my belief is the following: we have yet to discover the intrinsic nature of abstract iteration, and that is just level 1 of rank theory. Rank theory is applying abstract iteration to abstract iteration itself. But this could sound empty to many ears.
There is another good reason to expect extraordinary obstacles. Let me illustrate this as a story made up of four layers/moments.
Quote:At the beginning there's nothing, no difference. We have to choose a point and make the first distinction.
0
Let rank 0 be conceptually our base function, our unit of measure of linearity (the +1). It is a single point.
1
Then rank 1 is conceptually the totality of our ways of translating things (or iterates), and we should think of it as our base number system and our base geometry. So we've built numbers out of a unit. A kind of geometric object, a "line".
2
At this point the automorphisms of our geometric object (the modes of interacting with itself) are the rank 2 functions. We can think of rank 2 as the arithmetic, or as the scaling operations over our base geometry/number system.
3
So we now have rank 1 (the geometric level) and on top of that we have built a new layer, rank 2 (the arithmetic, multiplicative level). The first is made of lines and linear translations, the second of scalings and rotations. The link between translating and rotating is... yes, exponentiation.
So morphisms from rank 1 to rank 2 give us the world of rank 3.
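The translating-to-scaling link can be made concrete in one identity: the exponential intertwines addition (rank 1) with multiplication (rank 2),

\( \exp(w+c) = e^{c}\,\exp(w) \)

so conjugating the translation \( w \mapsto w+c \) by \( \exp \) yields the scaling \( z \mapsto e^{c} z \); for purely imaginary \( c = i\theta \) it yields a rotation by the angle \( \theta \).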
This seems a kind of metaphysical theogony; an ontogenesis that goes from nothing to complexity.
I hope you can clearly see that there is something very, very deep lurking here, something that "just interpolating" (even analytically) can't solve. I see this as an obstacle to a real non-integer extension, because here we have to first generalize a chain of phenomena of which only the first three account for 60-70% of all existing mathematics.
Sincere regards,
V.C.
Edit: let's provide some beef in addition to the juicy smoke.
At the beginning we have a bunch of composable functions \( (G,\circ, id_G) \) with the usual intertwiner sets \( [f,g]_G:=\{x\in G\,|\, xf=gx\} \).
0 Fix a "unit" element \( s\in G \). Define the subset \( {\mathcal E}^0_s\subseteq G \) as \( {\mathcal E}^0_s:=\{s\} \).
1 Define the submonoid \( {\mathcal E}^1_s\subseteq G \) as \( {\mathcal E}^1_s:=[s,s]_G \). Clearly \( s^n\in\mathcal E^1_s \) for every \( n \).
2 Define the set \( {\mathcal E}^2_s\subseteq G \) as \( {\mathcal E}^2_s:=[s,{\mathcal E}^1_s]_G \). Clearly in some cases there exists a \( \mu_n\in{\mathcal E}^2_s \) s.t. \( \mu_n s=s^n\mu_n \); such a \( \mu_n \) is a multiplication-like function.
3 Define the set \( {\mathcal E}^3_s\subseteq G \) as \( {\mathcal E}^3_s:=[s,{\mathcal E}^2_s]_G \). Clearly in some cases there exists an \( \varepsilon_n\in{\mathcal E}^3_s \) s.t. \( \varepsilon_n s=\mu_n\varepsilon_n \); such an \( \varepsilon_n \) is an exponentiation-like function.
We define \( {\mathcal E}^{\sigma+1}_s:=[s,{\mathcal E}^\sigma_s]_G \) and we trivially have \( {\mathcal E}^{\sigma}_s\subseteq {\mathcal E}^{\sigma+1}_s \)
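A toy check of the first three levels: take \( G \) to be maps on the naturals under composition and the "unit" \( s \) to be the successor. This concrete choice of \( G \) and \( s \) is my illustration, not part of the construction above.

```python
# Rank 0: the unit s = successor (my concrete choice for G = maps
# on the naturals under composition)
s = lambda x: x + 1

# Rank 2 candidate mu_n (multiplication by n) and rank 3 candidate
# eps_n (exponentiation base n), as in the intertwining conditions
mu = lambda n: (lambda x: n * x)
eps = lambda n: (lambda x: n ** x)

def iterate(f, n, x):
    """Compute f^n(x) by n-fold composition."""
    for _ in range(n):
        x = f(x)
    return x

# mu_n s = s^n mu_n :  n*(x+1) == (n*x) + n
assert all(mu(n)(s(x)) == iterate(s, n, mu(n)(x))
           for n in range(5) for x in range(10))

# eps_n s = mu_n eps_n :  n**(x+1) == n * (n**x)
assert all(eps(n)(s(x)) == mu(n)(eps(n)(x))
           for n in range(1, 5) for x in range(10))
```

The two assertions are exactly the defining relations \( \mu_n s = s^n \mu_n \) and \( \varepsilon_n s = \mu_n \varepsilon_n \) checked on small inputs.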
The union \( \displaystyle \bigcup_{\sigma=0}^\infty{\mathcal E}^{\sigma}_s \) can be seen as the class of primitive recursive elements relative to \( s \).
Mother Law: \(\sigma^+\circ 0=\sigma \circ \sigma^+ \)
\({\rm Grp}_{\rm pt} ({\rm RK}J,G)\cong \mathbb N{\rm Set}_{\rm pt} (J, \Sigma^G)\)

go figure.