Bridging fractional iteration and fractional calculus
#1
Shouldn't there be a bridge between fractional iteration and fractional calculus? Consider my work on fractional iteration, which derives the fractional iteration of a function from a fixed point. What if, abstractly, the function being iterated is the differentiation operator and its fixed point is the exponential function?
Daniel
#2
But f'(x) = f(f(x)) is a problematic equation for x near a fixpoint.

Say the fixpoint is zero.

Take the truncated Taylor series in an infinitesimal h that vanishes at order n, i.e. h^n = 0.

Then

f(h) = 0 + a h + b h^2 + ...

f'(h) =/= f(f(h)),

not even close: f'(h) = a + 2b h + ... tends to the constant a, while f(f(h)) = a^2 h + ... vanishes at h = 0.

So we already have issues with integer iterations and integer derivatives.
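A quick numeric check of this mismatch, using the hypothetical sample map f(x) = x/2 + x^2 (an illustration, not a function from the thread), which fixes 0:

```python
# Compare f'(h) with f(f(h)) near the fixpoint 0
# for the sample map f(x) = x/2 + x^2 (so f(0) = 0).

def f(x):
    return x / 2 + x ** 2

def f_prime(x):
    return 1 / 2 + 2 * x

for h in [0.1, 0.01, 0.001]:
    # f'(h) -> 1/2 as h -> 0, while f(f(h)) ~ h/4 -> 0:
    print(h, f_prime(h), f(f(h)))
```

The two columns disagree even to leading order, which is the point being made.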

Similarly, the Carleman matrix A(f) does not satisfy

D^n A(f) = A(f)^n

Or take an example without fixpoints:

t(x) = x + 1.

The derivatives are eventually 0, while the iterates t^n(x) = x + n are not.
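A minimal sketch of this contrast for t(x) = x + 1 (the derivative orders are written out by hand; only the iterate loop is computed):

```python
# t(x) = x + 1 has t'(x) = 1 and t''(x) = 0 from there on,
# while the iterates t^n(x) = x + n grow without bound.

def t(x):
    return x + 1

def iterate(g, n, x):
    """Compute the n-th iterate g^n(x) by repeated composition."""
    for _ in range(n):
        x = g(x)
    return x

derivatives = [1, 0, 0, 0]      # orders 1 to 4, computed by hand
iterates = [iterate(t, n, 0) for n in range(1, 5)]

print(derivatives)               # [1, 0, 0, 0]
print(iterates)                  # [1, 2, 3, 4]
```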

regards

tommy1729
#3
Even if f'(x) = f(f(x)) holds, f''(x) = f(f(f(x))) does not, for nonlinear f.
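The chain rule shows why: differentiating f'(x) = f(f(x)) once more gives

\[
f''(x) = f'(f(x))\,f'(x) = f(f(f(x)))\,f(f(x)),
\]

so f''(x) = f(f(f(x))) would force f(f(x)) = 1 identically, which fails for nonlinear f.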

regards

tommy1729
#4
James will probably disagree with me :p
#5
I'm still thinking though... just my first ideas.
#6
I actually don't disagree with you, Tommy!

We have to apply the fractional derivative/integral to an AUXILIARY function; not the function itself. So if I write:

\[
\vartheta(w,\xi) = \sum_{n=0}^\infty f^{\circ n}(\xi) \frac{w^n}{n!}\\
\]

Then the fractional derivative:

\[
\frac{d^{s}}{dw^s} \Big|_{w=0} \vartheta(w,\xi) = f^{\circ s}(\xi)
\]

This converges to the standard regular iteration (Schröder iteration for geometric fixed points, Abel iteration for neutral fixed points). The trouble with this method is that it's very non-trivial to show that this integral transform converges.

ALSO! It's very important to note that we must use the exponential differintegral (or the Riemann-Liouville differintegral), because this is the sole differintegral that satisfies:

\[
\frac{d^s}{dw^s} e^{\lambda w} = \lambda^s e^{\lambda w}\\
\]

Which is the differintegral:

\[
\frac{d^s}{dw^s} f(w) = \frac{1}{\Gamma(-s)} \int_0^\infty f(w-y)y^{-s-1}\,dy\\
\]

Here the integral is taken along SOME PATH \(\gamma(0) = 0\) and \(\gamma(\infty) = \infty\). This definition is not path independent.
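A minimal numeric sanity check of the exponential identity under this definition, taken along the straight path \(\gamma = [0,\infty)\), at s = -1/2 (so Re(s) < 0 and the integral converges absolutely) and λ = 2; the step sizes are arbitrary choices:

```python
import math

# Numeric check of  d^s/dw^s e^{lam*w} |_{w=0} = lam^s  for the differintegral
#   (d^s/dw^s f)(w) = (1/Gamma(-s)) * Int_0^inf f(w - y) y^{-s-1} dy,
# along gamma = [0, inf), at s = -1/2 and lam = 2.
s, lam = -0.5, 2.0

# Substitute y = t^2 to remove the y^{-s-1} = y^{-1/2} endpoint singularity:
#   Int_0^inf e^{-lam*y} y^{-1/2} dy  =  2 * Int_0^inf e^{-lam*t^2} dt
dt, T = 1e-4, 10.0
integral = sum(2.0 * math.exp(-lam * (k * dt) ** 2) * dt
               for k in range(int(T / dt)))

result = integral / math.gamma(-s)   # divide by Gamma(-s) = Gamma(1/2)
print(result, lam ** s)              # both approx 0.7071
```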

Which in our language reads: if \(f(\xi) = \lambda \xi\), then:

\[
\vartheta(w,\xi) = e^{\lambda w} \xi = \sum_{n=0}^\infty f^{\circ n}(\xi) \frac{w^n}{n!}\\
\]

And:

\[
\frac{d^{s}}{dw^s} \Big|_{w=0} \vartheta(w,\xi) = \lambda^s \xi = f^{\circ s}(\xi)
\]


This is pretty fucking trivial here; but if we allow for more advanced \(f\) (not just dilations), the same result holds. I've proven it in as general a manner as I could. There are still edge cases I'm not certain of, but I am certain they "converge in some manner". For example, take:

\[
\vartheta(w,\xi) = \sum_{n=0}^\infty \sin^{\circ n +1}(\xi) \frac{w^n}{n!}\\
\]

Then:

\[
\frac{d^s}{dw^s}\Big|_{w=0} \vartheta(w,\xi) = \sin^{\circ 1+s}(\xi)
\]

But this is only true for \(\xi \in \mathbb{R}\). It's nowhere holomorphic at \(\xi = 0\), where we get an asymptotic series. And outside of this area I'm still not sure what happens, but it looks like it does converge.

If \(0<|f'(\xi_0)|<1\) then this result is "generally true", but it gets really tricky. For example, let:

\[
f(\xi) = -\frac{\xi}{2} + \xi^2\\
\]

Then we have to take:

\[
\vartheta(w,\xi) = \sum_{n=0}^\infty f^{\circ 2n}(\xi) \frac{w^n}{n!}\\
\]

Then:

\[
\frac{d^s}{dw^s}\Big|_{w=0} \vartheta(w,\xi) = \left(f^{\circ 2}\right)^{\circ s}(\xi)
\]

This is not the iteration, but it's close enough that you can recover the iteration. We have to take into account the period coming from the multiplier's sign: the function \(e^{\pi i s}\) has period \(2\).
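To make the period-2 factor explicit: the multiplier is \(\lambda = f'(0) = -1/2\), so with the principal logarithm

\[
\lambda^s = e^{s\log(-1/2)} = 2^{-s}e^{\pi i s},
\qquad
(\lambda^2)^{s/2} = \left(\tfrac14\right)^{s/2} = 2^{-s},
\]

so iterating \(f^{\circ 2}\) recovers \(\lambda^s\) only up to the factor \(e^{\pi i s}\), which is what has to be reinstated.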

\[
\frac{d^{s/2+ \pi i s}}{dw^{s/2 + \pi i s}}\Big|_{w=0} \vartheta(w,\xi) = f^{\circ s}(\xi)
\]

I may have made a typo here (I hope to God not), but this creates a fractional iteration that is exactly the Schröder iteration. It's actually pretty easy to identify too. A lot of my notes on the subject are scattered, but I do have a few papers on my arXiv dealing with specific instances. I never found a "global theory" that worked, so I abandoned a lot of the work. It ended up being a case-by-case kind of theory; and that's ugly as fuck; so I moved away from it.

There's a lot, and I fucking mean a lot, of similarities between the exponential differintegral and local iteration (or regular iteration).

Regards, James
#7
I thought I'd add here a relationship between "iterated matrices" and "iterated derivatives". This is a secondary thought to most of my work on fractional calculus; but it's masterful as a bridge between these operations. I am going to refer to this as Ramanujan's method; it's how "little circle method" mathematicians would look at it.

Let's let the matrix \(A\) be non-singular, so that \(A^{-1}\) exists. To be simple, let's assume that \(A : \mathbb{C}^n \to \mathbb{C}^n\). And let's write:

\[
e^{Ax} = \sum_{k=0}^\infty A^k \frac{x^k}{k!}
\]

We can safely assume that \(e^{Ax} : \mathbb{C}^n \to \mathbb{C}^n\), where \(x\) produces a semigroup structure. From here, we can write:

\[
\frac{d^s}{dx^s} e^{Ax} = A^s e^{Ax}\\
\]

Where this is a linear operator from \(\mathbb{C}^n\) to \(\mathbb{C}^n\). We can set \(x=0\), and I'm just rewriting Ramanujan's master theorem as he wrote it:

\[
\Gamma(s) A^{-s} = \int_0^\infty e^{-Ax}x^{s-1}\,dx\\
\]

And we've fractionally iterated the matrix \(A: \mathbb{C}^n \to \mathbb{C}^n\). I avoided a lot of "singular moments" here. But if we can map the matrix \(A\) well enough, this discussion is entirely rigorous. It relates Daniel's work, Sheldon's work, Bo's work, Tommy's work, and all that matrix stuff in quantum physics/Hilbert space language.
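A small numeric sketch of this master-theorem identity for a hypothetical diagonalizable 2×2 matrix with eigenvalues 2 and 3 (well inside the right half-plane, so the path \([0,\infty)\) converges); the quadrature parameters are arbitrary choices:

```python
import math
import numpy as np

# Numeric sketch of  Gamma(s) A^{-s} = Int_0^inf e^{-Ax} x^{s-1} dx
# for a diagonalizable A with eigenvalues 2 and 3, at s = 1/2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
s = 0.5

vals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def mat_fun(g):
    """Apply a scalar function to A through its eigendecomposition."""
    return V @ np.diag(g(vals)) @ Vinv

lhs = math.gamma(s) * mat_fun(lambda lam: lam ** -s)

# Substitute x = t^2:  Int_0^inf e^{-Ax} x^{s-1} dx = 2 Int_0^inf e^{-A t^2} t^{2s-1} dt;
# at s = 1/2 the factor t^{2s-1} is identically 1, so no endpoint singularity.
dt, T = 1e-3, 8.0
rhs = sum(2.0 * mat_fun(lambda lam, t=t: np.exp(-lam * t * t)) * dt
          for t in np.arange(dt, T, dt))

print(np.max(np.abs(lhs - rhs)))     # small quadrature error
```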

Fractional calculus is just:

\[
\frac{d^s}{dA^s} : \mathbb{C}_{\Re(s) > 0} \times \mathcal{H} \to \mathcal{H}\\
\]

Where

\[
\mathcal{H} = \{ A : \mathbb{C}^n \to \mathbb{C}^n\,|\, A^{-1} :\mathbb{C}^n \to \mathbb{C}^n\}\\
\]

Then:

\[
\frac{d^s}{dA^s} e^{Ax} = x^s e^{Ax}
\]

And we can differentiate across \(A\) or \(x\); and still have the same rules.

This is the language I think in; and it's just a translation of much of the standard "tetration forum" language; and current literature.
#8
(03/29/2023, 07:35 AM)JmsNxn Wrote: [...]

In theory yeah.

But in practice ...

Differentiating a noninteger number of times with respect to a nontrivial infinite square matrix that might not be diagonalizable?!
That gives a non-unique infinite tensor with divergent norm?!

Or did you mean differentiating with respect to a vector?

And that is just the last line of your answer.


regards

tommy1729
#9
(03/31/2023, 07:19 PM)tommy1729 Wrote: [...]

Oh yes! Tommy! I apologize; by \(A\) I meant a non-singular, diagonalizable matrix. I can write the math for you if you'd like. But every FINITE diagonalizable matrix can be differentiated as:

\[
A^s e^{Ax} = \frac{d^s}{dx^s} e^{Ax}\\
\]

But we must choose a path of the differintegral so that it converges. For example; let:

\[
A x_j = \lambda_j x_j\\
\]

Where \(x_j\) is an eigenvector, and \(\lambda_j\) is an eigenvalue. And find a path \(\gamma\) such that \(\gamma(0) = 0\) and \(\gamma(\infty) = \infty\). Then assuming that:

\[
\int_\gamma \left|e^{-\lambda_j x}\right|\,dx < \infty\,\,\text{for all}\,\, 1 \le j \le n\\
\]

Then the differintegral always converges. Finding \(\gamma\) can be tricky. But if for the sake of the argument we assume that \(-\pi/2 < -\kappa< \arg(\lambda_j) < \kappa < \pi/2\), then choosing \(\gamma = [0,\infty]\) works fine. It gets much, much trickier in general, but the idea still holds.

The differentiation by \(A\) is a formal operation, but it is entirely rigorous, though not common notation. If I take a function:

\[
f(A) = \sum_{k=0}^\infty f_k A^k\\
\]

Then:

\[
\frac{d}{dA} f(A) = \sum_{k=1}^\infty kf_k A^{k-1}\\
\]

Since \(A\) is an \(n \times n\) matrix, by the Cayley-Hamilton theorem this reduces to a finite polynomial, and we're just differentiating a polynomial in \(A\).
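The formal \(d/dA\) is then just differentiation of the coefficient list; a toy sketch, with a hypothetical polynomial \(f(A) = I + 2A + 3A^2\):

```python
import numpy as np

# Formal derivative in the matrix variable:
#   f(A) = sum_k f_k A^k   =>   (d/dA) f(A) = sum_k k f_k A^{k-1}.

def poly_at_matrix(coeffs, A):
    """Evaluate sum_k coeffs[k] * A^k by Horner's scheme."""
    n = A.shape[0]
    result = np.zeros((n, n))
    for c in reversed(coeffs):
        result = result @ A + c * np.eye(n)
    return result

def formal_derivative(coeffs):
    """Map the coefficient list f_k to k*f_k, shifted down one degree."""
    return [k * c for k, c in enumerate(coeffs)][1:]

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
f = [1.0, 2.0, 3.0]                       # f(A) = I + 2A + 3A^2

print(poly_at_matrix(formal_derivative(f), A))   # equals 2I + 6A
```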

The idea is that \(\frac{d^s}{dA^s}\) exists in the dual space, whereas \(\frac{d^s}{dx^s}\) exists in the normal space. I could never be bothered to work out too many of the details, but much of it holds weight in numerical calculations.

You'll probably see this more often in functional analysis, but the operation:

\[
x^{-s} e^{Ax} = \frac{1}{\Gamma(s)} \int_0^\infty e^{-Ax}A^{s-1}\,dA\\
\]

Is a perfectly valid operation.

Also, I apologize; I was mostly just spitballing, so I may have screwed up some details. But the idea roughly looks like this.