Jabotinsky IL and Nixon's program: a first categorical foundation
#1
This follows from the discussion held at: Some "Theorem" on the generalized superfunction, (May 07, 2021), Tetration Forum.



Quote:Such that \( F(z,F(z',\xi)) = F(z+z',\xi) \). A little unused theorem on this forum, is classifying this superfunction as a flow.

\(
\frac{d}{dz} F(z,\xi) = \lim_{h\to 0} \frac{F(z+h,\xi) - F(z,\xi)}{h} = \lim_{h\to 0} \frac{F(h,\xi) - \xi}{h} \bullet F(z,\xi) \bullet \xi\\
\)


I like to call this object,

\(
\log (F ; \xi) = \lim_{h\to 0} \frac{F(h,\xi) - \xi}{h}
\)

Which can be referred to as the logarithm of the super function; or traditionally it's known as a generator of a flow map. Now, for every super function there is one logarithm, and for every logarithm there is one superfunction. There is no obvious connection between the logarithm and the initial function \( F(1,\xi) = f(\xi) \); but you can derive a formula for it.


Ok... I was reading this point of James' post in the thread linked above and something just clicked in my brain. The downside of working on the elementary building blocks of iteration theory is that I'm lagging some light-years behind Sheldon's cutting-edge computations, Trappmann and Kouznetsov's method and all of James' holomorphic witchcraft. But I can see some light. I'm now strongly convinced that the composition integral program is the right way.
The following is a preliminary meditation on the partial derivatives of flows and how to rephrase the key ingredients in categorical terms. I'm excited by this because this is the first time I'm able to embed some differential features into my algebraic framework.

This is not (only) an empty philosophical post about notations and definitions. I tried to generalize the factorization of the partial derivatives of a flow in terms of the Jabotinsky iterative logarithm and the flow itself, derived from this the property that makes the partial derivative a natural transformation, and added some exotic corollaries.


SOME NOTATIONAL LUNA PARK
Remember the definition of a \( T \)-time dynamical system on X (or \( T \)-system): it is just that of a left monoid action on the set X. Here \( (T,0_T, +_T) \) is the monoid of time (better if commutative, even better if an abelian group) written in additive notation. When the monoid is the group of real numbers we call it a flow on X. When X is a vector space and the action is linear we have a linear representation of \( T \).

def 1a. A T-action is a map \( f:T\times X\to X \) s.t. \( f(0_T,x)=x \) and \( f(t+_T t',x)=f(t,f(t',x)) \).

Observation. The definition seems abstract and arbitrary... but it is not, really. Remember the existence of the curry isomorphism, the isomorphism of "currying a variable". It shows that the set of functions \( A\times B\to X \) is "essentially the same" as the set of functions \( A\to X^B \). The isomorphism is given by the function "curry the B variable"

\( {\rm curry}_B :X^{(A\times B)} \overset{\simeq}{\longrightarrow} (X^B)^A \) where \( {\rm curry}_B :g \mapsto \overline{g} \) and  \( (\overline{g}(a))\,(b):=g(a,b) \)

[Image: HELL-0.jpg]
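
Just to make the isomorphism concrete, here is a tiny Python sketch of mine (the names curry and uncurry are only illustrative, not from the thread): it witnesses \( ({\rm curry}_B\,g)(a)(b)=g(a,b) \) and the inverse direction.

Code:
def curry(g):
    # send g : A x B -> X to g-bar : A -> (B -> X), with (g-bar(a))(b) = g(a, b)
    return lambda a: (lambda b: g(a, b))

def uncurry(gbar):
    # the inverse direction, witnessing that curry_B is a bijection
    return lambda a, b: gbar(a)(b)

g = lambda a, b: a + 2 * b            # any map g : A x B -> X
assert curry(g)(3)(5) == g(3, 5)      # (g-bar(a))(b) = g(a, b)
assert uncurry(curry(g))(3, 5) == g(3, 5)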

def 2. It is now evident that \( f(t,x) \) is a T-action over X if and only if \( \overline{f}:T\to X^X \) is a monoid homomorphism. Observe that \( X^X \) is also a monoid under composition. If f is a T-action we have \( \overline{f}(0_T)={\rm id}_X \) and

\(
\forall x\in X:\quad (\overline{f}(s+_T t))\,(x)=f(s+_T t,x)=f(s,f(t,x))=\overline{f}(s)\,(\overline{f}(t)(x))=(\overline{f}(s)\circ \overline{f}(t))\,(x),\\
\text{hence}\quad \overline{f}(s+_T t)=\overline{f}(s)\circ \overline{f}(t).\\
\)

Slogan. Monoid actions are monoid homomorphisms.
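
A quick numerical sanity check of the slogan, in Python (my own sketch, not part of the definitions above): take the exponential flow \( f(t,x)=e^t x \) on \( X=\mathbb R \), curry it, and test \( \overline{f}(0)={\rm id}_X \) and \( \overline{f}(s+t)=\overline{f}(s)\circ\overline{f}(t) \) at random sample points.

Code:
import math, random

def f(t, x):                  # the R-action f(t, x) = e^t * x, a flow on X = R
    return math.exp(t) * x

def f_bar(t):                 # curried form f-bar : T -> X^X
    return lambda x: f(t, x)

random.seed(0)
for _ in range(100):
    s, t, x = random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(-5, 5)
    assert abs(f_bar(0.0)(x) - x) < 1e-12                       # f-bar(0) = id_X
    assert abs(f_bar(s + t)(x) - f_bar(s)(f_bar(t)(x))) < 1e-9  # monoid homomorphism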

Notation. Let \( f:A_0\times A_1\times ...\times A_n\to X \) be a function, well behaved enough over sets carrying the differential structure we need. I write \( \partial_{A_i}=\partial_i \) for the partial derivative in the i-th variable. We get new functions \( (\partial_i f):A_0\times A_1\times ...\times A_n\to X \)

\( (\partial_i f)(a_0,...,a_n)=\lim_{h\to 0}\frac{f(a_0,...,a_i+h,...,a_n)-f(a_0,...,a_n)}{h} \)

JOURNEY to CATEGORICAL HELL
Following JmsNxn, I guess it is clear that if our monoid of time has some topological/differential structure (like \( \mathbb R \) or \( \mathbb C \) actions) we can consider the partial derivative of the monoid-action/dynamical system. Let \( f:T\times X\to X \) be a T-action (differentially well behaved) and consider the partial derivative with respect to the T variable, \( \partial_T \). As James writes

\( \lim_{h\to 0}\frac{f(t+_Th,x)-f(t,x)}{h}=\lim_{h\to 0}\frac{f(h,f(t,x))-f(t,x)}{h} \)

At this point just notice that \( (\partial_T f)(0_T,x)=\lim_{h\to 0}\frac{f(0_T+_T h,x)-f(0_T,x)}{h}=\lim_{h\to 0}\frac{f(h,x)-x}{h} \) (this is called Jabotinsky's iterative logarithm, \( {\rm ilog}(f)(x)=(\partial_T f)(0_T,x) \)), thus we have a

Quote:"Generator lemma" 1.\( (\partial_T f)(t,x)={\rm ilog}(f)(f(t,x)) \)

Observe that fixing the variable \( x=\xi \) gives us James' differential equation \( y'(t)=h(y(t));\, y(0_T)=\xi \), with \( h={\rm ilog}(f) \).
If instead we unleash the curry isomorphism we can render the generator lemma as a factorization of single-variable functions.
\( \overline{\partial_T f}(t)=\overline{\partial_T f}(0_T)\circ \overline{ f}(t) \)
[Image: HELL-1.jpg]
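
Here is a small numerical check of the generator lemma (my sketch, with a flow of my choosing): for the flow \( f(t,x)=\frac{x}{1-tx} \) we have \( {\rm ilog}(f)(x)=x^2 \), and the factorization \( (\partial_T f)(t,x)={\rm ilog}(f)(f(t,x)) \) can be tested with finite differences.

Code:
def f(t, x):                  # flow of the vector field x^2:  f(t, x) = x / (1 - t*x)
    return x / (1.0 - t * x)

def d_T(t, x, h=1e-6):        # central finite difference in the time variable
    return (f(t + h, x) - f(t - h, x)) / (2 * h)

def ilog(x):                  # Jabotinsky iterative logarithm, (d_T f)(0, x) = x^2
    return d_T(0.0, x)

for t in (-0.5, 0.0, 0.3):
    for x in (0.2, -0.4, 1.0):
        lhs = d_T(t, x)       # (d_T f)(t, x)
        rhs = ilog(f(t, x))   # ilog(f)(f(t, x)), the generator lemma
        assert abs(lhs - rhs) < 1e-6, (t, x, lhs, rhs)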
Maybe this lemma is not important, but it has interesting consequences if one wants to translate all of this into category theory. With much boldness and unfounded certainty I declare the following harmless corollary a theorem.

Quote:Theorem 1.  For all \( s,t\in T \) we have
\( \overline{\partial_T f}(s+_T t)=\overline{\partial_T f}(s)\circ \overline{ f}(t) \)
proof: \( \overline{\partial_T f}(s+_T t)=\overline{\partial_T f}(0_T)\circ \overline{ f}(s+_T t)=\overline{\partial_T f}(0_T)\circ (\overline{ f}(s)\circ \overline{ f}(t))=(\overline{\partial_T f}(0_T)\circ \overline{ f}(s))\circ \overline{ f}(t)=\overline{\partial_T f}(s)\circ \overline{ f}(t) \)
[Image: HELL-2.jpg]
Multiplication corollary. For every natural number n, \( \overline{\partial_T f}((n+1)t)=\overline{\partial_T f}(t)\circ \overline{ f}(t)^{\circ n}=\overline{\partial_T f}(0_T)\circ \overline{ f}(t)^{\circ (n+1)} \)

Weird metric-like corollary. If T is a group and \( \overline{\partial_T f}(-s):X\to X \) is invertible, then
\( \overline{ f}(s+_T t)=\overline{\partial_T f}(-s)^{-1}\circ \overline{\partial_T f}( t) \)  
[Image: HELL-3.jpg]
I know, all this seems harmless and empty, but it isn't. This equation shows that \( \overline{\partial_T f} \) defines a natural transformation from the T-action f, seen as a functor from the time-monoid to the state-space (category), to the identity T-action, aka the "still" dynamics on X.

To make this clear I invite the reader to endure a little abstract madness.

Def 3. Given the monoid T we can build an interesting category \( T / \bullet \) (the category of the right translations of T): begin by adding a bunch of points: one point for every element \( t\in T \).
Now the arrows: between the points \( r, s\in T \) we draw an arrow \( r\to s \) if there is an element \( t\in T \) such that \( r=s+_T t \). Let's label that arrow with the name of the element t. We can depict the arrows in two equivalent ways
[Image: HELL-4.jpg]
Composition is easily defined
[Image: HELL-4b.jpg]

What does \( T / \bullet \) have to do with T-dynamical systems? Remember that a T-action f over X is equivalently, by currying, a monoid morphism \( \overline{f} \) that assigns to every time t an endomorphism of X (on this forum we'd like to see this endomap as the t-th iterate of the dynamics). We can see that \( \overline{f} \), as a T-action, automatically defines a corresponding functor from \( T / \bullet \) to the category where X lives. Let's denote it, yep, abusing the notation, as \( \overline{f} \).

Def 4.  \( \overline{f}:T / \bullet\to {\rm Set} \) can be seen as a functor sending all the objects of \( T / \bullet \) to the same object X and every arrow  \( r\overset{t}{\longrightarrow} s \) to the function  \( \overline{f}(t):X\longrightarrow X \). Functoriality is guaranteed by the definition of dynamical system (T-action).

Let's visualize that!!: on the left (in red) live the objects and arrows of \( T / \bullet \) (the domain category) and on the right (in black) I display (some of) the objects (just one) living in the target category.
[Image: HELL-5.jpg]

Now consider the trivial T-action over X. It is the dynamics of the identity map of X: call it  \( \overline{{id_X}} \) where for every t we get the identity. As we have seen above, it defines a functor \( \overline{{id_X}}:T / \bullet\to {\rm Set} \).

Quote:The real theorem 1. Given a T-action over X as a functor \( \overline{f}:T / \bullet\to {\rm Set} \), the partial derivative in the variable T of the T-action, \( \overline{\partial_Tf} \), is a natural transformation \( \overline{f}\Rightarrow\overline{id_X} \). Its component at the identity element of the monoid is the iterative logarithm of the dynamical system.
In symbols: for \( r,s,t\in T \) s.t. \( r=s+_T t \), i.e. for all the arrows  \( r\overset{t}{\longrightarrow} s \), we have
\( \overline{\partial_T f}_r=\overline{\partial_T f}_s\circ \overline{ f}(t) \) and \( \overline{\partial_Tf}_{0_T}={\rm ilog}(f) \)
[Image: HELL-6.png]


The proof IS the proof of theorem 1 because that is exactly the condition that natural transformations have to satisfy by definition. We can visualize it as we did for the functor.
[Image: HELL-7.jpg]

I know, now this seems scary and disturbing, but by natural transformation I mean the same kind of object I was defining in the post on generalized superfunction tricks, and that's why iterated composition fits so well into this. I will expand on this in another post, but let's just observe that the partial derivative satisfies the functional equation
\( \phi(\rho_T(t,s),x)=\phi(s,f(t,x)) \) (where \( \rho_T(t,s)=s+_T t \) is the right translation by t)


TROPHIES FROM HELL?
Let's ground this with concrete objects. The problem with considering general T-monoid actions (T-time dynamics) is that our time object T does not know what our initial function is, i.e. monoid actions don't know what process is being iterated. Monoids do not know what the "quantum of time" is, because they have zeroes but not units, unlike unital rings. We need rings to have a unit and to recover the initial function as \( f(1_T,x)=f(x) \).
So let's consider the field of real numbers as our time and just restrict to real analysis. Theorem 1 gives us a corollary.

Corollaries. i) For every natural number n, \( \overline{\partial_{\mathbb R} f}(n+1)=\overline{\partial_{\mathbb R} f}(1)\circ f^{\circ n} \)
ii) For every real number s, \( \overline{\partial_{\mathbb R} f}(s+1)=\overline{\partial_{\mathbb R} f}(s)\circ f \)
The proof is trivial.

The last one can be rewritten as \( \partial_{\mathbb R} f(s+1,x)=\partial_{\mathbb R} f(s,f(x)) \)
In this simplified setting it is easier to appreciate the nature of this equation because we can clearly see that it's a natural transformation between two diagrams that have a familiar shape (that of the real numbers' linear order \( \ge \)). Here the picture shows a bit of the infinite diagram of sets and functions
[Image: HELL-8.jpg]

We obtain \( \overline{\partial_{\mathbb R} f}\in [\overline{f},id_X]^{op}_{\Delta} \) where this set is defined as \( [h,g]^{op}_{\Delta}:=\{\phi_n\,|\, g\circ \phi_{n+1}=\phi_n \circ h\} \)
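
A concrete sanity check (my example, not claimed above): for the exponential flow \( f(t,x)=e^t x \) we have \( f(x)=f(1,x)=ex \), \( \partial_{\mathbb R}f(s,x)=e^s x \) and \( {\rm ilog}(f)(x)=x \); indeed \( \partial_{\mathbb R} f(s+1,x)=e^{s+1}x=e^s(ex)=\partial_{\mathbb R} f(s,f(x)) \), so the family \( \phi_s=\overline{\partial_{\mathbb R}f}(s) \) satisfies the defining condition of the set above (with \( g=id_X \) and \( h=f \)).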

These are the same sets defined  for the Generalized superfunction trick in this thread


When this intuition solidifies we could start to consider topological monoids... continuous paths as diagrams and, after I find the right categorical formulation of path partitions and limits, rebuild algebraically James' composition integral.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#2
Very interesting stuff, Mphlee!

It'll take me a while to reread and absorb what you've written; but is it possible to do something similar in the non-abelian case?

By this I mean, when we write,

\(
y' = \text{ilog}(y)\\
y(0) = z\\
\)

We are inducing an abelian group. But if we change into,

\(
y' = f(s,y)\\
\)

Then writing this,

\(
\int_b^c f(s,z)\,ds\bullet \int_a^b f(s,z)\,ds \bullet z = \int_a^c f(s,z)\,ds\bullet z\\
\)

Forms a non-abelian group under composition. Would we be able to talk about \( (T,0_T,+_T) \) (or something like that), but with a non-abelian \( +_T \)? I think you should get something similar because,

\(
\lim_{h\to 0} \frac{\int_a^{x+h} f(s,z)\,ds \bullet z - \int_a^x f(s,z)\,ds \bullet z}{h} = \lim_{h\to 0} \frac{\int_x^{x+h} f(s,z)\,ds \bullet z - z}{h} \bullet \int_a^x f(s,z)\,ds \bullet z\\
\)

And the term,

\(
\lim_{h\to 0} \frac{\int_x^{x+h} f(s,z)\,ds \bullet z - z}{h} = f(x,z)\\
\)

Which is just the fundamental property of the compositional integral.
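
If it helps to see this numerically, here is a throwaway Python sketch of mine (assuming scipy is available; the helper comp_int is my own name, not James' notation): since \( \int_a^b f(s,z)\,ds\bullet z \) is the time-\( b \) value of the solution of \( y'=f(s,y),\ y(a)=z \), the non-abelian composition rule above can be checked with any ODE solver.

Code:
import numpy as np
from scipy.integrate import solve_ivp

def comp_int(f, a, b, z):
    # the compositional integral  int_a^b f(s, z) ds . z :
    # solve y' = f(s, y), y(a) = z, and return y(b)
    sol = solve_ivp(f, (a, b), [z], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

f = lambda s, y: np.sin(s) * np.sin(y) + np.cos(s)   # some non-autonomous f(s, z)
a, b, c, z = 0.0, 0.7, 1.3, 0.4
inner = comp_int(f, a, b, z)          # int_a^b f ds . z
lhs   = comp_int(f, b, c, inner)      # int_b^c f ds . (int_a^b f ds . z)
rhs   = comp_int(f, a, c, z)          # int_a^c f ds . z
assert abs(lhs - rhs) < 1e-6, (lhs, rhs)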



EDIT:

This is also very similar to what I was trying to say with closed contours, Mphlee. When you take an integral about a singularity; it looks abelian (up to conjugation). And you can decompose it in such a manner that everything looks abelian... up to conjugation. Which is how I described my property:

\(
\int_\tau f(s,z)\,ds\bullet \int_\gamma f(s,z)\,ds\bullet z = \int_\varphi f(s,z)\,ds\bullet \int_\tau f(s,z)\,ds\bullet z\\
\)

Where \( \gamma,\varphi \) are Jordan curves (just think of them as closed contours), and \( \tau \) is just an arc. So we create an equivalence relation,

\(
\int_\gamma f(s,z) \, ds\bullet z \simeq \int_\varphi f(s,z)\,ds\bullet z\\
\)

If they are conjugate. It turns out, each conjugate class depends ONLY ON WHAT SINGULARITIES ARE WITHIN THE CONTOUR. And then, when we mod out by this equivalence relation, we get an abelian group and we're back to a discussion of \( y' = \text{ilog}(y) \) (at least in spirit, I tried to draw this out as best I could). And even better; we can decompose,

\(
\int_\gamma f(s,z)\,ds\bullet z = \Omega_j \text{Rsd}(f,\zeta_j;z)\bullet z\\
\)

For the list of singularities \( \zeta_j \) within the contour \( \gamma \). Which means, for SOME sequence of closed contours \( \gamma_j \) which only encircle \( \zeta_j \);

\(
\int_\gamma f(s,z) \,ds\bullet z = \Omega_j \int_{\gamma_j} f(s,z)\,ds\bullet z\\
\)

Where, \( \int_{\gamma_j} f(s,z)\,ds\bullet z \in \text{Rsd}(f,\zeta_j) \). But! we can also rearrange this, because once you mod out, it's effectively abelian. If I reindex the singularities \( \zeta_i = \zeta_{\sigma(i)} \); there exists another sequence of closed contours \( \varphi_i \) which only encircle \( \zeta_i \) in which:

\(
\int_\gamma f(s,z) \,ds\bullet z = \Omega_i \int_{\varphi_i} f(s,z)\,ds\bullet z\\
\)

To such an extent, I can apply \( \int_{\varphi_i^{-1}} f(s,z)\,ds\bullet z \) and I get:

\(
\int_\gamma f(s,z) \,ds\bullet \int_{\varphi_i^{-1}} f(s,z)\,ds\bullet z = \Omega_{j\neq i} \text{Rsd}(f,\zeta_j;z)\bullet z\\
\)

This was essentially the thesis of that paper; because once you do that you can do a lot of stuff you do in traditional analysis, but with compositions and compositional integrals. Basically; everything you get from Cauchy's residue theorem; you get a slightly more difficult to handle equivalent.

If I'm understanding you correctly; from what you've written here; this is directly the second chapter of The Compositional Integral: The Narrow and The Complex Looking Glass and the fifth section An additive to composition homomorphism. Except, you have entirely rephrased what was just a couple pages into a very different flavour using your fancy arrows, lol. The rest of the book essentially just tries to make this homomorphism exist in more extravagant scenarios by "modding out." But this looks really pretty, how you're writing it. And it definitely breaks it down into its atoms much more. If anything it's a sigh of relief that someone else is beginning to see what I see, lol!

The difficulty I've always had though; is that it's not very obvious how, let's say, tetration is related to its i-logarithm. Or how they behave. I have existence; but I don't really know any algorithm to relate the two in a nice readable manner. Or, any way, say, where we can hunt for a logarithm and then just plug and play to get tetration.

I know there is SOME function such that,

\(
\exp^{\circ s}(\xi) = \int_0^s h(\xi)\,dw\bullet \xi\\
\)

But, there's no obvious way to find that (and trust me, I've tried). The best I could do is attempt to relate these things to contour integrals. But I couldn't figure out any kind of decision process, where we construct a specific \( h \) associated to what you're calling \( \overline{f} \). I was only able to sort of classify what these constructions WOULD look like; and their behaviour in complicated scenarios. But nothing like a decision process.

Anyway; you've given me much to think about Mphlee. I'll try and digest your graphs better; but I think I'm getting what you're saying much more clearly; especially as I reread.
Reply
#3
omg... I don't feel ready for the integrals yet but... something is moving in the back of my head. I need to reread all of your posts at least 10 times more.
The bigger obstacle is that I'll need to study what contours, Jordan Curves and singularities really are... I need to study many key parts of your papers. But I'll get there.

Now I'd like to answer all the points. Tonight or tomorrow I'll do it.
Now I have time only to say this: the setting is already as general as can be, in this sense: I never assumed anywhere that the monoid T is a group, let alone that it is commutative.
All of this holds for abstract monoids T as long as we have a theory that allows us to define the partial derivative operator (differentiating functions in the monoid variable): I used additive notation only as a philosophical aid. I'll get back to this. In the proofs the only facts about T I use are the existence of the identity (that's how we can get the infinitesimal generator factorization) and associativity (that's how we derive theorem 1).(*)

The possibility that time is not commutative deserves a longer comment. In fact I have something on this. There is a (trivial) canonical way to extend every N-action (integer iteration) to an E-action, where E is a new monoid. E is the monoid of what I like to call "the functions with rank=1 relative to the initial function".

The interesting part, btw, is that if I finally manage to understand how the integrals work I could come up with a monoid of paths (arcs? A category of paths? The fundamental groupoid of the time-object?) over a topological space, and maybe we could rebuild all of this theory of the composition integral in terms of homotopy theory... big maybe...



(*) Of course, whether T needs to be assumed commutative for the classical partial derivative to exist is another story.


Edit.
I went on wiki looking for some basic path-contour-residue theorem business. I won't annoy you with "first-course-on-complex-analysis" questions. But it is interesting how it seems that we can define a category of paths, where objects are points of the plane and arrows are oriented paths (curves): it looks promising because monotonicity of the parametrization boils down to functoriality (imho) and considering equivalence classes modulo parametrization boils down to modding out by homotopy (I guess). It is unbelievable to me how the value of the integral on a closed curve does not depend on the parametrization chosen, how it only depends on the set of singularities that it encloses, and that it doesn't change when you permute them... I'm not gonna lie... to me this is black magic.

Edit 2. That's amazing... it is beginning to make some sense. When the domain is simply connected the functions that loop around a point contract to the identity (just like paths homotopic to a point), but if the domain is punctured (by the singularity) they can't contract, just as the circle (onto which a punctured disk retracts) is not topologically equivalent to the disk (which is really a point, homotopically) because it has a hole! It's so marvelous... the residue classes (in your theory) are classes of conjugate functions. There are many doubts but the biggest now is: how do you define \( {\rm Rsd}(f,\zeta;z) \)?

Edit 3. I went back to your old post in the bullet notation thread. That method really comes from Euler's method, as you say... It is curious that there is no known link between the infinitesimal generator f and the solutions to y'=f(y).
In your annulus example, your path \( \rho \) realizing the conjugation of the two paths \( \gamma \) is the interval \( [-\delta,\delta] \). What happens if you measure the superfunctions of two elements of the residue class by some invariants attached to the paths (like the length)? Maybe finding the shortest path/curve?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#4
(05/11/2021, 12:37 PM)MphLee Wrote: omg... I don't feel ready for the integrals yet but... something is moving in the back of my head. I need to reread all of your posts at least 10 times more.
The bigger obstacle is that I'll need to study what contours, Jordan Curves and singularities really are... I need to study many key parts of your papers. But I'll get there.

........

Edit 2. That's amazing... it is beginning to make some sense.  When the domain is simply connected the functions the loops around a point  contracts themselves to identity (just like paths homotopic to the point) but if the domain is punctured (by the singularity) they can't contract just as the circle (a punctured disk) is not topological equivalent to the disk (that is really a point) because it has an hole! It's so marvelous... the residue classes (in your theory) are class of conjugate functions. There are many doubts but the bigger now is: how you define \( {\rm Rsd}(f,\zeta;z) \)?

Edit 3. I went back to your old post in the bullet notation thread. That method really comes from Euler method as you say... It is curious that there is no known link between the infinitesimal generator f and the solutions to y'=f(y). 
In your anulus example you path \rho realizing that conjugate the two paths gamma is the interval [-\delta,\delta]. What happens if you measure the superfunctions of two elements of the residue class by some invariants attached to the paths (like the length)? Maybe finding the shortest path/curve?

The class \( \text{Rsd}(f,\zeta;z) \) is very readily just defined as: (this is taken word for word from the paper)




The residual class \( \text{Rsd}(f,\zeta;z) \) of a meromorphic function \( f: \mathcal{S} \times \mathbb{C} \to \widehat{\mathbb{C}} \) at a pole \( \zeta \) is defined to be:

\(
\text{Rsd}(f,\zeta;z) \ni \int_\gamma f(s,z)\,ds\bullet z\\
\)

Where:

    \( \gamma:[a,b] \to \mathcal{S} \) is a Jordan curve oriented positively.
    \( \zeta \) is the only pole enclosed within \( \gamma \).




When describing the conjugate property; it's difficult to explain it over the forum; as I spend about 20 pages building up a theory, lol. But it works differently than Cauchy because everything is non-abelian.  The first thing is that the residual IS NOT independent of parameterization about a singularity. But it ALMOST is. The function:

\(
F(z) = \int_\gamma f(s,z)\,ds\bullet z\\
\)

Depends on the initial and final point of \( \gamma \). So with the value \( \gamma(a) = \alpha = \gamma(b) \), \( F \) depends on \( \alpha \), and on the singularities within. We can of course reparameterize with a \( \gamma(\tau) \); this is no problem; but the integral will change depending on where our starting and endpoints are.

So say I have two contours \( \gamma,\varphi \) where \( \varphi(a) = \beta = \varphi(b) \) is the start/end point of \( \varphi \) (and they both encircle the same poles). Then take an arbitrary arc \( \sigma \) (which doesn't encircle any poles) such that \( \sigma(0) = \alpha \) and \( \sigma(1) = \beta \). Then the compositional integral along \( \sigma \) is our conjugating function. And luckily; if you take another arc \( \sigma' \) which satisfies the same initial point/end point (and doesn't encircle any poles):

\(
\int_{\sigma} = \int_{\sigma'}\\
\)

So the conjugate function IS UNIQUE; if it's written as a compositional integral about \( f(s,z) \).




Another way to think of the Residual class about a singularity is as a family of functions,

\(
\text{Rsd}(f,\zeta) = \{F(\alpha,z)\,|\,F(\alpha,z) : (\mathcal{S}\setminus\{\zeta\}) \times \mathbb{C} \to \mathbb{C}\}\\
\)

And each compositional integral about an arc \( \sigma \); if it doesn't encircle any poles; only depends on the initial point \( \alpha \) and its endpoint \( \beta \). We can surely think of this as an arrow \( \alpha \to \beta \). Then,

\(
(\alpha \to \beta) \bullet F(\beta,z) \bullet (\beta \to \alpha) \bullet z = F(\alpha,z)\\
\)

This is another way of thinking about it. Or if you prefer,

\(
\int_\beta^\alpha f(s,z)\,ds\bullet F(\beta,z) \bullet \int_{\alpha}^\beta f(s,z)\,ds\bullet z = F(\alpha,z)\\
\)

And we can choose whatever path we want from \( \beta \) to \( \alpha \) (as long as it doesn't encircle any poles).



Just as a brief FYI; there are a lot of things from Cauchy's analysis that we inherit with the compositional integral; but we don't inherit everything. And a lot of the time, when we inherit something, we lose a couple of properties because everything is non-abelian. Just be sure to remember that as you read about contour integration; there are some things we begin to lose.

But the power player we get is The Compositional Integral Theorem; which is equivalent to Cauchy's integral theorem. Which says: if \( \phi(s,z) : \mathcal{S} \times \mathbb{C} \to \mathbb{C} \) is holomorphic and \( \mathcal{S} \) is simply connected, then for all Jordan curves \( \gamma \),

\(
\int_\gamma \phi(s,z)\,ds\bullet z = z\\
\)

Everything in my book is essentially built around this theorem.
Reply
#5
OK, I'll take time to digest this. But to be clear, I still don't understand how you can use the omega notation on that set you define.
The first is a set of complex numbers, the c-integrals evaluated at z. And the second set you define, the one depending only on the function and the chosen singularity, is a set of functions...
Quote:\(
\int_\gamma f(s,z)\,ds\bullet z = \Omega_j \text{Rsd}(f,\zeta_j;z)\bullet z\\
\)

how can you put that object inside the omega notation? As z varies we get a bunch of sets \(
z\mapsto \text{Rsd}(f,\zeta_j;z)\subseteq \mathbb C\\
\) so how do you compose them in the z?
That is: let \( {\rm loops}(\zeta_j):=\{\gamma\,|\,\gamma \,\text{is Jordan, p. oriented, closed and encloses only }\zeta_j \} \)
Do we have \( \text{Rsd}(f,\zeta_j):=\{\int_\gamma f(s,z)\,ds\bullet -:{\mathbb C}\to{\mathbb C}\,|\,\gamma \in {\rm loops}(\zeta_j) \} \) and

\( \text{Rsd}(f,\zeta_j;z):=\{\int_\gamma f(s,z)\,ds\bullet z\in {\mathbb C}\,|\,\gamma \in {\rm loops}(\zeta_j) \} \)?


Maybe it's just an abuse of notation? When I see that Omega notation I picture it as

[Image: Annotazione-2021-05-11-224552.jpg]

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#6
(05/11/2021, 09:48 PM)MphLee Wrote: OK, I'll take time to digest this. But to be clear, I don't still understand then how can you use the omega notation on that set you define.
....


Maybe it's just an abuse of notation? When I see that Omega notation I picture it as

[Image: Annotazione-2021-05-11-224552.jpg]

YES! That's absolutely it Mphlee!

Everything you are saying is correct. That's exactly how you should visualize it.

It is not an abuse of notation, for the simple reason we are MODDING OUT by the equivalence relation,

\(
\int_\gamma \simeq \int_\varphi\\
\)

The fact that this modding out works so well is because of the following.

There are some representatives \( \gamma_1,\gamma_2 \) of \( \text{Rsd}(f,\zeta_1),\text{Rsd}(f,\zeta_2) \) respectively, such that,

\(
\int_\gamma f(s,z) = \int_{\gamma_1} \bullet \int_{\gamma_2}\\
\)

BUT, there are also some representatives \( \gamma_1^*,\gamma_2^* \) of \( \text{Rsd}(f,\zeta_2),\text{Rsd}(f,\zeta_1) \) respectively, such that,

\(
\int_\gamma f(s,z) = \int_{\gamma_1^*} \bullet \int_{\gamma_2^*}\\
\)


So when I write,

\(
\int_\gamma f(s,z)\,ds\bullet z = \Omega_{j} \text{Rsd}(f,\zeta_j;z)\bullet z\\
\)

I mean, there is SOME representative of each Rsd, where this is true. Think about modular arithmetic.

\(
x \equiv y\,\,(\text{mod} \,m)\\
\)

means there is SOME k such that,

\(
x-y = km\\
\)

It doesn't mean for all k.

So when I write \( \text{Rsd} \); I'm assuming we're in the modded out space. And when we pull back, there is SOME representative from this set where it is true. You can see this even in your fancy picture. Just reorganize the curves to put different singularities first.
Reply
#7
Oh... that makes so much sense 0.0
Good, thank you for the explanations!

Ok, when I have more time I'll go back to the non-abelian time monoid and I'll link it up to the non-commutativity of path composition.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#8
God, I love your explanations!

I just reread everything you've written; and it's so god damned on the nose. So much of what I take for granted is developed deeper. Thanks again for this observation, Mphlee. If anything, it just rewrites everything I was saying; but in this beautiful categorical language I never could've come up with. People have always asked me about the relationship of the compositional integral to Feynman Diagrams; and this honestly, reminds me so much of Feynman Diagrams.  And I hate Feynman Diagrams (mostly because it doesn't click). How you're explaining everything clicks so much more. Again, I don't talk about this here, but you can enter in solutions to Schrodinger's equation in these compositional integrals... The way you're writing these diagrams; it just screams at me a formalism of feynman diagrams without the blowups to infinity. But maybe I'm getting ahead of myself, lol 0.0

Thank you times a thousand Mphlee. I'm understanding this even better by reading your interpretation. I think this is a very important translation that needs to be done; the compositional integral into functor/diagram language. I can't do it. But I think you can.

Regards, James

EDIT:

Just wait until you get to the last chapter of my book when I talk about fourier transforms over equivalence classes; lmfao.


Reply
#9
(05/11/2021, 10:05 PM)JmsNxn Wrote:
(05/11/2021, 09:48 PM)MphLee Wrote: OK, I'll take time to digest this. But to be clear, I don't still understand then how can you use the omega notation on that set you define.
....


Maybe it's just an abuse of notation? When I see that Omega notation I picture it as

[Image: Annotazione-2021-05-11-224552.jpg]

YES! That's absolutely it Mphlee!

Everything you are saying is correct. That's exactly how you should visualize it.

It is not an abuse of notation, for the simple reason we are MODDING OUT by the equivalence relation,

\(
\int_\gamma \simeq \int_\varphi\\
\)

The fact that this modding out works so well, is because of the following.

There is some representatives \( \gamma_1,\gamma_2 \) of \( \text{Rsd}(f,\zeta_1),\text{Rsd}(f,\zeta_2) \) respectively, such that,

\(
\int_\gamma f(s,z) = \int_{\gamma_1} \bullet \int_{\gamma_2}\\
\)

BUT, there are also some representatives \( \gamma_1^*,\gamma_2^* \) of \( \text{Rsd}(f,\zeta_2),\text{Rsd}(f,\zeta_1) \) respectively, such that,

\(
\int_\gamma f(s,z) = \int_{\gamma_1^*} \bullet \int_{\gamma_2^*}\\
\)


So when I write,

\(
\int_\gamma f(s,z)\,ds\bullet z = \Omega_{j} \text{Rsd}(f,\zeta_j;z)\bullet z\\
\)

I mean, there is SOME representative of each Rsd, where this is true. Think about modular arithmetic.

\(
x \equiv y\,\,(\text{mod} \,m)\\
\)

means there is SOME k such that,

\(
x-y = km\\
\)

It doesn't mean for all k.

So when I write \( \text{Rsd} \); I'm assuming we're in the modded out space. And when we pull back, there is SOME representative from this set where it is true. You can see this even in your fancy picture. Just reorganize the curves to put different singularities first.

Tbh I'm familiar with contour integration, but I did not get much out of that.

Say I take the contour for counting the number of zeros of (Riemann zeta(s))^z - z^2.

How does that get 4 compositions?

I'm also confused by compositions of integrals.

I mean compositions are used for functions.

Integrals are operators and result in values.

I do not know how to interpret composition of integrals or values, apart from maybe fractional calculus.
So I'm guessing these integrals have a parameter, hence are functions.

Maybe some examples might help.

I'm not even sure why we are doing contour integrals in the first place?
WHY does that matter to iteration theory?

The Jabotinsky ilog is defined as a derivative and hence, yeah, we can recover it by integration.
But what is the point then??

Maybe I'm confused by notations.
Maybe I need to read and think more.
But it does not resonate well at the moment.

regards

tommy1729
Reply
#10
Hey, Tommy

I apologize if this is out of nowhere. This notation was developed in a paper of mine.

Suppose \( \mathcal{S} \subseteq \mathbb{C} \) is a domain. Suppose that \( \phi(s,z) : \mathcal{S} \times \mathbb{C} \to \mathbb{C} \) is a holomorphic function. Suppose that \( \gamma \subset \mathcal{S} \) is an arc.

Then the contour integration (wrt the compositional integral) is written,

\(
\int_\gamma \phi(s,z)\,ds\bullet z\\
\)

The direct way to calculate this is to write the following. Let \( \gamma : [0,1] \to \mathcal{S} \); then first we parameterize,

\(
\int_0^1 \phi(\gamma(x),z)\gamma'(x)\,dx\bullet z\\
\)

Take a descending partition \( 1 =x_0 > x_{1} > x_{2} >...>x_{n-2} > x_{n-1} > x_n = 0 \) with sample points \( x_{j} \ge x_{j}^* \ge x_{j+1} \). Let \( \Delta \gamma_j = \gamma(x_j) - \gamma(x_{j+1}) \) and \( \gamma_j^* = \gamma(x_j^*) \). Let \( ||\Delta|| = \max_j |\Delta \gamma_j| \). Then the integral can be written,

\(
\int_0^1 \phi(\gamma(x),z)\gamma'(x)\,dx\bullet z = \lim_{||\Delta|| \to 0}\Omega_{j=0}^{n-1} z + \phi(\gamma_j^*,z)\Delta \gamma_j \bullet z\\
\)

Where in more familiar notation, if I write \( q_{jn}(z) = z + \phi(\gamma_j^*,z)\Delta \gamma_j \); this just means,

\(
\int_0^1 \phi(\gamma(x),z)\gamma'(x)\,dx\bullet z = \lim_{n\to\infty} q_{0n}(q_{1n}(...q_{(n-1)n}(z)))\\
\)


This is what I call the Riemann-Stieltjes decomposition; as it looks precisely like the Riemann-Stieltjes integral. This thing has its place in traditional analysis, where it is better known as Euler's method (though he never used contours as I am). If I write,

\(
Y_{ba}(z) = \int_a^b f(s,z)\,ds\bullet z\\
\)

Then,

\(
\frac{d}{db}Y_{ba}(z) = f(b,Y_{ba}(z))\\
Y_{cb}(Y_{ba}(z)) = Y_{ca}(z)\\
Y_{bb}(z) = z\\
\)

And it respects substitution. Where if \( u(\alpha) = a \) and \( u(\beta) = b \);

\(
\int_a^b f(s,z)\,ds\bullet z = \int_{\alpha}^\beta f(u(x),z)u'(x)\,dx\bullet z\\
\)

This is essentially alternative notation for first order differential equations. So now we're going to do contour integration, but with first order differential equations rather than with a primitive. What we've been talking about here is closed contours.
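
To make the decomposition above tangible, here is a minimal Python sketch of mine (assuming numpy; it only illustrates the partition formula, it is not the method used in the paper): it builds the finite composition \( q_{0n}(q_{1n}(\dots q_{(n-1)n}(z)))\) along a parameterized arc and compares it with the closed form \( z\,e^{\int_\gamma p(s)\,ds} \) from the table below.

Code:
import numpy as np

def comp_contour_integral(phi, gamma, z, n=20000):
    # Riemann-Stieltjes / Euler decomposition of  int_gamma phi(s, z) ds . z
    # gamma : [0, 1] -> C parameterizes the arc; descending partition 1 = x_0 > ... > x_n = 0
    xs = np.linspace(1.0, 0.0, n + 1)
    w = z
    for j in range(n - 1, -1, -1):                 # innermost factor first: j = n-1, ..., 0
        dgamma = gamma(xs[j]) - gamma(xs[j + 1])   # Delta gamma_j
        w = w + phi(gamma(xs[j + 1]), w) * dgamma  # q_jn(w) = w + phi(gamma_j*, w) * Delta gamma_j
    return w

phi   = lambda s, z: s * z                  # phi(s, z) = p(s) z with p(s) = s
gamma = lambda x: (1 + 1j) * x              # straight arc from 0 to 1 + i
z0 = 0.5 + 0.2j
approx = comp_contour_integral(phi, gamma, z0)
exact  = z0 * np.exp((1 + 1j) ** 2 / 2)     # z * exp(int_gamma s ds) = z * exp(i)
assert abs(approx - exact) < 1e-3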



If \( \phi(s,z) : \mathcal{S} \times \mathbb{C} \to \mathbb{C} \) is holomorphic, and \( \mathcal{S} \) is simply connected, then for all Jordan curves \( \gamma \),

\(
\int_\gamma \phi(s,z)\,ds\bullet z = z\\
\)


Which is the equivalent of Cauchy's integral theorem.

I'll attach here a table of integrations for some simple cases. Let \( p \) be holomorphic.

\(
\int_\gamma p(s) z \,ds\bullet z = z e^{\int_\gamma p(s)\,ds}\\
\int_\gamma p(s)z^2 \,ds\bullet z = \frac{1}{\frac{1}{z} - \int_\gamma p(s)\,ds}\\
\int_\gamma p(s)z^3 \,ds\bullet z = \frac{1}{\sqrt{\frac{1}{z^2} - 2\int_\gamma p(s)\,ds}}\\
\)
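
In case it helps anyone following along (my unpacking, not quoted from the paper): the first entry is just the solution of a linear ODE read off the definition. Parameterizing \( \gamma \), the compositional integral solves \( y'(x)=p(\gamma(x))\gamma'(x)\,y(x) \) with \( y(0)=z \), which is separable, so

\(
y(1) = z\,\exp\Big(\int_0^1 p(\gamma(x))\gamma'(x)\,dx\Big) = z\, e^{\int_\gamma p(s)\,ds}.\\
\)

The \( z^2 \) and \( z^3 \) rows follow the same way from the Bernoulli equations \( y'=p\,y^2 \) and \( y'=p\,y^3 \).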


And for example, if we take the unit disk as our contour \( \gamma \) and we let \( |\zeta| < 1 \); we get,

\(
\int_\gamma \frac{p(s)z}{s-\zeta}\,ds\bullet z = z e^{2\pi i p(\zeta)}\\
\int_\gamma \frac{p(s)z^2}{s-\zeta}\,ds\bullet z = \frac{1}{\frac{1}{z} - 2\pi i p(\zeta)}\\
\int_\gamma \frac{p(s)z^3}{s-\zeta} \,ds\bullet z = \frac{1}{\sqrt{\frac{1}{z^2} - 4\pi i p(\zeta)}}\\
\)


I wrote an 80 page treatise analyzing these objects. Mphlee and I are discussing a manner of classifying conjugate classes of holomorphic functions. Which, naively, one would write,

\(
[F,G] = \{f \,|\, f(F(z)) = G(f(z))\}\\
\)

In this paper I created a class of these functions by using contour integration.




Also, it's important to note that this is a STRICT generalization of Cauchy's contour integration. If I take \( p(s) \) which is constant in \( z \) we get,

\(
\int_\gamma p(s)\,ds\bullet z = z + \int_\gamma p(s)\,ds\\
\)

And now the algebra reduces to the abelian algebra of contour integration as you're used to it. Essentially, you can think of this as non-abelian contour integration :)
Reply

