Holomorphic semi operators, using the beta method
#1
Hey everyone! Some more info dumps!

I haven't talked much about holomorphic semi-operators in a long time. For this brief exposition I'm going to use the following notation:

\[
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
\]

Where we have the identity: \(x <k> (x <k+1> y) = x <k+1> (y+1)\). Good ol' fashioned hyper-operators.
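For concreteness, the integer ranks and the identity can be sanity-checked numerically; this is just a throwaway sketch (the function name `hyper` is mine, not standard notation):

```python
def hyper(x, k, y):
    # integer ranks only: <0> addition, <1> multiplication, <2> exponentiation
    if k == 0:
        return x + y
    if k == 1:
        return x * y
    if k == 2:
        return x ** y
    raise ValueError("only ranks 0, 1, 2 are defined here")

# the hyper-operator identity x <k> (x <k+1> y) = x <k+1> (y + 1)
for k in (0, 1):
    for x in (2.0, 3.0):
        for y in (2.0, 4.0):
            assert abs(hyper(x, k, hyper(x, k + 1, y)) - hyper(x, k + 1, y + 1)) < 1e-9
```

The whole game of semi-operators is filling in non-integer \(k\) while keeping that loop's assertion true.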

Now there exists a really old thread on here where, using fatou.gp, you could get really close to a solution of semi-operators. Let \(b \in \mathfrak{S}\) be in the interior of the Shell-Thron region. Let \(\exp/\log\) be base \(b\). Let \(\omega\) be the assigned fixed point.

Then:

\[
x <s> \omega = \exp^{\circ s}(\log^{\circ s}(x) + \omega)\\
\]

Which is holomorphic and allows us to solve for all \(\omega \pm k\) for all \(s\). Now, the idea is that we have to solve implicit equations in \(\log\). I've never had much familiarity with this since I've been investigating \(\beta\), but it should be doable on the following domain: take all forward and backward iterates of \(\omega \pm k\) for \(\omega \in \mathcal{W}\), which is the domain of the fixed points. You should be able to construct an implicit solution to the equation:
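At integer \(s\) this formula really does collapse to addition, multiplication, and exponentiation, precisely because \(\log_b(\omega) = \omega\) when \(b^\omega = \omega\). A quick numerical check with the classic base \(b = \sqrt{2}\), \(\omega = 2\) (the helper name `op` is mine):

```python
import math

b = math.sqrt(2)   # a Shell-Thron base
omega = 2.0        # its attracting fixed point: b**omega == omega

def log_b(z):
    return math.log(z) / math.log(b)

def op(x, s):
    # x <s> omega = exp_b^{\circ s}(log_b^{\circ s}(x) + omega), integer s only
    z = x
    for _ in range(s):
        z = log_b(z)
    z += omega
    for _ in range(s):
        z = b ** z
    return z

x = 3.0
assert abs(op(x, 0) - (x + omega)) < 1e-9   # s = 0: addition
assert abs(op(x, 1) - x * omega) < 1e-9     # s = 1: multiplication, since log_b(omega) = omega
assert abs(op(x, 2) - x ** omega) < 1e-9    # s = 2: exponentiation
```

The hard part, of course, is everything strictly between those three integer values of \(s\).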

\[
x <s> (x<s+1>y) = x <s+1> (y+1)\\
\]

For all \(x \in \mathbb{C}/\mathcal{E}\) and \(y \in \mathcal{W} + \mathbb{Z}\)--where \(\mathcal{E}\) is measure zero in \(\mathbb{R}^2\).


I mean, this problem is really solved if you think of it implicitly. We are just varying \(\mu,\lambda\) until we find a solution to the above equation while we freely move \(s\). This is very fucking difficult to do. I have not done it, as this would require a good 20 pages of work, but it is definitely possible. I may come back to this, but for the moment my brain is switching to PDE/ODE territory, and this type of research is secondary.

Regards, James
Reply
#2
Hi, had literally zero time in the last three months to write or study, but I continue to read all the new exciting posts.
Said that, I think I have a hole in my memory, but I remember something of your 2010/2011 work on semi-operators, i.e. non-integer hyper-operations. One of those, the cheta one, was originally a piecewise fix of Bennet's hyperoperations (aka distributive hyperoperations) but extended to the reals.
Using the superfunction of exp via beta method to build a continuous spectrum of homomorphic abelian groups extending Bennet's sequence seems really interesting.

But here you are doing something different. Since my mind is a bit foggy lately I can't follow it with ease. How exactly can you obtain the Goodstein equation while holding true, for \(\omega\) an exp-fixed-point, the following equation \[x\,<s>\,\omega=\exp^s(\log^s(x)+\omega)?\quad\quad (*)\]

I know you can't write the details right now but even abstractly I don't see the intended direction of the implication.

Let \(\odot_s\) be the standard Bennet hyperoperations for a fixed base \(b\) and extended to the real ranks \(s\) using a tetration function. If we have a sequence of operators \(<s>\) satisfying the property \( (*) \) then why don't we have

\[
\begin{align}
x\,<s>\,\omega&=\exp^s(\log^s(x)+\omega)\\
&=\exp^s(\log^s(x)+\log^s(\omega))\\
&=x \odot_s \omega\\
\end{align}\]?

If the above is true how can \(<s>\) also satisfy the Goodstein equation, i.e. \(<s+1>S=<s><s+1>\)? Isn't \(<3>\) different from tetration?

Note: we have however a chain of holomorphic Goodstein equations. Observe that
\[
\begin{align}
x\odot_{s+1} ({}^{s}b\odot_s y)&=(x\odot_{s+1}{}^{s}b) \odot_s(x \odot_{s+1} y)\\
&=\exp^{s+1}(\log^{s+1}(x)+\log^{s+1}({}^{s}b)) \odot_s(x \odot_{s+1} y)\\
&=x \odot_s(x \odot_{s+1} y)\\
\end{align}\]

Since \({\bf 1}_{0}=0\); \({\bf 1}_{1}=1\); \({\bf 1}_{2}=b\); and \({\bf 1}_{s}=\exp_b^s(0)={}^{s-1}b\); we can define the homomorphic image of the successor as \({\rm succ}_{s+1}(y):={\bf 1}_{s+1}\odot_s y\) and write the previous as \[x\odot_{s+1} {\rm succ}_{s+1}(y)=x \odot_s(x \odot_{s+1} y)\]
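The integer-rank instances of this homomorphic Goodstein chain are easy to verify numerically; `bennett` and `one` below are throwaway names of mine for \(\odot_s\) and \({\bf 1}_s\) at ranks \(s = 0, 1, 2\) (any base \(b > 1\) works at integer ranks):

```python
import math

b = 2.0   # any base > 1 is fine for the integer-rank check

def log_b(z):
    return math.log(z) / math.log(b)

def bennett(x, s, y):
    # Bennett's commutative hyperoperations at integer ranks s = 0, 1, 2:
    # x (.)_s y = exp_b^s( log_b^s(x) + log_b^s(y) )
    lx, ly = x, y
    for _ in range(s):
        lx, ly = log_b(lx), log_b(ly)
    z = lx + ly
    for _ in range(s):
        z = b ** z
    return z

one = {0: 0.0, 1: 1.0, 2: b}   # the rank units 1_s = exp_b^{\circ s}(0)

# x (.)_{s+1} succ_{s+1}(y) = x (.)_s ( x (.)_{s+1} y )
for s in (0, 1):
    for x in (2.0, 3.0):
        for y in (2.0, 5.0):
            succ = bennett(one[s + 1], s, y)
            lhs = bennett(x, s + 1, succ)
            rhs = bennett(x, s, bennett(x, s + 1, y))
            assert abs(lhs - rhs) < 1e-9
```

At \(s=0\) this is just \(x(1+y) = x + xy\), and at \(s=1\) it is the distributivity of \(\odot_2\) over multiplication.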

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#3
(03/23/2022, 10:40 PM)MphLee Wrote: Hi, had literally zero time in the last three months to write or study, but I continue to read all the new exciting posts.
Said that, I think I have a hole in my memory, but I remember something of your 2010/2011 work on semi-operators, i.e. non-integer hyper-operations. One of those, the cheta one, was originally a piecewise fix of Bennet's hyperoperations (aka distributive hyperoperations) but extended to the reals.
Using the superfunction of exp via beta method to build a continuous spectrum of homomorphic abelian groups extending Bennet's sequence seems really interesting.

But here you are doing something different. Since my mind is a bit foggy lately I can't follow it with ease. How exactly can you obtain the Goodstein equation while holding true, for \(\omega\) an exp-fixed-point, the following equation \[x\,<s>\,\omega=\exp^s(\log^s(x)+\omega)?\quad\quad (*)\]

I know you can't write the details right now but even abstractly I don't see the intended direction of the implication.

Let \(\odot_s\) be the standard Bennet hyperoperations for a fixed base \(b\) and extended to the real ranks \(s\) using a tetration function. If we have a sequence of operators \(<s>\) satisfying the property \( (*) \) then why don't we have

\[
\begin{align}
x\,<s>\,\omega&=\exp^s(\log^s(x)+\omega)\\
&=\exp^s(\log^s(x)+\log^s(\omega))\\
&=x \odot_s \omega\\
\end{align}\]?

If the above is true how can \(<s>\) also satisfy the Goodstein equation, i.e. \(<s+1>S=<s><s+1>\)? Isn't \(<3>\) different from tetration?

Note: we have however a chain of holomorphic Goodstein equations. Observe that
\[
\begin{align}
x\odot_{s+1} ({}^{s}b\odot_s y)&=(x\odot_{s+1}{}^{s}b) \odot_s(x \odot_{s+1} y)\\
&=\exp^{s+1}(\log^{s+1}(x)+\log^{s+1}({}^{s}b)) \odot_s(x \odot_{s+1} y)\\
&=x \odot_s(x \odot_{s+1} y)\\
\end{align}\]

Since \({\bf 1}_{0}=0\); \({\bf 1}_{1}=1\); \({\bf 1}_{2}=b\); and \({\bf 1}_{s}=\exp_b^s(0)={}^{s-1}b\); we can define the homomorphic image of the successor as \({\rm succ}_{s+1}(y):={\bf 1}_{s+1}\odot_s y\) and write the previous as \[x\odot_{s+1} {\rm succ}_{s+1}(y)=x \odot_s(x \odot_{s+1} y)\]

Hey, Mphlee! Great to hear from you again.

I should've clarified that, first of all, this is just intended for \(0 \le \Re(s) \le 2\). This would not give tetration, or do the job of finding in-between tetration, in any meaningful way. This is why I don't even like this solution, but it is doable. Essentially, you just run Bennet's commutative hyperoperations, but paste them together in a meaningful way to give a hyper-operator structure.

Essentially you would be pasting together a large swath of functions starting from the holomorphic functions:

\[
x \odot_{s,\mu} \omega(\mu) = F(x,s,\mu)\\
\]

This initially creates a holomorphic function for \(b=e^\mu\) in the Shell-Thron region. Then we describe \(x<s>\omega = F(x,s,\mu)\). We can delineate an equivalence class for the Goodstein functional equation so we have a bunch of functions that \(x<s>\omega \pm k\) must equal for \(1 \le \Re(s) \le 2\). Now we play the implicit function game...

For brevity's sake, let's assume we can find where \(F(s+1) \in \mathcal{W}\) (the domain of fixed points). Then \(x <s> F\) is a valid operation--it can now be assigned the value \(x<s+1>\omega +1 = x<s> \omega'\).


Notice I haven't talked about \(\lambda\), or what could equivalently be done with a \(\theta\) mapping. The goal now is to make this whole mess holomorphic. We are going to, essentially, continuously perturb the iteration of \(\log^{\circ s}\). As every type of iteration of \(\log\) works fine for \(0,1,2\), we don't break the initial Goodstein functional equation.

Again, I haven't worked out the details, but the final expression should look something like this (remember, \(b = e^\mu\)):

\[
x<s>\omega = \exp_b^{\circ s + \theta(s,x,\omega)}(\log_b^{\circ s + \theta(s,x,\omega)}(x) + \omega)
\]

Where \(\theta(0,x,y) = 0\) and \(\theta(s+1,x,y) = \theta(s,x,y)\). This expression is only viable for \(\omega\) a fixed point though; we then have to nest solutions for \(\omega \pm k\), and we are choosing \(\theta\) in just the right way so that "the ends line up," essentially.

We have so much god damn freedom moving \(\theta\), that this should be very doable. I can't fill in the cracks. Again, I'm switching gears from iteration theory; so this is just an info dump on what I've been mulling over lately.

Hopefully this makes sense, I'm kind of just shooting at the wall and seeing what sticks, lol. But I do believe this holds some weight. Think of it as taking all the Bennet hyperoperations, and finding a path along them where the typical Goodstein functional equation works.

Regards, James

As an important point, it's helpful to think about \(1 <s> \omega\) for \(s\approx 1\), and how this relationship plays out when you try to add a theta mapping. Honestly, the implicit function theorem should take care of everything... I mean, we're just trying to find \(1<s-1>(1<s>\omega) = 1<s>(\omega+1)\), which has a point of solution at \(s=1\), and we're just trying to find the geodesic which continues to satisfy this. On the real line this would constitute looking in between \(-e,e\), and checking we're glued together properly for \(-e,e-1\).
Reply
#4
It's hard to follow for me. It is my fault. I'm not even remotely familiar with perturbation methods and those theta mappings. Some points are really obscure: again my fault. Let's see if you can drop some candies for me.

Quote:I should've clarified that, first of all, this is just intended for \(0 \le \Re(s) \le 2\). This would not give tetration, or do the job of finding in-between tetration, in any meaningful way. This is why I don't even like this solution, but it is doable. Essentially, you just run Bennet's commutative hyperoperations, but paste them together in a meaningful way to give a hyper-operator structure.

Ok, let's start: how far is this from this

\[\begin{align}x<s>y&=x\odot_s y &&0\le \Re(s)\le 2\\
x<s+1>(y+1)&=x<s>(x<s+1>y)&&{\rm otherwise}\end{align}\]
modulo some perturbation business you use to force the Goodstein equation over that domain?

The following point is particularly obscure. We can say that \(x<s>\omega\) are a family of functions \({\mathbb C}/{\mathcal E}\times \mathcal{W}\to \mathbb C\), as the rank varies, where \(\mathcal{W}\) contains all the fixed points associated to \(\mu\) s.t. \(e^\mu\) is in the ST-region, i.e., if I remember well, when its infinite tower converges (to the fixed point). You tell me to compute them by \(F(x,s,\mu)\), a function that we know how to compute using a tetration function with base \(b=e^\mu\).

Then what do you mean by

Quote:We can delineate an equivalence class for the Goodstein functional equation so we have a bunch of functions that \(x<s>\omega \pm k\) must equal for \(1 \le \Re(s) \le 2\). Now we play the implicit function game...

Also, is \(F(s+1)\) intended to be \(F(x,s+1,\mu)\)?

Quote:For brevity's sake, let's assume we can find where \(F(s+1) \in \mathcal{W}\) (the domain of fixed points). Then \(x <s> F\) is a valid operation--it can now be assigned the value \(x<s+1>\omega +1 = x<s> \omega'\).

The starting point, if I'm following you, is to extend \({\mathbb C}/{\mathcal E}\times \mathcal{W}\to \mathbb C\) outside \(\mathcal{W}\), somewhere in \(\mathcal{W}+\mathbb Z\), whenever \(F(x,s+1,\mu)\) still lands in \(\mathcal{W}\). But what if the new fixed point is associated with another ST-base? The holomorphic semi-operators are expected to agree across all bases \(b\) only for rank 0 (addition) and rank 1 (multiplication) but to "ramify" for the other ranks. So I don't understand how all the pieces can fit.

I stop here because I don't have time to parse the theta mapping part atm.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#5
(03/24/2022, 11:13 AM)MphLee Wrote: It's hard to follow for me. It is my fault. I'm not even remotely familiar with perturbation methods and those theta mappings. Some points are really obscure: again my fault. Let's see if you can drop some candies for me.

Quote:I should've clarified that, first of all, this is just intended for \(0 \le \Re(s) \le 2\). This would not give tetration, or do the job of finding in-between tetration, in any meaningful way. This is why I don't even like this solution, but it is doable. Essentially, you just run Bennet's commutative hyperoperations, but paste them together in a meaningful way to give a hyper-operator structure.

Ok, let's start: how far is this from this

\[\begin{align}x<s>y&=x\odot_s y &&0\le \Re(s)\le 2\\
x<s+1>(y+1)&=x<s>(x<s+1>y)&&{\rm otherwise}\end{align}\]
modulo some perturbation business you use to force the Goodstein equation over that domain?

The following point is particularly obscure. We can say that \(x<s>\omega\) are a family of functions \({\mathbb C}/{\mathcal E}\times \mathcal{W}\to \mathbb C\), as the rank varies, where \(\mathcal{W}\) contains all the fixed points associated to \(\mu\) s.t. \(e^\mu\) is in the ST-region, i.e., if I remember well, when its infinite tower converges (to the fixed point). You tell me to compute them by \(F(x,s,\mu)\), a function that we know how to compute using a tetration function with base \(b=e^\mu\).

Then what do you mean by

Quote:We can delineate an equivalence class for the Goodstein functional equation so we have a bunch of functions that \(x<s>\omega \pm k\) must equal for \(1 \le \Re(s) \le 2\). Now we play the implicit function game...

Also, is \(F(s+1)\) intended to be \(F(x,s+1,\mu)\)?

Quote:For brevity's sake, let's assume we can find where \(F(s+1) \in \mathcal{W}\) (the domain of fixed points). Then \(x <s> F\) is a valid operation--it can now be assigned the value \(x<s+1>\omega +1 = x<s> \omega'\).

The starting point, if I'm following you, is to extend \({\mathbb C}/{\mathcal E}\times \mathcal{W}\to \mathbb C\) outside \(\mathcal{W}\), somewhere in \(\mathcal{W}+\mathbb Z\), whenever \(F(x,s+1,\mu)\) still lands in \(\mathcal{W}\). But what if the new fixed point is associated with another ST-base? The holomorphic semi-operators are expected to agree across all bases \(b\) only for rank 0 (addition) and rank 1 (multiplication) but to "ramify" for the other ranks. So I don't understand how all the pieces can fit.

I stop here because I don't have time to parse the theta mapping part atm.


Yes! You are right in all of your questions. Everything is exactly as you described. You're just forgetting:

\[
\log^{\circ s + \theta}(x) + \omega = \log^{\circ s}(x) + \theta_2 + \omega
\]
So as we move \(\omega\) we move \(\theta\); we can move \(\mu\) just as well here. And they'll satisfy a similar small change which can respect Goodstein.



By the part about equivalence classes, I meant we can take all the values \(x<s+1>(\omega+1)\) could equal, based on the fact it equals \(x <s> (x<s+1>\omega)\). Now we are going to try to glue the domains together for \(\mathcal{W}\) and \(\mathcal{W}-1\), so that their intersection is holomorphic. Consider \(s \approx 1\): we are just checking that:

\[
\begin{align}
1<s-1>(1<s>\omega) &= 1<s>(\omega+1)\\
1<0>(1<1>\omega) &= \omega + 1\\
\end{align}
\]
Essentially we check that this function can be holomorphic for \(\omega \in \mathcal{W} \cap \left(\mathcal{W} - 1\right)\) for \(s \approx 1\). Once you can show that, you can extend indefinitely in either direction up to a value \(c\), for \(|\Im(\omega)|\le c\). You do that in a neighborhood of \(s \approx 1\), and you do it for \(s \approx 0,2\) as well. Then doing it in between would definitely be: hope for the best using your theta mappings Shy


I'm gonna stick for the moment with \(1<s> \omega\) for \(s\approx 1\). I think I could explain this better without going off the deep end, which you'd have to do for \(x <s> y\); lmao!
Reply
#6
Thank you for your explanations.
I gave it some time to sink in, but it's not enough to let me add something meaningful about it.
At the intuitive level I guess I'm close to understanding the idea; I'd be fully able to understand only if I manage to get the time to formalize your language set-theoretically (or even better, category-theoretically). Too bad I'm far from having the needed time atm.

I like this idea of the info dump threads. Just to spread some ideas in case the situations of life make it impossible for us to polish or further pursue them. So there is a 1% chance they will not be lost forever.
Maybe I should do a couple of those fast info-dumps too, because I can see clearly the possibility that I'll not be able to work on some ideas in the coming future, and maybe never again*; I hope not.

*I'm safe and in good health, but lack of time and the upcoming times seem to be pretty dark economically and existentially here in the EU, and, I guess, in most of the comfy, sleepy, occidental world.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#7
Okay, Mphlee,

I'll go very slow. We are going to set, universally, \(b = e^{\mu}\), \(e^{\mu \omega} = \omega\), and \(0 < |\mu\omega|<1\)--we're going to always assume these relations, but it can get tricky. This is intended to mean that \(b\) is in the Shell-Thron region, and \(\omega\) is its fixed point. We are going to call \(\mathcal{W}\) the domain of said fixed points. So, for example, \(\mathcal{W}\cap \mathbb{R} = (1/e,e)\), coinciding with when \(b \in (e^{-e},e^{1/e})\). I'm just going to investigate \(1<s> \omega\) for \(\omega, \omega + 1 \in \mathcal{W}\); just to set the scene.
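The bookkeeping between \(\omega\), \(\mu\), and \(b\) is easy to check directly; here is a small sketch (function name mine) confirming \(b^\omega = \omega\) and that the real slice of \(\mathcal{W}\) matches the classical base range:

```python
import math

# mu(omega) = log(omega)/omega gives e^{mu * omega} = omega, i.e. omega is a
# fixed point of exp_b for b = e^mu.  Sketch; the name "mu" is mine.
def mu(w):
    return math.log(w) / w

for w in (0.5, 1.5, 2.0, 2.5):
    b = math.exp(mu(w))
    assert abs(b ** w - w) < 1e-12          # omega really is a fixed point of b**z

# Real slice: the endpoints omega = 1/e and omega = e map exactly to the
# classical base range (e^{-e}, e^{1/e}), and the multiplier |mu * omega|
# = |log omega| is < 1 strictly in between, so W intersect R = (1/e, e).
assert abs(math.exp(mu(1 / math.e)) - math.e ** -math.e) < 1e-12
assert abs(math.exp(mu(math.e)) - math.e ** (1 / math.e)) < 1e-12
```

So "moving \(\omega\)" and "moving \(\mu\)" really are interchangeable, which is leaned on below.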

This is going to be a long post, and I'll skip a few steps, but we're going to solve the problem in stages. Please be aware, that the actual mechanics I'd be using don't appear until the very end of this post.



This is the first step, which gets us a third of the way....



Let us define a modified version of Bennet's commutative hyper operators:

\[
x \oplus_{s,\theta,\mu} y = \exp_b^{\circ s+\theta}\left(\log_b^{\circ s + \theta}(x) + \log_b^{\circ s + \theta}(y)\right)\\
\]

Here, \(\theta\) is a holomorphic 1-periodic function which satisfies \(\theta(0) = 0\). We know now that,

\[
x\oplus_{s,\theta,\mu} \omega = \exp_b^{\circ s+\theta}\left(\log_b^{\circ s + \theta}(x) + \omega\right)\\
\]

Is a holomorphic function in \(s\), but also interpolates \(x+\omega,\,x\omega,\,x^{\omega}\) seamlessly. It does so regardless of \(\theta\), so long as \(\mu\) is associated with \(\omega\) in the manner above. We are going to let \(\mu\) move freely though.
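None of the above pins down how a fractional iterate \(\exp_b^{\circ s}\) is actually computed. As an illustrative stand-in only (this is the classical Koenigs linearization at the attracting fixed point, not the beta-method iteration intended in this thread; all names are mine):

```python
import cmath

# Illustrative: fractional iterates of exp_b near its attracting fixed point
# via Koenigs linearization.  NOT the beta method; just one standard choice.
b = 2 ** 0.5                  # base sqrt(2), inside the Shell-Thron region
omega = 2.0                   # attracting fixed point: b**2 == 2
lam = cmath.log(b) * omega    # multiplier lambda = log(omega), |lam| < 1

def f(z):
    return b ** z

def f_inv(z):
    return cmath.log(z) / cmath.log(b)

def phi(z, n=40):
    # Koenigs coordinate: phi(z) = lim lam^(-n) * (f^n(z) - omega)
    for _ in range(n):
        z = f(z)
    return (z - omega) / lam ** n

def psi(w, n=40):
    # inverse Koenigs coordinate: psi(phi(z)) = z near omega
    z = omega + lam ** n * w
    for _ in range(n):
        z = f_inv(z)
    return z

def exp_iter(z, s):
    # exp_b^{\circ s}(z) = psi(lam^s * phi(z)), valid near the fixed point
    return psi(lam ** s * phi(z))
```

Near \(\omega\) this recovers \(b^z\) at \(s=1\) and the half-iterate composes with itself back to \(b^z\); the beta method would swap `phi`/`psi` for its own coordinates, with different periodic behavior in \(s\).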

Now, let us define the function:

\[
G(s,\theta,\mu) = \left(1\oplus_{s-1,\theta,\mu} \left(1\oplus_{s,\theta,\mu} \omega\right) \right) - 1 \oplus_{s,\theta,\mu(\omega+1)}(\omega+1)\\
\]

For \(s=1\) we get that \(G = 0\), no matter what \(\theta\) or \(\mu\) do. We are trying to find an implicit function in \(\mu\) and \(\theta\) such that this result still holds locally. The implicit function theorem literally does all the work.

\[
G(s,\theta(s,\omega),\mu) = 0\\
\]

Such that \(\theta(1,\omega) = 0\).

And voila, we've found a solution for \(|s-1| < \delta\), in which we can call:

\[
1 <s> \omega = 1 \oplus_{s,\theta(s,\omega),\mu(s,\omega)} \omega\\
\]

Which satisfies the Goodstein equation for \(\omega \in \mathcal{W}\cap(\mathcal{W}-1)\).


This absolutely gets us holomorphy in a neighborhood of multiplication/addition/exponentiation, for \(\omega \in \mathcal{W}\cap(\mathcal{W}-1)\). To extend further so that we get \(\mathcal{W} \pm k\) would require repeating the above argument for each iteration. The domains would be very sensitive here. This would be a bitch of a proof by induction, but it should follow similarly.

This requires a bit more finesse than what I've written here, because we'd need to consider \(\theta\) as a parameter which is 1-periodic and zero at naturals, but locally this is not a problem; it's just \(\theta(1) = 0\) and we're only talking about \(|s-1| < \delta\); the period never pops up.



I can't imagine what it's like in Europe right now. You guys have my greatest sympathies. The worst we have here is increased food prices and ridiculous gas prices--bad inflation all around. But there's zero threat of a looming war, and for that my heart goes out to you. Heart is with you Mphlee. Although, historically, if war breeds something, it breeds deep intellectual breakthroughs (especially in Europe); just get arrested like Weil for draft dodging and write your best work in prison Shy . Jokes aside, keep well and stay strong Mphlee.

Regards, James




This is the second step; paying attention to \(\mu\)...

I realize I've left a good amount of freedom with \(\mu\), but it's for good reason. We can instead write it as a function \(\mu: \omega \mapsto \mu(\omega)\). So instead we write our \(G\) like:

\[
G(s,\theta,\omega) = \left(1\oplus_{s-1,\theta,\mu(\omega)} \left(1\oplus_{s,\theta,\mu(\omega)} \omega\right) \right) - 1 \oplus_{s,\theta,\mu(\omega+1)}(\omega+1)\\
\]

This version asks solely for a function \(\theta(s,\omega)\), and is a slightly clearer manner of writing what I wrote above; the above has a bit too many free variables. But in this case, all we are implicitly finding is a function:

\[
\theta(s,\omega) : \{s \in \mathbb{C}\,:\,|s-1| < \delta\}\times \left(\mathcal{W}\cap(\mathcal{W}-1)\right) \to \{\theta \in \mathbb{C}\,:\,|\theta| < \delta'\}\\
\]

Such that \(G(s,\theta(s,\omega),\omega) = 0\). Nothing more, nothing less. This is exactly what I wrote above, but it may be a tad difficult to follow without explicitly describing \(\mu\)'s dependence on \(\omega\). In short, we can think of moving \(\mu\) and moving \(\omega\) synonymously. I just moved \(\mu\) instead of \(\omega\) in the above.



The third step is setting up a similar equation which respects Goodstein's equation inductively...


It's also important to note that this isn't the exact answer; this would just be proof that this method of thought can lead us to the answer. The correct function you would actually want is:

\[
G(s,\theta,\omega) = \left(1\oplus_{s-1,\theta,\mu\left(1\oplus_{s,\theta,\mu(\omega)} \omega\right)} \left(1\oplus_{s,\theta,\mu(\omega)} \omega\right) \right) - 1 \oplus_{s,\theta,\mu(\omega+1)}(\omega+1)\\
\]

Which has a second recursive call in \(\mu\), which affords us a good deal more convenience when dealing with proving \(\mathcal{W} + \mathbb{Z}\). Nonetheless the mathematics is emboldened by what was written above; it's just not gonna be perfect. This \(G\) is the \(G\) you really want, which still satisfies:

\[
G(1,\theta,\omega) = 0\\
\]

So finding, \(\theta(s,\omega)\) as above which satisfy:

\[
G(s,\theta(s,\omega),\omega) = 0\\
\]

Is not far off at all.

This would mean, as I first wrote, that:

\[
1 < s> \omega = 1 \oplus_{s,\theta,\mu(\omega)} \omega\\
\]

And through the above implicit function theorem, for \(|s-1| < \delta\) and \(\omega \in \mathcal{W}\cap(\mathcal{W}-1)\):

\[
1<s-1>1<s>\omega = 1<s> \omega+1\\
\]

It satisfies this so long as \(1<s>\omega \in \mathcal{W}\) and \(\omega + 1 \in \mathcal{W}\). And the exact formula is the above implicit functions.



The ENTIRETY of this argument is based on the fact \(\log^{\circ s + \theta}(x) = \log^{\circ s}(x) + \theta'\), which allows us to move the perfect line between \(x + \omega, x\omega, x^{\omega}\), in an implicit manner to allow us to compare operations between \(\omega\) and another \(\omega'\).

I should also disclaim: I do not know how to extend this to \(0 \le \Re(s) \le 2\); I am just sure of \(|s-1| < \delta\) and, by comparison, \(|s|,|s-2| < \delta\). My argument to extend it further would involve Fourier analysis, and I see how I might do it, but I'm not too sure yet...

The exact domain on which this would produce holomorphy for \(1<s>\omega\), as we vary \(\omega\), would be \(|\Im(\omega)| \le c\), where \(c = \sup \Im \left(\mathcal{W}\cap(\mathcal{W}-1)\right)\). This means this would only work on a strip of the complex plane. The height of the strip would be exactly \(c = \Im(a)\), where \(a\) is on the border of the Shell-Thron region and \(a+1\) is also on the border of the Shell-Thron region. This is a constant I cannot for the life of me identify with anything else. This would definitely be a new constant.

\[
\begin{align}
a &= \lim_{n\to\infty} \exp_b^{\circ n}(0)\\
|\log(b)a| &= 1\\
a+1 &= \lim_{n\to\infty} \exp_{d}^{\circ n}(0)\\
|\log(d)(a+1)| &= 1\\
c &= |\Im(a)| = |\Im(a+1)|\\
\end{align}
\]
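Since the interior of \(\mathcal{W}\) is cut out by \(|\mu\omega| = |\log \omega| < 1\), the boundary curve is \(\omega = e^{e^{i\varphi}}\), and the constant \(c\) can be approximated numerically by bisecting for the \(\varphi\) where \(\omega + 1\) also lands on the boundary. A rough sketch (all names mine):

```python
import cmath
import math

# Boundary of the fixed-point domain W: |log(omega)| = 1, parametrized by
# omega = exp(exp(i*phi)).  Bisect for the phi where omega + 1 ALSO lies on
# the boundary; c is then |Im(omega)| there.  Sketch only.
def boundary_point(phi):
    return cmath.exp(cmath.exp(1j * phi))

def excess(phi):
    # > 0 when omega + 1 lies outside the fixed-point region
    return abs(cmath.log(boundary_point(phi) + 1)) - 1.0

lo, hi = 0.0, math.pi   # excess(0) > 0 (omega = e), excess(pi) < 0 (omega = 1/e)
for _ in range(80):
    mid = (lo + hi) / 2
    if excess(mid) > 0:
        lo = mid
    else:
        hi = mid

a = boundary_point((lo + hi) / 2)  # a and a + 1 both on the border
c = abs(a.imag)                    # height of the strip
```

This only probes the constant numerically; it says nothing about a closed form, which is the point being made above.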


The nested recursion allows us to talk about \(\mathcal{W}\pm k\) pretty clearly now. Literally, all we've done, Mphlee; is add theta mappings to Bennet's commutative operations; and we're playing fast and loose in a neighborhood of multiplication. To extend further would definitely be more difficult.

Nothing but love; I hope this makes a bit more sense...


To sum everything up, we can solve the equation:

\[
1<s> y\\
\]

When \(|\Im(y)| \le c\) and \(|s-1| < \delta = \delta(y)\).
Reply
#8
I also realize I haven't fully explained how to get \(|\Im(y)| < c\). This requires another trick. For \(s \approx 1\) we have that:

\[
1<s+1> y = 1 \oplus_{s+1,\theta,\mu} y \approx 1 \in \mathcal{W}\\
\]
This can be done for \(y \in \mathcal{W}\), but it can also be done outside of this domain, because \(1 \oplus_{s,\theta,\mu} y \in \mathcal{W}\) for all \(|\Im(y)| < c\). This just relies on \(1^y = 1\) regardless of \(y\). This is tricky, because now we have to talk about the repelling fixed point a bit. Let's take \(\sqrt{2}\) for a second. So we have a function:

\[
1<s> 2\\
\]
For \(s\approx 0,1,2\). We can also obtain a value for \(1<s>4\) following the exact procedure as before. It's just:

\[
1<s> 4 = \exp_{\sqrt{2}}^{\circ s + \theta}\left(\log_{\sqrt{2}}^{\circ s + \theta}(1) + 4\right)\\
\]

Where here \(\theta\) is an implicit function, constructed in the same manner as above. This allows us to do \(1<s>y\) for \(y \in \mathbb{R}^+\), because \(y^{1/y}\) has a fixed point at \(y\) (attracting or repelling or neutral), but for \(b = y^{1/y}\), we still have the above implicit functions. To extend further we'd get a maximal domain, for \(y \in \mathcal{Y} \subset\mathbb{C}\). But there'd be a restriction that \(y,y+1 \in \mathcal{Y}\)--which for a quick value gives us \(|\Im(y)| < c\). In this manner, we can always assign:

\[
\begin{align}
\mu(y) &= \log(y)/y\\
b(y) &= e^{\mu(y)}\\
\exp(\mu(y)y) &= y\\
1<s> y &= \exp_{b(y)}^{\circ s+\theta}\left(\log_{b(y)}^{\circ s+ \theta}(1) + y\right)\\
\end{align}
\]
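The \(\sqrt{2}\) example above can be checked in two lines: \(y = 2\) and \(y = 4\) give the same base, and the multiplier \(|\mu(y)\,y| = |\log y|\) tells the attracting and repelling fixed points apart (sketch, names mine):

```python
import math

# b = y**(1/y) always fixes y; the multiplier |mu(y) * y| = |log y| decides
# whether that fixed point is attracting (< 1) or repelling (> 1).
def mu(y):
    return math.log(y) / y

for y in (2.0, 4.0):
    b = math.exp(mu(y))                    # y = 2 and y = 4 both give b = sqrt(2)
    assert abs(b - math.sqrt(2)) < 1e-12
    assert abs(b ** y - y) < 1e-11         # fixed-point check

assert abs(math.log(2.0)) < 1.0            # omega = 2: attracting for sqrt(2)
assert abs(math.log(4.0)) > 1.0            # omega = 4: repelling for the same base
```

So \(1<s>4\) really does force us to work at a repelling fixed point of a base whose attracting fixed point is \(2\).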

This works identically as above, but you have to be careful now. We are going to keep \(s\approx 1\), but now we are solving:

\[
\begin{align}
1<s> \left(1<s+1>y\right) &= 1<s+1> y+1\\
\text{In a neighborhood of:}&\\
1<1> \left(1<2>y\right) &= 1<2>y+1\\
1\cdot 1^y &= 1^{y+1} = 1\\
\end{align}
\]

Which is always solvable through the implicit function theorem--it gives us a \(\theta(s)\) for \(|s-1| < \delta\). This is a more versatile form, because at \(s=1\) we are just writing an idempotent relation with \(1\). This will give us \(1<s+1> y\); to get \(1<s>y\) we just use the usual Goodstein operations in \(y\), and we get it for \(s\approx 1\). We may have trouble getting \(1<s-1> y\), but I'm not sure exactly.






..........................Now that I think about it, this result should work wherever \(y \in \mathbb{C}/\{0\}\) and \(b = y^{1/y}\) has an attracting or neutral fixed point... I believe every \(y\) should have one, but I may be wrong. (Double checked: yes, this will work for \(y \in \mathbb{C}/\{0\}\), up to branch cuts.) Essentially the domain would be:

\[
\mathcal{H} = \{y \in \mathbb{C}/\{0\}\,|\, b = e^{\log(y)/y},\,\lim_{n\to\infty}|\exp_b^{\circ n}(0)| < \infty\} = \mathbb{C}/(-\infty,0)\\
\]

(This turns out to be all of \(\mathbb{C}\)--up to a branch cut, of course...)
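The boundedness of the tower in the definition of \(\mathcal{H}\) is easy to probe numerically; a quick sketch (the helper `tower_limit` is mine), using one attracting complex example and the repelling \(y=4\) case from above:

```python
import cmath

def tower_limit(y, n=500):
    # iterate z -> b**z from 0 with b = y**(1/y); returns the tower's limit
    b = cmath.exp(cmath.log(y) / y)
    z = 0.0
    for _ in range(n):
        z = b ** z
    return z

# For y with |log y| < 1 the tower converges back to y itself...
assert abs(tower_limit(1 + 1j) - (1 + 1j)) < 1e-9
# ...while for the repelling case y = 4 (same base sqrt(2)) the tower stays
# bounded but settles on the attracting partner fixed point 2 instead.
assert abs(tower_limit(4.0) - 2.0) < 1e-9
```

So membership in \(\mathcal{H}\) only asks that the tower stays bounded, not that it lands on \(y\).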

It may be easier to switch to the real positive line at this point... We'd definitely be able to construct:

\[
1<s> y\,\,\text{for}\,\,|s|,\,|s-1|,\,|s-2|<\delta(y)\,\,\text{and}\,\,y\in\mathbb{R}^+\\
\]


Shit... I really think this does more than I thought it'd do. Jesus this definitely works even better. This info dump just info dumped on me, and I think I see it better.


I strongly encourage you to info dump Mphlee. It doesn't have to be perfect. Just stretch the legs and get a good feel for it, and release it...



I can absolutely prove this for the real positive line, now that I think about it. It's super easy.






HOLY FUCKING SHIT! THIS DOES \(\alpha <s> y\) for \(\alpha \in (1,e^{1/e})\) and \(y \in \mathbb{R}^+\)!!!! WE JUST HAVE TO RELATE TO THE FRACTIONAL CALCULUS!!!!!! OMG!!!!!!!!!!!!!!!!! Okay, I need to write a quick write up. This goes deeper than I thought. I always thought this was a half solution so I didn't give a shit. YES!!!
Reply
#9
Again, thank you for expanding and dumbing down. I'm sure it helps you to pin down the main idea but also helps me to follow.



Sadly I have not enough time to invest right now in order to provide a detailed review of this. If all of this works then we have a mystery: what is the true role of Bennett's family? What does it have to do with Goodstein's family? Those are ancient questions... they are rooted in the role of exponentiation and its relation to addition and multiplication. For example, once rewritten in group-theoretic language, the process of solving the superfunction equation, i.e. the engine that makes us able to climb Goodstein's sequence, presents itself, paired with group conjugation, as analogous to the exp-log concept. That completes a weird analogy that has driven my approach since 2012, that originated in your old post about meta-superfunctions, and that we can summarize as follows:



\[\begin{array}{ll}
{\rm Functions} & {\rm Numbers}\\
f\in G & z\in\mathbb C\\
{\rm group\,\,operation} & {\rm multiplication}\\
{\rm iteration} & {\rm exponentiation}\\
g^n & a^n\\
{[f,g]} & \log_a(b)\\
{\rm conjugation} & {\rm exponentiation}\\
f^g=gfg^{-1} & a^b\\
{\rm Goodstein's\,\,seq.} & {\rm tetration}\\
S^{H_{n+1}}=H_n & b^{{}^{n}b}={}^{n+1}b\\
\end{array}\]





At first sight the analogy seems naive and unable to resist a deeper inspection. But I have good heuristic arguments, involving category theory, that suggest there is something deep going on here. Also, if you keep in mind that when someone tries to extend the Goodstein equation to negative integers a possible set of solutions is related to the negative ranks of the Bennett hyperoperations, i.e. min-max operations, you can clearly see that there is something mysterious.



Note: Shouldn't the domain \(\mathcal H\) have \(0\) removed?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Reply
#10
(04/05/2022, 12:39 PM)MphLee Wrote: Note: Shouldn't the domain \(\mathcal H\) have \(0\) removed?


Yes you are absolutely right. The domain \(\mathcal{H}\) is essentially a domain of holomorphy of \(f(z) = z^{1/z}\) so it would exclude zero and some kind of branch cut to infinity. I definitely made a typo saying \(\mathcal{H} = \mathbb{C}/(-\infty,0)\), it should be \(\mathcal{H} = \mathbb{C}/(-\infty,0]\). You won't have holomorphy at \(0\), but you will have a continuous limit. I'm sticking to \(1<s> y\) at the moment, but this looks like it should at least work for \(\alpha<s> y\) for \(\alpha \approx 1\).



I'm currently trying to see if I can resurrect my old notes on Fourier analysis applied to semi-operators, so that we can take \(s \approx 0,1,2\) and turn it into \(0 \le \Re(s) \le 2\)--this is the real tricky part. But for the moment I have \(1<s> y\) for \(y \in \mathcal{H}\) and \(s \approx 0,1,2\). I'm trying to think of a feasible way of constructing \(\theta\) at the moment. I know it exists, thanks to the implicit function theorem, but I have no idea how to get it. I'll need to think of a limit formula, but I'm scratching my head about this.



Getting to \(\alpha \in (1,e^{1/e})\) shouldn't be hard once you have it for \(1\)...



I'm definitely going to try writing this up; but this is going to be a long project. Probably the summer...



I think the key relation we are using from Bennet is that:



\[
\begin{align}
\exp_b^{\circ s}(\log_b^{\circ s}(1) + y + \delta) &= \exp_b^{\circ s+\epsilon}(\log_b^{\circ s+\epsilon}(1) + y)\\
&= \exp_{b+\kappa}^{\circ s}(\log_{b+\kappa}^{\circ s}(1) + y)\\
\end{align}
\]



This allows us to locally change the base of \(\exp,\log\) without affecting their values at \(s=0,1,2\), while perturbing the value of \(y\) (at least locally). Now we're just looking for a geodesic (lol, a path through all the values) which manages to solve Goodstein.
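For concreteness, here is a minimal numerical sketch (entirely my own, not the thread's fatou.gp / beta-method machinery) of the Bennet-style operation \(x \oplus_s y = \exp_b^{\circ s}(\log_b^{\circ s}(x) + y)\) for a real base \(1 < b < e^{1/e}\), using the naive Schröder/Koenigs linearization at the attracting real fixed point. It only works inside the fixed point's basin of attraction, but it reproduces \(x \oplus_0 y = x + y\) and \(x \oplus_1 y = x\, b^y\).

```python
import math

def bennet(x: float, y: float, s: float,
           b: float = math.sqrt(2), n: int = 40) -> float:
    """x (+)_s y = exp_b^s(log_b^s(x) + y), via Koenigs linearization.

    Only valid when x (and log_b^s(x) + y) lie in the basin of the
    attracting real fixed point of exp_b; assumes 1 < b < e**(1/e).
    """
    # attracting real fixed point w = b**w (w = 2 for b = sqrt(2))
    w = 1.0
    for _ in range(500):
        w = b ** w
    lam = math.log(b) * w              # multiplier of exp_b at w, |lam| < 1

    def exp_it(z: float, t: float) -> float:
        # t-th fractional iterate of exp_b: push z toward w,
        # apply the linearized action lam**t, pull back out
        for _ in range(n):
            z = b ** z                 # n forward iterates: z -> w
        z = w + lam ** t * (z - w)     # exp_b^t acts as mult. by lam**t near w
        for _ in range(n):
            z = math.log(z) / math.log(b)  # n backward iterates (log_b)
        return z

    return exp_it(exp_it(x, -s) + y, s)
```

In this normalization \(s=0\) gives addition and \(s=1\) gives \(x\,b^y\) (the base rides along), matching the Bennet form used throughout the thread; accuracy degrades as the arguments approach the edge of the basin.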



Call \(F(z) =\log(z)/z\), and \(Y = 1 \oplus_{s,\theta,F(y)} y\), then:



\[
\begin{align}
G(s,\theta) &= 1\oplus_{s-1,\theta,F(Y)} Y - 1 \oplus_{s,\theta,F(y+1)} (y+1)\\
G(s,\theta(s)) &= 0\\
\end{align}
\]



We would construct \(\theta\) like this for \(s\approx 2\) and \(s-1 \approx 1\). Then we want to find the value of \(\theta\) that satisfies this as we move \(s\) (beyond the local scenario; this is the tricky part), since it's satisfied for \(s=0,1,2\) prima facie. Then our solution becomes:
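The "move \(s\) and re-solve" step is a standard numerical continuation problem. Here is a generic sketch of that scaffolding (my own framing; the real \(G\) requires the beta-method tetration, so a hypothetical toy \(G\) stands in for it): track \(\theta(s)\) along a grid in \(s\), warm-starting Newton's method from the previous solution.

```python
import math

def track_theta(G, s_grid, theta0, h=1e-7, tol=1e-12, max_newton=50):
    """Continuation: follow the root theta(s) of G(s, theta(s)) = 0
    along s_grid, warm-starting Newton from the previous solution."""
    thetas, theta = [], theta0
    for s in s_grid:
        for _ in range(max_newton):
            g = G(s, theta)
            if abs(g) < tol:
                break
            # central-difference approximation to dG/dtheta
            dg = (G(s, theta + h) - G(s, theta - h)) / (2 * h)
            theta -= g / dg            # Newton corrector step
        thetas.append(theta)
    return thetas

# toy stand-in for the post's G (hypothetical): its root is theta(s) = sin(s)
path = track_theta(lambda s, t: t - math.sin(s),
                   [0.1 * k for k in range(21)], 0.0)
```

The warm start is what makes this viable: each new \(s\) is solved from a nearby \(\theta\), exactly the "freely move \(s\)" picture above.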



\[
1 <s> y = 1\oplus_{s,\theta(s),F(y)} y\\
\]



Now we have to talk about the type of iteration we use to construct \(\theta\). Thank god for the beta method thesis. We can construct \(\theta\) functions such that \(\theta(s+1) = \theta(s)\) and \(\theta(s+2\pi i/\lambda) = \theta(s) - 2\pi i/\lambda\).



This means you should think of \(\exp_b^{\circ s + \theta}(1)\) as a different tetration than \(\exp_b^{\circ s}(1)\); they are both essentially tetrations... The difference is that they have different periods. That is all. So we can think of moving \(\theta\) as really just stretching and shrinking the period of our Bennet hyperoperation. That is, instead of talking about \(\theta\), we can just talk about a value \(\Re \lambda > 0\)...



That's the craziest part about all this. This result draws a lot on the \(\beta\) method, where we can get \(\exp_b^{\circ s}\) with arbitrary period in \(s\) and still retain large areas of holomorphy. This is to say, quite literally: think of the \(\theta\) in \(\oplus_{s,\theta,\mu}\) as choosing "what tetration we use". I don't mean what base of tetration; that's handled by \(b = e^\mu\). I mean which tetration with base \(b = e^{\mu}\) we are using. And we can uniquely identify and compare these things based solely on one parameter, \(\lambda\), which determines the period \(2 \pi i / \lambda\).

These form very special \(\theta\) functions, such that:

\[
\begin{align}
\theta(s+1) &= \theta(s)\\
\theta(s+2\pi i / \lambda) &= \theta(s) - 2\pi i / \lambda\\
\end{align}
\]
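Spelling out why these two relations make \(F(s) = \exp_b^{\circ s + \theta(s)}(1)\) a genuine tetration with period \(2\pi i/\lambda\):

\[
\begin{align}
F(s+1) &= \exp_b^{\circ s+1+\theta(s)}(1) = \exp_b(F(s))\\
F(s+2\pi i/\lambda) &= \exp_b^{\circ s + 2\pi i/\lambda + \theta(s) - 2\pi i/\lambda}(1) = F(s)\\
\end{align}
\]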

We can classify these functions very well; the beta method essentially produces the "best" possible class.

So instead of writing \(\theta\) in all of these posts, we can instead talk about \(\lambda\), which implicitly chooses which tetration is the right tetration to turn Bennet into Goodstein.

EDIT:

I'm waiting for your info dump, MphLee. It doesn't have to be perfect. Just info dump. Who cares. It's all love here. I can't believe I was about to just post this one thread, drop everything, and never touch it again. But after your replies it started making more and more sense, and now I defs have \(1<s>y\)...
Reply

