Holomorphic semi operators, using the beta method
#61
Ok, I'm glad I was clear enough and that you were able to master the idea so quickly... an idea which is pretty simple in the end. It's just the usual limit trick, but on a higher level, with all of its pros and cons.

1) Now the problem I see coming: \(f(y,s)\) seems not easily invertible... you also note the domain of \(f^{-1}\). Rightly so. In my toy model we are working in groups, where everything is invertible and we need not worry about domains. But in this case I feel this becomes a pretty big problem. I experimented a lot back in the day with iterating the sub-function operator. Obviously, the more you iterate it on something non-surjective, the more the domain shrinks...

1.5)
I haven't tried to invert your construction, nor to compute the algebra, but inverting in \(y\) sounds like we need the Lambert function \(W\) somewhere. Also... I tried to apply the limit formula to the easier Bennett family... and I totally don't see what should look polynomial... I can't see anything behaving like a polynomial. Polynomial in \(n\)? I'm truly lost xD

2) The second point... I get fearful when you see it converge quickly... when iterating the operator \(\Sigma_s(f)=fsf^{-1}\), \(s\) is a fixed point; everyone here is aware of that, and Tommy also made some comments on it as a carrier of potential problems. It is a fixed point even when \(s\) is not chosen to be the successor. In fact... we do not know how "attracting" it is.

Some functions converge quickly (and I mean after a few iterations), on a shrinking-but-stabilizing domain, to the seed function \(s\). I believe that the least number of iterations that sends the resulting function \(\Sigma^n_s(f)\) locally into a small neighborhood of \(s\) - I'm thinking here informally of some kind of pointwise convergence over a function space - should be called the local rank. It means that locally a function looks like the \(n\)-th solution of the equation \(xs=hx\), where \(h\) is an \((n-1)\)-th solution.
To sum up: the operation of "shifting and sub-functioning" does indeed have as its fixed points the solutions of the Goodstein equation - maybe only locally, if we extend to non-bijective functions. The problem is that \((s,s,s,s,\ldots)\) is also a solution, where \(s\) is the seed of the family. How strongly does it attract arbitrary non-Goodstein sequences?
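To make the attraction concrete, here is a minimal numerical sketch (a toy of my own, not part of the construction above) that iterates \(\Sigma_s(f)=f\circ s\circ f^{-1}\) with \(s(x)=x+1\), representing each function as a \((f,f^{-1})\) pair so the inverse stays exact under iteration:

```python
# Toy sketch: iterate the conjugation operator Sigma_s(f) = f∘s∘f^{-1}
# with s(x) = x+1. Each function is stored as a (f, f_inverse) pair,
# since Sigma(f)^{-1} = f∘s^{-1}∘f^{-1} is available in closed form.

def sigma(pair):
    f, finv = pair
    return (lambda x: f(finv(x) + 1),   # f∘s∘f^{-1}
            lambda x: f(finv(x) - 1))   # its exact inverse

# Affine seed f(x) = 2x: collapses exactly to the successor in 2 steps.
pair = (lambda x: 2 * x, lambda x: x / 2)
pair = sigma(sigma(pair))
affine_val = pair[0](5.0)
print(affine_val)        # exactly 6.0 = s(5)

# Non-affine seed f(x) = x^3: the iterates approach s pointwise.
pair = (lambda x: x ** 3, lambda x: x ** (1 / 3))
for _ in range(4):
    pair = sigma(pair)
cubic_val = pair[0](1000.0)
print(cubic_val)         # drifts toward 1001 = s(1000)
```

The affine case collapses to \(s\) exactly after two iterations, which is precisely the quick convergence to the trivial Goodstein sequence worried about above.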


I ask because I tried on Desmos to compute the limit of
\[\lim (\Sigma_S \circ S^*)^n({\bf bennet})=\lim (\Sigma_S \circ S^*)^n(\oplus)\]
I know it's not what you are using, but it seems to converge solidly to the successor after just \(3\) iterations. By solidly I mean that I added a slider for the base \(b\) in \(b\oplus_s x= \exp^s (\ln^s b +\ln^s x)\). After the third iteration, the slider no longer perturbs the shape of the graphs of \( \left(  (\Sigma_S \circ S^*)^n (\oplus) \right)_3\), and the function lies in the strip \(x+1+\rho\) for \(x>N\) and \(\rho \in [-\epsilon,\epsilon]\).
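For anyone wanting to reproduce the experiment outside Desmos, here is a minimal sketch of the integer-rank Bennett operations \(b\oplus_s x=\exp^s(\ln^s b+\ln^s x)\); the fractional-\(s\) interpolation used in the slider is not attempted here:

```python
import math

def bennett(b, x, s):
    """Integer-rank Bennett operation: exp^s(ln^s b + ln^s x).

    s = 0 gives addition, s = 1 multiplication, s = 2 gives
    exp(ln b * ln x); b and x must be large enough for the iterated logs.
    """
    for _ in range(s):                 # apply ln s times to both arguments
        b, x = math.log(b), math.log(x)
    r = b + x
    for _ in range(s):                 # undo with s exponentials
        r = math.exp(r)
    return r

print(bennett(3, 5, 0))   # 8  (addition)
print(bennett(3, 5, 1))   # 15 (multiplication)
print(bennett(3, 5, 2))   # exp(ln 3 · ln 5) ≈ 5.86
```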

So... how strong is the basin of attraction of the trivial Goodstein sequence? How do we escape it? I know, my corollary was meant to prove that if a non-constant sequence of operations converges, then the limit is a non-constant solution. Tbh, I need to check the argument again... I believe it is true in the theory over groups... but outside of it... idk.



A VISUAL EXPLANATION OF THE METHOD

If someone finds it hard to visualize what is happening, here is the picture: we start with a continuous family of binary operations, e.g. Bennett's or James's variant \(\exp^s_{y^{1/y}}(\ln^s_{y^{1/y}}(x)+y )\).
[Image: f1.png]
remember that this time it is a continuous family
[Image: f2.png]
Given an operation \(b* y\) we can ask for its super-operator \(*^+\) and its sub-operator \(*^-\). Call \(*^+\) the solution of the equation \(b*^+(y+1)=b*(b*^+ y)\), and call \(*^-\) the solution of \(b*(y+1)=b*^-(b*y)\). Assuming we can iterate this procedure, we obtain a new family \(*^0=*\), \(*^{n+1}=(*^n)^+\) and \(*^{n-1}=(*^n)^-\).
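For integer \(y\) the super-operator can be computed directly from its recursion, once a base case is fixed; the sketch below assumes \(b*^+1=b\), which matches \(b\cdot 1=b\) and \(b^1=b\) (a toy illustration, not part of the grid construction itself):

```python
def super_op(op, b, y):
    """b *+ y for integer y >= 1, from b *+ (y+1) = b * (b *+ y),
    with the (assumed) base case b *+ 1 = b."""
    r = b
    for _ in range(y - 1):
        r = op(b, r)
    return r

add = lambda a, b: a + b
mul = lambda a, b: a * b

print(super_op(add, 3, 4))   # 12 = 3 * 4: the super-operator of + is ×
print(super_op(mul, 2, 5))   # 32 = 2 ** 5: the super-operator of × is ^
```

The sub-operator \(*^-\) goes the other way and requires solving the functional equation, which is exactly where the domain problems discussed above appear.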
[Image: f3.png]
Here the vertical links are the superfunction/subfunction relations. Let's apply this to our initial continuous family, e.g. Bennett's:
[Image: f4.png]
We obtain a grid of operations. Traveling along the columns is the "iterate"/"take subfunction" process.
[Image: f5.png]
But traveling in the horizontal direction is NOT applying the "Bennett process" \(b*y \mapsto e^{\ln b * \ln y}\) unless we are moving along the 0-th row. This means the grid is not commutative (horizontal moves don't commute with vertical moves). Every column is a discrete sequence that satisfies the Goodstein equation, but only the continuous family at the 0-th row satisfies Bennett.
[Image: f6.png]
The grid itself seems very interesting in its own right; the non-commutativity implies that it can be extended in an infinite number of directions. A first study of this kind was initiated in December 2020/May 2021 by Jaramillo (Hyperoperations in exponential fields). Giving names to the nodes of the grid makes it look like this:
[Image: f7.png]
Now we try to visualize the limit formula. We want to find a continuous family of binary operations \(g_{s}(b,y)\) s.t. \[g_{s+1}(b,y+1)=g_{s}(b,g_{s+1}(b,y))\]

We want to obtain that as a fixed point of a family \(\lim_{n\to \infty}g_{n,s}(b,y)=g_s(b,y)\). To do that we need a first approximation \(g_{0,s}(b,y)\).
We want \[g_{n,s+1}(b,y+1)=g_{n+1,s}(b, g_{n,s+1}(b,y))\]
Clearly \(g_{n+1,s}(b,y)\) is the subfunction of \(g_{n,s+1}(b,y)\) \[\boxed{g_{n+1,s}=g_{n,s+1}^-}\]

If we set \(g_{0,s}=\oplus_s^0\), i.e. as first approximation we use Bennett, we get \(g_{1,s}=\oplus^{-1}_{s+1}\), \(g_{2,s}=\oplus^{-2}_{s+2}\), and in general \(g_{n,s}=\oplus^{-n}_{s+n}\).
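Unrolling the recursion \(g_{n+1,s}=g_{n,s+1}^-\) step by step makes the pattern explicit:

\[
g_{1,s}=g_{0,s+1}^-=\oplus_{s+1}^{-1},\qquad
g_{2,s}=g_{1,s+1}^-=\left(\oplus_{s+2}^{-1}\right)^-=\oplus_{s+2}^{-2},\qquad
g_{n,s}=g_{n-1,s+1}^-=\oplus_{s+n}^{-n}.
\]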
[Image: f8.png]
As we can see, we are iterating "shift and take the subfunction". We expect the limit to be a fixed point, hence a continuous solution of the Goodstein equation.
[Image: f9.png]
\[\boxed{g_{s}=\lim g_{n,s}=\lim_{n\to\infty}(\Sigma_S\circ S^*)^n(\oplus)_s=\lim \oplus_{s+n}^{-n}}\]
[Image: f10.png]

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#62
(05/29/2022, 06:11 AM)JmsNxn Wrote: Now, we can continue this operation:
\[
x\,[s]_n\,y = x\,[s+1]_{n-1}\,\left( (x\,[s+1]_{n-1}^{-1}\,y) +1\right)\\
\]
Up to this point everything seems right. Using inversion in the monoid of binary operations under left bracketing, your formula translates as \([s]_{n+1}=[s+1]_nS[s+1]_n^{-1}\)
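Spelling out the conjugation: fix \(x\) and write \(F(y)=x\,[s+1]_n\,y\); then the quoted recursion (with the index shifted by one) reads

\[
x\,[s]_{n+1}\,y \;=\; F\big(F^{-1}(y)+1\big) \;=\; \big(F\circ S\circ F^{-1}\big)(y),
\]

which is exactly \([s]_{n+1}=[s+1]_n\,S\,[s+1]_n^{-1}\) with \(S\) the successor.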

Quote:The really weird part now, is that this solution doesn't work on its own. You have to solve for \([s+1]_n\) while you solve for \([s]_n\). The thing is... we can solve for:
\[
\begin{align}
x\,[s]_{n}\,y &= f(y)\\
x\,[s+1]_n\,y &= f^{\circ y}(q)\\
\end{align}
\]

You can actually do this pretty fucking fast... It just looks like iterating a linear function.

EDIT: Ack! I made a small mistake here; written like this it is an idempotent iteration. The actual iteration is a little more difficult; I'll write it up when I can make sense of controlling its convergence...
This last formula looks suspicious. Maybe it is related to your small mistake.

We should have something like  \(x\,[s]_{n+1}\, y=f(y)\) and \(x\,[s+1]_{n}\, y=f^y(q)\)

#63
(05/31/2022, 02:32 PM)MphLee Wrote:
(05/29/2022, 06:11 AM)JmsNxn Wrote: Now, we can continue this operation:
\[
x\,[s]_n\,y = x\,[s+1]_{n-1}\,\left( (x\,[s+1]_{n-1}^{-1}\,y) +1\right)\\
\]
Up to this point everything seems right. Using inversion in the monoid of binary operations under left bracketing, your formula translates as \([s]_{n+1}=[s+1]_nS[s+1]_n^{-1}\)

Quote:The really weird part now, is that this solution doesn't work on its own. You have to solve for \([s+1]_n\) while you solve for \([s]_n\). The thing is... we can solve for:
\[
\begin{align}
x\,[s]_{n}\,y &= f(y)\\
x\,[s+1]_n\,y &= f^{\circ y}(q)\\
\end{align}
\]

You can actually do this pretty fucking fast... It just looks like iterating a linear function.

EDIT: Ack! I made a small mistake here; written like this it is an idempotent iteration. The actual iteration is a little more difficult; I'll write it up when I can make sense of controlling its convergence...
This last formula looks suspicious. Maybe it is related to your small mistake.

We should have something like  \(x\,[s]_{n+1}\, y=f(y)\) and \(x\,[s+1]_{n}\, y=f^y(q)\)

Yes, I screwed up here, lol. I got it now though. I want to wait a bit before I describe the algorithm I have in mind. I want it all working well.


And! The reason this won't converge to the seed is that at \(s=0\) we still have addition, and at \(s=1\) we still have multiplication. The algorithm I have in mind will fix the endpoints as such.

This can be seen even in the first iteration:


\[
\begin{align*}
x[0]_1y &= x+y\\
x[1]_1 y &= x\cdot y\\
\end{align*}
\]

These remain constant in the iteration I have planned. I'm currently trying to combine this with the \(\varphi\) method, and so far things are slow, but this seems like a much better method.

Essentially, my argument that it doesn't converge to the seed is that a solution exists! It's close enough to Bennett that I'm confident the modified Bennett iteration will converge. The trouble is doing it accurately; so far I've only managed to get a couple of digits, but it does look like it's working.


As to inverting \(x [s]^{-1} y\): this took a while to code, and it's slow as hell, but it does work. I think a formal solution would be unheard of, and would probably converge more slowly than the method I have. Yes, it would probably use the Lambert \(W\) function nested a couple of times; it'd be hopeless to do formally. I just wrote a quick polynomial inversion of the Taylor series of Ibennet(s,z,y+q), which inverts in \(q\); setting \(q=0\) then gives the inverse at \(y\). Luckily my code is polynomially based, so grabbing Taylor expansions is ezpz.
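For illustration, here is a minimal sketch of the kind of polynomial inversion involved, using the classical reversion of a truncated Taylor series \(t=a_1q+a_2q^2+a_3q^3\) (a generic toy on a stand-in series, not the actual InvBennet code):

```python
def revert3(a1, a2, a3):
    """Coefficients of the inverse series q = A1 t + A2 t^2 + A3 t^3
    for t = a1 q + a2 q^2 + a3 q^3 (classical series reversion)."""
    A1 = 1 / a1
    A2 = -a2 / a1 ** 3
    A3 = (2 * a2 ** 2 - a1 * a3) / a1 ** 5
    return A1, A2, A3

# Invert t = 2q + q^2 near 0 and compare with the exact root.
A1, A2, A3 = revert3(2.0, 1.0, 0.0)
t = 0.1
q_series = A1 * t + A2 * t ** 2 + A3 * t ** 3
q_exact = -1 + (1 + t) ** 0.5          # solves 2q + q^2 = t exactly
print(q_series, q_exact)               # agree to ~1e-5
```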

I know what you mean by the domain shrinking, and that making sure you can take the inverse is tricky. What works is that:

\[
x[s]^{-1}y
\]

is invertible so long as \(y > Y(x,s)\) for \(0 \le s \le 1\). We don't get this quite as cleanly for \(1 \le s \le 2\). But the thing is, the inversion algorithm will converge (which is further proof that this method will extend beyond \(y > e\): the algorithm is converging on a larger domain than it should).

For example, you can calculate:

\[
3\,[1.5]^{-1}\,10 = 2.5418865714965084351384601699161614103869528768610\\
\]

Which is clearly nonsensical using our current formulas (\(x[s]y\) isn't defined for \(y<e\)), but we only need this \(+1\), which is \(>e\). So if I run:

\[
3[0.5]_1 10 = 3[1.5]3.5418865714965084351384601699161614103869528768610 = 17.654344068794716937147222216314762100132579882903\\
\]

Where:

\[
3 [0.5] 10 = 16.841884762465586893000032633581733619306022613397
\]

So they are pretty close to each other, and they vary by less than 1. So the iteration is doing something; after a few more iterations it calms down. This is strong evidence, in my mind, that this object will continue to converge for \(y > 0\), and that the restriction \(y>e\) is a little arbitrary, though necessary for the time being. I'm confident that we can probably say that:

\[
3[1.5]2.5418865714965084351384601699161614103869528768610 = 10\\
\]

Think of it, as the inverse being defined on a larger domain than the actual function, and using that to analytically continue the function.




Now I'm mostly interested in very large \(y\), because this is where you can estimate the behaviour of \(\boldsymbol{\varphi} = (\varphi_1,\varphi_2,\varphi_3)\) very well. For \(0 \le s \le 1\) this inverse function always works. Since it's analytic, it still spits out values for \(1 \le s \le 2\), despite not being a "true" inverse there, because it leaves the domain (\(y>e\))... which you noted. It is really finicky for \(1\le s \le 2\), and to make it not finicky you have to run a Newtonian root-finding algorithm, which is annoying.
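A generic version of that Newton step, inverting a monotone map numerically with a finite-difference derivative (a sketch of the idea on a toy function, not the actual InvBennet flag):

```python
def newton_invert(F, target, w0, tol=1e-12, h=1e-7, max_iter=100):
    """Solve F(w) = target by Newton's method with a numerical derivative."""
    w = w0
    for _ in range(max_iter):
        err = F(w) - target
        if abs(err) < tol:
            break
        dF = (F(w + h) - F(w - h)) / (2 * h)  # central-difference slope
        w -= err / dF
    return w

# Toy monotone map standing in for y -> x [s] y:
F = lambda y: y ** 3 + y
w = newton_invert(F, 10.0, 1.0)
print(w)            # 2.0, since 2^3 + 2 = 10
```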

But everything is still producing a viable inverse function, defined on a larger domain than the original Bennett operation.

Code:
InvBennet(0.5,3,100)
%266 = 81.879845404599395355269425245459181009934218442993

Ibennet(0.5,3,%266)
%267 = 100.00000000000000000000100997700361890445814464017


This is just one example. It gets troublesome for \(1 \le s \le 2\) though, and you need larger values/iterations for it to be manageable. Also, I have written in a flag for these values, which adds a Newtonian root finder. The initial algorithm only gets us close, and even then it can screw up (the code is still a little wonky; I'll have to find a more efficient way in the future), but:

Code:
InvBennet(1.5,3,20,,1)
%53 = 3.8013626638813936726849674551236741743250837414234

Ibennet(1.5,3,%53)
%54 = 20.000000000000000906030985117114149927827111941439


Now you can try the first step of your iteration:

Code:
Ibennet(1.5,3,InvBennet(1.5,3,20,,1)+1)
%55 = 30.682945689981177590653956610929950299856487473611

Ibennet(0.5,3,20)
%56 = 29.332503986494964228494669746491051063337196366026

You can see that these things start to change slightly; I'm betting it will converge very well. It's going to be tricky though, and I definitely have to bridge the gap between using \(\varphi\) and this iterative formula; I think the answer lies somewhere in between both of these ideas.

I don't want to spoil too much. I have a lot of code that is still very mix-and-match, and wonky. I have to streamline everything, and also implement Taylor series, which aren't present in the trial run of InvBennet. It turns out to be stupidly hard to implement Newtonian root finders in two variables, when it really shouldn't be. So I'll have to find a workaround for that.

I'll try to keep you guys posted, but I kind of want to stay silent for a bit until I have a better grasp of a working model with working code.

Regards, James


Also, I'd like to add that you definitely deserve credit for this, MphLee. When you see the algorithm I have planned, it will make much more sense. And this is absolutely fucking fascinating. And it uses your idea. Iterating Bennett's definitely reduces to \(S : y \mapsto y+1\), because it has a nice decay to it. Iterating the modified Bennett will probably not quite work how you'd like. Iterating modified Bennett while you massage it with \(\varphi\) absolutely will work. I'm going to tread lightly now though, until I have a strong working model.
#64
is x [s] f(x) = x 

a degree of freedom ?

can we freely choose f(x) and work from there ??

regards

tommy1729
#65
(06/11/2022, 12:27 PM)tommy1729 Wrote: is x [s] f(x) = x 

a degree of freedom ?

can we freely choose f(x) and work from there ??

regards

tommy1729

It is a degree of freedom, yes, but I would write it using \(\varphi\). So that:

\[
x\,[s]_{\varphi} f(x) = x\\
\]

Then there is some \(\varphi\) where this statement is true (given that \(f\) is reasonably well behaved).
#66
(06/12/2022, 12:07 AM)JmsNxn Wrote:
(06/11/2022, 12:27 PM)tommy1729 Wrote: is x [s] f(x) = x 

a degree of freedom ?

can we freely choose f(x) and work from there ??

regards

tommy1729

It is a degree of freedom yes, but I would write it using \(\varphi\). So that:

\[
x\,[s]_{\varphi} f(x) = x\\
\]

Then there is some \(\varphi\) where this statement is true (given that \(f\) is reasonably well behaved).

proof ??

I mean such that x [s] y is analytic in x, y, s, to be clear.

and yes, f(x) analytic of course.

regards

tommy1729
#67
(06/12/2022, 12:09 AM)tommy1729 Wrote:
(06/12/2022, 12:07 AM)JmsNxn Wrote:
(06/11/2022, 12:27 PM)tommy1729 Wrote: is x [s] f(x) = x 

a degree of freedom ?

can we freely choose f(x) and work from there ??

regards

tommy1729

It is a degree of freedom yes, but I would write it using \(\varphi\). So that:

\[
x\,[s]_{\varphi} f(x) = x\\
\]

Then there is some \(\varphi\) where this statement is true (given that \(f\) is reasonably well behaved).

proof ??

I mean such that x [s] y is analytic in x , y , s to be clear.

and yes f(x) analytic ofcourse.

regards

tommy1729

Sure, no problem:

\[
x = x[s]_{\varphi} f(x) = \exp^{\circ s}_{f(x)^{1/f(x)}}\left(\log^{\circ s}_{f(x)^{1/f(x)}}(x) + f(x) + \varphi\right)\\
\]

Take \(\log^{\circ s}_{f(x)^{1/f(x)}}\) of both sides and cancel the \(\log^{\circ s}_{f(x)^{1/f(x)}}(x)\), so that:

\[
f(x) + \varphi = 0\\
\]

And then writing

\[
\varphi(x) = -f(x)\\
\]

Gives you your answer.

So by construction:

\[
x [s]_{-f(x)} f(x) = x\\
\]

Not sure how that helps with anything, or what you are going to use it for. This also doesn't really help, because you'd be asking for \(f(x) > e\), and I doubt that would really happen. There's no value \(f(x) > e\) such that \(x \langle s\rangle f(x) = x\): by nature this looks like \(x + f(x) > x + e\), or, in between, like \(x \cdot f(x) > x \cdot e\), which obviously can't equal \(x\). So these values would be beyond our purview.
#68
A fast formula for the inverse of \(x[s]y\)!

I thought this would be infeasible, but I think I've found a better way of approximating the inverse of the modified Bennett equations. This only works for the case between multiplication and exponentiation, and I think it's pretty cool that this formula works. I thought it'd be a little dumb at first. But here we are:

Let's write:

\[
y=x [s+1] \omega(y) = \exp^{\circ s}_{\omega(y)^{1/\omega(y)}}\left(\log^{\circ s}_{\omega(y)^{1/\omega(y)}}(x)\cdot \omega(y)\right)\\
\]

Taking \(\log^{\circ s}_{\omega^{1/\omega}}\) of both sides and rearranging, we get the fixed-point equation:

\[
\omega = \frac{\log^{\circ s}_{\omega^{1/\omega}}(y)}{\log^{\circ s}_{\omega^{1/\omega}}(x)}\\
\]

Therefore, suppose you can guess a value \(\omega_0\) sufficiently close to \(\omega\), and define:

\[
f(\omega) = \frac{\log^{\circ s}_{\omega^{1/\omega}}(y)}{\log^{\circ s}_{\omega^{1/\omega}}(x)}\\
\]

Then \(f^{\circ n}(\omega_0) \to \omega\)!

So far, I've found that \(\omega_0 = y\) works perfectly!!!!!!!!!


YES!

This greatly simplifies looking for inverses. OMFG YES! This has been bugging me for a while. Thought I'd post!
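As a sanity check of the fixed-point scheme, here is a sketch at the integer point \(s=1\), where \(\log^{\circ 1}_b\) is just the base-\(b\) logarithm. There the base dependence cancels (\(\log_b y/\log_b x=\ln y/\ln x\)), so the iteration lands on \(\omega\) in one step; the fractional-\(s\) case needs the fractionally iterated logarithms and is not attempted here:

```python
import math

def f(w, x, y):
    """One step of the fixed-point map at s = 1, with base b = w^(1/w)."""
    b = w ** (1 / w)
    return math.log(y, b) / math.log(x, b)

x, y = 3.0, 20.0
w = y                      # the suggested initial guess w0 = y
for _ in range(5):
    w = f(w, x, y)
print(w, x ** w)           # w = ln(20)/ln(3), and x^w recovers y = 20
```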

Now onto better studying the object:

\[
x\,[s+1]\left((x\,[s+1]^{-1}\,y) + 1\right) \approx x\,[s]\, y\\
\]
#69
Dear James, your last 2 posts are inspirational, but I cannot accept them at the moment - maybe later -.

Here is why: it is not shown to be analytic in x, s, y and at the same time to satisfy the superfunction property, where x <s+1> y is in a way the super of x <s> y.

I am not convinced that they can be united with x <s> f(x) = x for some analytic f(x), and that such an f(x) is almost free to choose.
 ( the integers have to match with the definitions of x<0>y and x<1>y etc. of course * unless you let those be free to choose as well )

Not trying to be annoying  Big Grin

regards

tommy1729
#70
(06/12/2022, 01:50 PM)tommy1729 Wrote: Dear James your 2 last posts are inspirational but I cannot accept them at the moment - maybe later -.

Here is why : it is not shown to be analytic in x,s,y and at the same time satisfy the superfunction property where x <s+1> y is in a way the super of x <s> y.

I am not convinced that they can be united with x <s> f(x) = x for some analytic f(x) and that that f(x) is almost free to choose.
 ( the integers have to match with the definitions of x<0>y and x<1>y etc. of course * unless you let those be free to choose as well )

Not trying to be annoying  Big Grin

regards

tommy1729


You have misinterpreted the notation.

There is no solution \(x <s> f(x) = x\)... At least, not within the purview of this solution. By construction I'm assuming \(x > e\) and \(y > e\), and there is no \(y\) in that range with \(x <s> y = x\).

You asked if there is a value \(\varphi\) such that \(x [s]_{\varphi} y = x\). Which there is. This is just:

\[
x[s]_{\varphi} y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\
\]

By which the answer is \(\varphi = - y\) for the equation you asked to solve. This would not happen when we solve for the actual semi-operator, as there is no solution...

I think you're mixing things up. Remember that \(x [s] y\) is absolutely analytic in all variables. It just equals \(\exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\), which is absolutely analytic in all variables... The question is whether \(x <s> y\) can be found in a neighborhood of \(x [s] y\) and remain analytic. I understand I haven't shown this yet.

I know this thread is a mess and very disorganized. I plan on writing up the observations much more fluidly, but I'm waiting until I have something concrete.

