The \(\varphi\) method of semi-operators, the first half of my research
#1
I'm going to give a rundown of my theory so far. It is pointing to a solution, but I do not have the solution yet. I've shared some of my observations, but I haven't explained them all. I was going to attach a PDF, which is essentially the working part of my theory, but I thought I'd hold off until I can make it more concrete. As it stands, it's about half of the work necessary for this construction. But it seems very promising.

We're trying to show, in no uncertain terms, that the modified Bennet operators can be corrected to holomorphic semi-operators.

To be clear, we give a dictionary of our variables:

\[
\begin{align}
x &> e\\
y &> e\\
&\exp^{\circ s}_{y^{1/y}}(w) \,\,\text{is the repelling Schröder iteration about the repelling fixed point}\,\,y,\,\,\text{valid for}\,\,w>e\\
x[s]y &= \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
x\langle s\rangle_{\varphi}y &= \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\
\end{align}
\]

Where the ultimate goal is to describe the implicit function \(\varphi(s,x,y)\) such that:

\[
x \langle s \rangle y = x\langle s\rangle_{\varphi}y\\
\]

And this operator satisfies Goodstein's equation:

\[
x \langle s \rangle \left(x \langle s+1\rangle y\right) = x \langle s+1\rangle (y+1)\\
\]
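To make the dictionary concrete, here is a minimal numerical sketch of how these iterations can be computed, assuming a first-order Schröder (Koenigs) linearization at the repelling fixed point \(y\) suffices. The function names, tolerance, and iteration caps are my own ad hoc choices, and there is no rigorous error control here; treat it as a sanity check of the dictionary, not as the actual construction.

```python
import math

def frac_exp(s, y, w, tol=1e-8, max_iter=200):
    # Sketch of exp^{os}_{y^(1/y)}(w) via the Koenigs linearization at the
    # repelling fixed point y (multiplier lam = log y > 1 when y > e).
    # First-order Schroder coordinate only; no rigorous error control.
    b = y ** (1.0 / y)           # base, chosen so that b^y = y
    lnb = math.log(b)
    lam = math.log(y)            # multiplier of exp_b at the fixed point y
    v, k = w, 0
    while abs(v - y) > tol and k < max_iter:
        v = math.log(v) / lnb    # pull w toward the fixed point with log_b
        k += 1
    v = y + lam ** s * (v - y)   # fractional step in the linearized chart
    for _ in range(k):
        v = b ** v               # push back out with exp_b
    return v

def bennet(s, x, y):
    # modified Bennet operator: x[s]y = exp^{os}(log^{os}(x) + y)
    return frac_exp(s, y, frac_exp(-s, y, x) + y)
```

At the boundary ranks this recovers the dictionary: `bennet(0, 3, 4)` comes out near \(7 = 3+4\) and `bennet(1, 3, 4)` near \(12 = 3 \cdot 4\), precisely because the base \(y^{1/y}\) satisfies \(b^y = y\).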


Obviously this is a very difficult problem. I have talked to MphLee through PMs a bit about this, and he has helped tremendously. I'm still not there, but what I am very close to is a rough draft of a solution. I have many ways of describing what the solution should look like, and how \(\varphi\) looks. From this I've created a few observations crucial to the study.

There's an Abel identity integral to this solution.

Let's write

\[
f(s,y) = x \langle s+1\rangle_\varphi^{-1}y\\
\]

Which is the inverse of \(x \langle s+1\rangle_\varphi y\) in \(y\). The original modified Bennet operators satisfy:

\[
f(s,x[s]y) = f(s,y) + 1 + o(y^{\epsilon}) \,\,\text{for all}\,\,\epsilon>0\\
\]

The error grows logarithmically, if you are curious. The exact solution is given as:

\[
f(s,x\langle s\rangle y) = f(s,y) + 1\\
\]

For the appropriate \(f\) and some exact function \(\varphi\).


MphLee's family is normal.

There's a very specific family of functions I defined based on MphLee's comments and descriptions. In many ways, it's a family of functions \(x \langle s \rangle_\varphi y\) which are in the neighborhood of solving the above equations. This reduces to functions \(g(s,y) = x \langle s \rangle_\varphi y\) that satisfy the crucial identity:

\[
g^{-1}(s+1,g(s,y)) = g^{-1}(s+1,y) + 1 + o(y^\epsilon)\\
\]

While additionally interpolating addition, multiplication, and exponentiation. The most central result I've been able to show is that the modified Bennet operators \(g(s,y) = x[s]y\) satisfy this identity, and by proxy, elements in a neighborhood of them satisfy it as well. I've come to call this MphLee's family, as it relates greatly to his study of rank operators, and to the semi-operators from his lens (at least functorially).
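At the integer ranks the identity is exact, with no error term. For instance, at \(s = 0\), with \(g(0,y) = x + y\) and \(g(1,y) = x \cdot y\):

\[
g^{-1}(1, g(0,y)) = \frac{x+y}{x} = \frac{y}{x} + 1 = g^{-1}(1,y) + 1\\
\]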

This family is normal if you set \(y>Y\), which means it is locally bounded everywhere. And here is where things kick into high gear: every sequence of functions in MphLee's family has a convergent subsequence. The family is compact.


From here we introduce the dark element. The solution exists somewhere in here; we just need to find a working iteration within this theory. Here it is.

We are going to make a change of variables, so that:

\[
x \langle s+1 \rangle_\varphi \alpha_\varphi(s,y) = y\\
\]

And we are going to look at the value \(\varphi\) such that:

\[
\alpha_\varphi(s,x\langle s\rangle_\varphi y) - \alpha_\varphi(s,y) - 1 = 0\\
\]

This value \(\varphi\) always exists. From this, we know that:

\[
\alpha_\varphi\\
\]

Is the Abel function of \(x \langle s \rangle_\varphi y\). What this tells us next is the very, very interesting part. Recalling that \(\varphi\) moves with \(s\) and \(y\), we can safely conclude that:

\[
\varphi(s,x\langle s \rangle_\varphi y) = \varphi(s,y)\\
\]

In this sense it is idempotent: \(\varphi\) is constant along the orbit of \(x \langle s \rangle_\varphi\). This means, if I write out our expressions in more detail:

\[
\begin{align}
A(s,y) &= \alpha_{\varphi(s,y)}(s,y)\\
x \langle s \rangle y &= x \langle s \rangle_{\varphi(s,y)} y\\
\end{align}
\]

Then:

\[
\begin{align}
A(s,x \langle s \rangle y) = A(s,y) + 1\\
A(0,y) = \frac{y}{x}\\
A(1,y) = \frac{\log(y)}{\log(x)}\\
x \langle 0 \rangle y = x+y\\
x \langle 1 \rangle y = x \cdot y\\
\end{align}
\]

These also satisfy Goodstein's equation...
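At the boundary ranks this can be checked directly. For instance at \(s = 0\), using the values above, Goodstein's equation reduces to the distributive law:

\[
x \langle 0 \rangle \left(x \langle 1 \rangle y\right) = x + x \cdot y = x \cdot (y + 1) = x \langle 1 \rangle (y+1)\\
\]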

I am not going to release the code or the paper yet; this is just a rough run-through. But it provides the solution to our problem, full stop. It is analytic at the boundary values \(s=0,1,2\), and converges relatively fast. I am grinding away at trying to prove all of this rigorously now. But the code is converging! Albeit much slower than I hoped. This is definitely because I am using a Newtonian root finder to find the value of \(\varphi\), which is cheating, I know. But it confirms that such a \(\varphi\) exists.
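Since the root finder keeps coming up: here is a rough, self-contained sketch of how such a \(\varphi\) can be hunted numerically at a single point. I've swapped Newton for plain bisection (slower, but harder to fool), the brackets and tolerances are ad hoc, and everything leans on a first-order Schröder linearization; treat it as an illustration that the root exists, not as my actual code.

```python
import math

def frac_exp(s, y, w, tol=1e-8, max_iter=200):
    # First-order Schroder/Koenigs sketch of exp^{os}_{y^(1/y)}(w):
    # pull w to the repelling fixed point y with log_b, take the
    # fractional step linearly, push back out with exp_b.
    b = y ** (1.0 / y)
    lnb, lam = math.log(b), math.log(y)
    v, k = w, 0
    while abs(v - y) > tol and k < max_iter:
        v = math.log(v) / lnb
        k += 1
    v = y + lam ** s * (v - y)
    for _ in range(k):
        v = b ** v
    return v

def angle(s, x, y, phi):
    # x <s>_phi y = exp^{os}(log^{os}(x) + y + phi), base y^(1/y)
    return frac_exp(s, y, frac_exp(-s, y, x) + y + phi)

def alpha(s, x, y, phi, lo=2.8, hi=50.0):
    # alpha_phi(s, y): invert w -> x <s+1>_phi w by bisection
    # (assumes the operator is increasing in w on this bracket)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if angle(s + 1, x, mid, phi) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_phi(s, x, y, lo=-0.5, hi=0.5):
    # bisect the Abel residual alpha(s, x<s>_phi y) - alpha(s, y) - 1 in phi
    def resid(phi):
        u = angle(s, x, y, phi)
        return alpha(s, x, u, phi) - alpha(s, x, y, phi) - 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if resid(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At rank \(s=0\) (say \(x=3\), \(y=20\)) the residual vanishes at \(\varphi=0\), since \(x\langle 0\rangle_0 y = x+y\) and \(\alpha_0(0,u) = u/x\), and the solver recovers a \(\varphi\) near zero. A Newton step would converge faster, but bisection is harder to fool.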

My code so far is producing correct values, but it does need a good amount of tweaking. I haven't been able to run any graphs yet, but I will in the future. The code is just too slow at the moment to be practical, so I have to find a way to produce \(\varphi\) more efficiently. We shall see soon!

I just thought I'd give a progress update. I should have a working draft of the paper and the code in a month's time (hopefully not later). This is absolutely fascinating.

Let the old thread die, and keep this thread as the center of discussion. I'm happy to answer any and all questions and explain more; this does get a little funky and complicated. I know I'm not there yet. I just have some strong numerical evidence and a rough explanation of the theory. I hope to remedy this by mid-July. We shall see!

Regards, James.




I thought I'd add that we are solving a change of variables in \(\varphi_1,\varphi_2,\varphi_3\) such that all we need to do is solve for one variable. This boils down to finding a constant \(\varphi\) such that:

\[
x \langle s \rangle_\varphi y = x \langle s+1\rangle_{\varphi} \left(\alpha_\varphi(s,y) + 1\right)\\
\]

Where the solution of this equation satisfies \(\varphi(s,x \langle s \rangle_\varphi y) = \varphi(s,y)\). I might not have made this clear before: it means we are solving an inherently iterative procedure, where \(\varphi\) becomes the fixed point of an iteration.
#2
Can't wait to get into this.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
I should have a rough draft PDF up soon; I'm currently making some graphs. The code is super slow, and still glitchy as fuck. I have to change some protocols from what they currently are. I'm doing recursion within recursion while we iterate a recursion within a recursion. Ffs, this is becoming nonsensical. But it's better than using Newton's root finder, which was the alternative, lol.


This program is the slowest goddamned thing. But here is a graph of \(\varphi\) done over \(90 \le y \le 100\) and \(0 \le s \le 0.2\) while \(x=3\). The foreground is as we increase \(s\); the lateral motion is increasing \(y\), where you see the stranger growth. The function looks a tad angular, but the numbers work out, and it is analytic at those small angular jumps you see.

[graph attachment]

This is the exact value of \(\varphi\).
#4
This is proving to be much more difficult than I thought. But I see the solution. I have attached here a rundown of the case \(0 \le s \le 1\) and \(x,y > e\), where we are trying to find the appropriate \(\varphi\) function to solve Goodstein's equation. Everything I have prepared in this paper is solid and shown 100%. But it does not produce semi-operators; it solely solves limited scenarios. Still, this is the solution, and the framework of the proof I want to present. I am almost there for the real line.



The trouble is, I have to introduce complex variables. I have predominantly used real values in this draft, and it holds me back from finding the correct answer. I have working code for a good amount of values, and a lot of garbage code for other values. But a shape of how these objects look is starting to appear. I apologize for the rough shape of this paper; I am sending it to everyone here rough, because it is very preliminary. But it uses many of your ideas, even if you don't see how.



This is solely the real-line case that I have presented so far. I've come to realize we need to look at the domain \(\Re(y)> e, |\Im(y)| < \pi\). This is just to provide a more efficient proof of existence. It also lets us explain how my code is working. My code fails for a large body of values, and I can explain how it is failing, but I need to use complex dynamics to do so. By this, we can shape out how to correct the error and make the code work and the math sound.



.pdf   Analytically_interpolating_addition__multiplication_and_exponentiation_FIRST_PART.pdf (Size: 328.2 KB / Downloads: 303)



Anyway, just read this to get an update on what I'm getting at! It's still a rough draft! I have absolutely made mistakes! I need to make better references! I need to flesh out some of the proofs! I just thought I'd explain the methodology.
#5
(06/24/2022, 10:29 AM)JmsNxn Wrote:  Analytically_interpolating_addition__multiplication_and_exponentiation_FIRST_PART.pdf

Hi James -

  very nice to read already. Thank you very much!

Gottfried

p.s. just a typo: I sometimes read "Trapmann" instead of "Trappmann" (and I promise I didn't want to make a stupid joke out of it)
Gottfried Helms, Kassel
#6
(06/24/2022, 12:06 PM)Gottfried Wrote:
(06/24/2022, 10:29 AM)JmsNxn Wrote:  Analytically_interpolating_addition__multiplication_and_exponentiation_FIRST_PART.pdf

Hi James -

  very nice to read already. Thank you very much!

Gottfried

p.s. just a typo: I sometimes read "Trapmann" instead of "Trappmann" (and I promise I didn't want to make a stupid joke out of it)
I know, with Trappmann I always forget: is it two 'n's, or two 'p's, or both? But I do know it's both. It's just weird for me to see double doubles like that, lol. Especially when I'm typing a whole paragraph and haven't edited yet.

This is still just a rough draft for the moment. I imagine it's a third of the way, so there should be about 30 more pages to this. As I'm progressing, I'm realizing that the proof must exist in the complex plane, and that avoiding the complex plane isn't doing me any favours. So this is essentially what I'd call the first act: it introduces the characters and sets up the conflict, and does so in a manner that's easier to visualize. If I just jumped right into the complex analysis, it might be a tad confusing, especially because there are up to 4 variables in these equations I'm trying to solve...

As I kind of hit a milestone with this description of the problem, I thought I'd post what I had so far. Which isn't much, but I think it is setting the scene for a solution.

Thanks for the support, Gottfried.
#7
Forgive me for minimizing, doubting, or being ignorant and skeptical.

Nothing personal 

but

So you have a function of 3 variables.

If you hold one variable constant you get a function of 2 variables.

It feels like you are doing that. 

I do not see the point.

H(X,s,y) = f(s,y) + f_1(s,y) X + f_2(s,y) X^2 + …

H(0,s,y) = f(s,y)

How is that helping ?

regards

tommy1729
#8
(06/28/2022, 03:11 PM)tommy1729 Wrote: Forgive me for minimizing, doubting, or being ignorant and skeptical.

Nothing personal 

but

So you have a function of 3 variables.

If you hold one variable constant you get a function of 2 variables.

It feels like you are doing that. 

I do not see the point.

H(X,s,y) = f(s,y) + f_1(s,y) X + f_2(s,y) X^2 + …

H(0,s,y) = f(s,y)

How is that helping ?

regards

tommy1729

OH, tommy!

No problem, I see your confusion.

That's sort of what I am doing, but it's a bit more nuanced.

We have the function:

\[
f(s,x,y) = x [s] y\\
\]

And we want to correct it so that it satisfies goodstein's equation to give us the proper solution:

\[
x \langle s \rangle y\\
\]

I should've been clearer, but without loss of generality we can drop \(x\) from the discussion. The proof follows exactly the same way for all \(x > e\). So if you can do it for \(x=3\), say, then the algorithm (should) converge for all \(x>e\) just as well.

It's sort of like how \(b \uparrow \uparrow z\) for \(b > \eta\) is constructed using Kneser: it's the exact same construction as when \(b=e\), so without loss of generality we can ignore the value \(b\) (within the process of construction, so to speak). So when I focus on \(f(s,y)\) rather than \(f(s,x,y)\), I'm mostly just saying we don't really care about \(x\). Set it to any value greater than \(e\) and the process works the same. That's all I meant by that. It's not a salient variable; it kinda just hangs out there.

This might bite me in the ass when we try to prove analyticity in \(x\), but I don't think so...





If you are referring to \(\varphi_1,\varphi_2,\varphi_3\), and reducing that to two variables, that's a bit different. Essentially, you can always write one as an expression of the other two, so it suffices to only refer to a solution pair \(\varphi_1,\varphi_2\). That doesn't align with your functional equation though, at least as I see it. If this is closer to what your question is asking, could you specify it better?


Hope that answers your question.


I started making graphs in the complex plane. Here is a graph of \(3[1.5] y\) and \(3[1.9] y\) done over \(3 \le \Re(y) \le 19\) and \(-8 \le \Im(y) \le 8\):

[graph attachment]

[graph attachment]


You can see it slowly turning periodic. This form of the problem looks like:

\[
x [s] y\,\,\text{for}\,\,x>e,\,0 \le s \le 2,\,|\log(y) |> 1,\,\Re(y)>0\\
\]

Here, the repelling iteration for the modified Bennet operators always converges.

Edit: I just wrote the domain wrong; I forgot that I assumed \(\Re(y) > 0\) in my construction.
#9
So, I've shifted my research into calculating:

\[
x [s] y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]

On a more maximal domain. Doing this, we encounter errors around \(|\log(y)|=1\), but otherwise the functions behave cleanly.

This is a large domain in \(y\), where the errors, white-outs, and glitches are precisely near \(|\log(y)|=1\). This goes hand in hand with the manner in which I have programmed these constructions, so many of the errors near there are human error in the code, and not a mathematical error. Many more detailed tests have taught me that \(x [s] y\) is holomorphic for \(\Re(y) > 1\) when \(x > e\) and \(0 \le s \le 2\).

To exemplify this, I will use some graphs. They are not proof, just descriptions of the way it's working.

[graph attachment]

This is a graph over \(\Re(y) > 0\), done over a large region.

Zooming in on that error, which represents the values \(|\log(y)| = 1\), we get a closer picture. Here \(\Re(y)>1\), and the white of this graph is error code. It does not represent the analytic function; it represents an error on my part within the coding.

[graph attachment]


By this I want to say that \(x[s]y\) is holomorphic on a much larger domain than originally thought.
#10
Those are suggestive images. I don't know if I'm saying something idiotic... but it looks like the white circle is an artifact that can be removed to show the smooth and nice surface underneath...

Too bad; I wish I were stronger at programming...

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)

