The modified Bennet Operators, and their Abel functions
#1
As I've begun to reframe the discussion around semi-operators, I've chosen to use the language of Abel functions, which lets us reduce a \(3\)-variable equation to \(2\) variables. But it becomes a bit of a headache, and I've hit a kind of stop sign telling me: I need a creative leap to get past this hurdle.



So, to begin, we can reintroduce everything. We start by writing the modified Bennet operators:



\[

x[s]y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\

\]



Here there are a few things to take note of. I am restricting \(\Re(y) > 1\), and I am restricting \(x > e\). The variable \(s\) is restricted to \([0,2]\). Each of these iterations is the repelling iteration, as well. So if \(y = 2\), then \(\exp_{\sqrt{2}}^{\circ s}(u)\) is the Schröder iteration about the fixed point \(4\). This additionally means that if we were to take \(y = e\), we are not performing the normal \(\eta\) tetration; we are performing the repelling iteration, which is described as \(\exp_\eta^{\circ s}(u)\) for \(u > e\).
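To make this concrete, here's a rough numerical sketch of the repelling Schröder iteration and the operator built on it. This is just an illustration, not the actual code I use for the plots below; `frac_exp` and `bennet` are ad hoc names, and it only attempts real \(x > e\) and real \(y > 1\) away from the parabolic case \(y = e\):

```python
import math

def frac_exp(u, s, y, tol=1e-8):
    """Fractional iterate exp_b^{os}(u) for b = y^(1/y), computed in the
    Schroder coordinate about the *repelling* fixed point of exp_b
    (e.g. the fixed point 4 when y = 2).  Negative s gives iterated logs."""
    b = y ** (1.0 / y)
    lnb = math.log(b)
    # inverse (log_b) iteration converges to the repelling fixed point L
    L = u
    for _ in range(500):
        L = math.log(L) / lnb
    lam = lnb * L              # multiplier exp_b'(L); |lam| > 1 (repelling)
    # pull u toward L with log_b; this converges slowly when y is near e
    w, n = u, 0
    while abs(w - L) > tol and n < 200:
        w = math.log(w) / lnb
        n += 1
    psi = lam ** n * (w - L)       # approximate Schroder coordinate Psi(u)
    v = L + lam ** (s - n) * psi   # linearized Psi^{-1}(lam^s * Psi(u)) ...
    for _ in range(n):             # ... pushed back out by n steps of exp_b
        v = b ** v
    return v

def bennet(x, s, y):
    """Modified Bennet operator x[s]y = exp_b^{os}(log_b^{os}(x) + y)."""
    return frac_exp(frac_exp(x, -s, y) + y, s, y)
```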



This function is always analytic, and has the benefit of satisfying:



\[

\begin{align}

x[0]y &= x+y\\

x[1]y &= x\cdot y\\

x[2]y &= x^y\\

\end{align}

\]
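As a quick sanity check, the sketch above reproduces these three identities to several digits (they hold exactly for the true operator):

```python
x, y = 5.0, 2.0
print(bennet(x, 0.0, y), x + y)    # both ~ 7
print(bennet(x, 1.0, y), x * y)    # both ~ 10
print(bennet(x, 2.0, y), x ** y)   # both ~ 25
```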



Additionally this function nearly satisfies the Goodstein equation:



\[

x[s]\left(x[s+1] y\right) \approx x[s+1] (y+1)\\

\]
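With the same sketch you can eyeball the size of the Goodstein defect directly; it vanishes at \(s = 0\) and \(s = 1\) and stays small in between:

```python
# defect of the Goodstein equation: x[s](x[s+1]y) - x[s+1](y+1)
x, y = 4.0, 3.0
for s in (0.0, 0.25, 0.5, 0.75):
    lhs = bennet(x, s, bennet(x, s + 1.0, y))
    rhs = bennet(x, s + 1.0, y + 1.0)
    print(s, lhs - rhs)
```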



We now introduce a \(\varphi\) parameter. This parameter is intended to correct the modified Bennet operators so that they satisfy the Goodstein equation.



\[

x[s]_\varphi y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\

\]
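In the sketch, the correction is a one-line change (`bennet_phi` is again an ad hoc name; \(\varphi = 0\) recovers the plain operator):

```python
def bennet_phi(x, s, y, phi):
    """phi-corrected operator x[s]_phi y, reusing frac_exp from above;
    phi = 0 recovers the plain bennet(x, s, y)."""
    return frac_exp(frac_exp(x, -s, y) + y + phi, s, y)
```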



Previous attempts at solving for the function \(\varphi(s,x,y)\) include using the implicit function theorem, where the surface of values:



\[

F(\varphi_1,\varphi_2,\varphi_3) = x[s]_{\varphi_1}\left(x[s+1]_{\varphi_2} y\right) - x[s+1]_{\varphi_3} (y+1) = 0\\

\]



has a very planar structure, which shows that we at least have a rough existence result for the solution.



I have switched gears now: instead of finding an implicit curve on this evolving surface, I am solving an Abel equation hidden in here. We start off, then, by defining the inverse functions of these operators. These can always be found because \(x[s]y\) has monotone growth (easily checked by observing the non-zero derivative, which is easy to check on the real line). To begin, we're going to fix \(x\); it doesn't really move in the Goodstein equation, so it can be considered an initial point and is therefore largely irrelevant, so long as \(x > e\).



So let:



\[

\alpha(s,x[s]y) = y = x[s]\alpha(s,y)\\

\]



And to let things get a little difficult, let's let:



\[

\alpha_{\varphi}(s,x[s]_\varphi y ) = y = x[s]_\varphi\alpha_\varphi(s,y)\\

\]
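Numerically, both inverses are just one-dimensional root finds, since the operator grows monotonically in \(y\) on the real line. A bisection sketch in the same ad hoc style (it assumes the value \(z\) is actually attained above the lower bracket end):

```python
def alpha_phi(s, x, z, phi=0.0, lo=1.5):
    """alpha_phi(s, z): invert y -> x[s]_phi y at the value z by bisection,
    relying on the claimed monotone growth in y on the real line."""
    f = lambda t: bennet_phi(x, s, t, phi) - z
    hi = 2.0 * lo
    while f(hi) < 0:           # expand the bracket until it straddles the root
        hi *= 2.0
    for _ in range(200):       # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```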



And now our problem becomes very different. We are now asking for a function \(\varphi\) such that:



\[

\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\

\]
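At a fixed \((s,y)\), this is one more scalar root find, now in \(\varphi\). A secant sketch continuing the helpers above (and, per what follows, one should expect the search to stop converging somewhere past \(s \approx 0.2\)):

```python
def solve_phi(s, x, y, p0=0.0, p1=0.05, tol=1e-9):
    """Secant search for phi with
       alpha_phi(s+1, x[s]_phi y) = alpha_phi(s+1, y) + 1."""
    def g(p):
        lhs = alpha_phi(s + 1.0, x, bennet_phi(x, s, y, p), p)
        return lhs - alpha_phi(s + 1.0, x, y, p) - 1.0
    g0, g1 = g(p0), g(p1)
    for _ in range(60):
        if abs(g1) < tol:
            return p1
        if g1 == g0:
            break              # flat secant step; give up
        p0, p1, g0 = p1, p1 - g1 * (p1 - p0) / (g1 - g0), g1
        g1 = g(p1)
    raise RuntimeError("no phi found")
```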



And here is where things become untenable. This is precisely where everything starts to break down. But bear with me for a moment. If we find a solution to this, then we call \(\varphi(s,y)\) the various values which produce that solution. By construction, this function will now satisfy:



\[

\varphi(s,x[s]_{\varphi(s,y)} y) = \varphi(s,y)\\

\]



Which means it is idempotent. This is largely the goal from here: to construct an idempotent function. Because...



\[

x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + 1\right)\\

\]



And from here the orbits are satisfied:



\[

\underbrace{x[s]_{\varphi(s,y)}\, x[s]_{\varphi(s,y)} \cdots x[s]_{\varphi(s,y)}}_{n\,\text{times}}\, y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + n\right)

\]



And now we can affirm that these operators satisfy the Goodstein equation.



The trouble?



This formula diverges, or rather, has no solution for \(s > 0.2\). Everything works fine on the interval \(s \in [0,0.2]\), but past that point everything begins to diverge. Initially I thought it was a problem with my code, but no, this problem is something more intrinsic to this manner of solution. Namely:





\[

\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\

\]





Has no solutions for \(s > 0.2\).
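For what it's worth, this breakdown is easy to probe with the sketches above: scan \(s\) and record where the root search stops converging. The exact cutoff will depend on the numerics, but the claim is that it sits near \(s = 0.2\):

```python
x, y = 4.0, 3.0
for k in range(9):             # s = 0.00, 0.05, ..., 0.40
    s = 0.05 * k
    try:
        print(s, solve_phi(s, x, y))
    except (RuntimeError, ValueError, OverflowError):
        print(s, "no solution found")
```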








Thus enters the more difficult problem... which is probably where I should've started earlier. But, I'm here now.



\[

\alpha_{\varphi_2}(s+1,x[s]_{\varphi_1} y)= \alpha_{\varphi_2}(s+1,y) + 1\\

\]



This describes a line in \(\mathbb{R}\), and there are always solutions for it, though we are required to let \(y\) grow. The reason is that:



\[

\alpha(s+1,x[s]y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\

\]



For all \(\epsilon > 0\). This essentially means that the modified Bennet operators are so close to satisfying Goodstein's equation that the Abel equation is satisfied up to about \(O(\log(y))\). Thereby, moving \(\varphi_1\) and \(\varphi_2\) around always ensures there is at least one point at which the above equation is satisfied. In fact, there's a closed form for it. I won't write it out because it's ugly as hell, but it's always solvable using the log rules (a numerical check of this closeness follows the display below):



\[

\begin{align}

\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &= \log_{y^{1/y}}^{\circ s}(x) + y + \varphi\\

\log_{y^{1/y}}^{\circ (s-1)} \left(x [s]_\varphi y\right) &= \log_{y^{1/y}}^{\circ (s-1)}(x)\cdot y \cdot y^{\varphi/y}\\

\end{align}

\]
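And here is the numerical check promised above: with the helpers from before, the Abel-form defect of the uncorrected operator should stay on the order of \(\log(y)\) as \(y\) grows:

```python
# Abel-form defect of the uncorrected operator, compared against log(y)
x, s = 4.0, 0.5
for y in (10.0, 50.0, 250.0):
    d = alpha_phi(s + 1.0, x, bennet(x, s, y)) - alpha_phi(s + 1.0, x, y) - 1.0
    print(y, d, math.log(y))
```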





So then our problem becomes something new: we can solve for the function \(\varphi_1\) as a function of \(\varphi_2\) such that:



\[

x [s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + 1\right)\\

\]
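Given a candidate \(\varphi_2\), solving for \(\varphi_1\) is then one more scalar root find, in the same ad hoc style as before:

```python
def phi1_given_phi2(s, x, y, phi2, p0=0.0, p1=0.05, tol=1e-9):
    """Solve x[s]_{phi1} y = x[s+1]_{phi2}(alpha_{phi2}(s+1, y) + 1) for phi1."""
    target = bennet_phi(x, s + 1.0, alpha_phi(s + 1.0, x, y, phi2) + 1.0, phi2)
    def g(p):
        return bennet_phi(x, s, y, p) - target
    g0, g1 = g(p0), g(p1)
    for _ in range(60):        # same secant iteration as before
        if abs(g1) < tol:
            return p1
        if g1 == g0:
            break
        p0, p1, g0 = p1, p1 - g1 * (p1 - p0) / (g1 - g0), g1
        g1 = g(p1)
    raise RuntimeError("secant failed to converge")
```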



But in order for this to work, we still need \(\varphi_2\) to be idempotent. This should follow naturally because it satisfies the Abel equation, by which we have the orbits:



\[

\underbrace{x[s]_{\varphi_1}\, x[s]_{\varphi_1} \cdots x[s]_{\varphi_1}}_{n\,\text{times}}\, y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + n\right)

\]



The trouble now?



How to make sure this is a well-defined operator. As a functional solution it works, but I'm not sure it constructs an operator. Analyticity isn't a problem. But we essentially need to satisfy the following equation:



\[

\begin{align}

\varphi(s,y) &= \varphi_1(s,y) &&\text{for}\,\, s\in[0,1]\\

\varphi(s,y) &= \text{some new formula I can't wrap my head around,}&&\text{for}\,\, s \in [1,2]\\

\end{align}

\]



I've sort of confuddled myself into a circle here. And I'm mostly just writing this out to see if something obvious sticks out to me. But still, this has become endlessly frustrating...




Nonetheless! We're getting closer by the minute to unlocking:

\[
x \langle s \rangle y = x[s]_{\varphi(s,x,y)} y\\
\]

Such that:

\[
x \langle s \rangle \left(x \langle s+1 \rangle y\right) = x \langle s+1 \rangle (y+1)\\
\]




Here is a graph of \(3[1.5]y\) over a pretty large domain, about \(0 < \Re(y) < 20\) and \(|\Im(y)| < 10\). The artifacts are code artifacts, because I haven't found a way to let my code pass the Shell-Thron boundary smoothly:



[image: complex plot of \(3[1.5]y\)]





Here is a graph of \(3 [1.9] y\) where you can see it almost has the periodic structure of \(3^y\):



[image: complex plot of \(3[1.9]y\)]



I'll update with a plot of \(\alpha(1.9,y)\), which is almost logarithmic. I'm making a large complex plot, but the code is certainly suboptimal, so that'll probably take all night, lol.

Here is the real plot!

[image: real plot of \(\alpha(1.9,y)\)]

I'll post the complex plot when it compiles.
#2
I feel very stupid... I'm breaking my brain on it... I'm realizing that I have some weak point in truly understanding that implicit function thing... I believed it was 80% clear to me, but it's not.

Let \(f_\phi(s,y):=x[s]_\phi y\) for some fixed suitable \(x\). Starting from the assumption of the existence of a function \(\varphi(s,y)=\phi\) that satisfies
\[\alpha_{\phi}(s+1,f_\phi(s,y))= \alpha_\phi(s+1,y) + 1\]
how do you derive, step by step, the functional equation
\[\varphi(s,f_{\varphi(s,y)}(s,y)) = \varphi(s,y)\\\]
Pls, can you explain it very slowly, step by step, as if you were an algorithm, or as if I were a high school student learning how to solve 2nd-degree polynomial equations.

Because if I start from the assumption I derive the following
\[\alpha_{\varphi(s+1,f_{\varphi(s,y)}(s,y))}(s+1,f_{\varphi(s,y)}(s,y) )=\alpha_{\varphi(s+1,y)}(s+1,y)+1\]
but from here I'm lost...

About finding creative ways... I'm not sure I can help you, but when you said you wanted to switch to the Abel presentation... I was thinking you would consider what I call the inverse Goodstein equation, something that gives rise not to a hyperoperations sequence but to a hyperlogarithms sequence.
\[a_0\circ a_{s+1}=a_{s+1}\circ a_s\]
Assume the seed is decrementation by one, and assume \(\alpha_{\varphi,s}\) is the bi-indexed family of Abel functions you defined.

Why don't we just ask for parameters \(\varphi_i\) for \(i=0,1,2\) that solve
\[S^{-1}\circ \alpha_{\varphi_0,s+1}=\alpha_{\varphi_1,s+1}\circ \alpha_{\varphi_2,s}\]
i.e. a function \(\varphi(s)\) that solves
\[\forall y.\,\,  \alpha_{\varphi(s+1),s+1}(y)-1=\alpha_{\varphi(s+1),s+1}( \alpha_{\varphi(s),s}(y))\]
this will take care of all the \(y\) at the same time... even if this probably means that such triples cannot exist globally, i.e. such triples exist locally only for some \(y\in U\), but we should prove this.

This probably speaks to my ignorance of implicit functions... so let's stick to your story of the parameter depending on two variables... why don't you consider the fully Abel-like presentation?
\[\alpha_{\varphi_0}(s+1,y)-1=\alpha_{\varphi_1}(s+1, \alpha_{\varphi_2}(s,y))\]

I feel that to go deep into this matter we need to develop the theory of Goodstein maps further... we need to compare and manipulate them as we do with superfunctions... maybe by introducing the analogue of \(\theta\) mappings, but for measuring the difference between solutions to the Goodstein functional equation.

Your discussion about normal families should be part of it but I'm slow and I still have to absorb the idea.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
Question 
What would happen if you used the analytic continuation of the Kneser method to do that?
For a fixed n greater than one, using the analytic continuation of the Kneser method, how fast would n[1.5]x grow?
Would it grow half-exponentially?
Please remember to stay hydrated.
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Sincerely: Catullus /ᐠ_ ꞈ _ᐟ\
#4
(07/19/2022, 09:48 AM)MphLee Wrote: I feel very stupid... I'm breaking my brain on it... I'm realizing that I have some weak point in truly understanding that implicit function thing... I believed it was 80% clear to me, but it's not.

Let \(f_\phi(s,y):=x[s]_\phi y\) for some fixed suitable \(x\). Starting from the assumption of the existence of a function \(\varphi(s,y)=\phi\) that satisfies
\[\alpha_{\phi}(s+1,f_\phi(s,y))= \alpha_\phi(s+1,y) + 1\]
how do you derive, step by step, the functional equation
\[\varphi(s,f_{\varphi(s,y)}(s,y)) = \varphi(s,y)\\\]
Pls, can you explain it very slowly, step by step, as if you were an algorithm, or as if I were a high school student learning how to solve 2nd-degree polynomial equations.

Because if I start from the assumption I derive the following
\[\alpha_{\varphi(s+1,f_{\varphi(s,y)}(s,y))}(s+1,f_{\varphi(s,y)}(s,y) )=\alpha_{\varphi(s+1,y)}(s+1,y)+1\]
but from here I'm lost...


Hey, so I haven't written exactly how to derive this yet--I have a sketch of a proof--but numerical evidence is confirming it. I'll explain first with the case \(\varphi_2 = \varphi_1\) in the Abel equation, and call this common value \(\varphi\).

Essentially, you have to prove this with a few tricks from implicit function theory; I'll try to explain as best I can. Write it like this: fix the value \(\varphi_0 = \varphi(s,y)\) such that

\[
\alpha_{\varphi_0}(s+1, x[s]_{\varphi_0}y) = \alpha_{\varphi_0}(s+1,y) + 1
\]

Now we know that:

\[
\alpha_{\varphi_0}(s+1,x[s]_{\varphi_0}y) + 1 = \alpha_{\varphi_0}(s+1,y) + 2
\]

Additionally, we know there is a solution \(\varphi_0'\) to the equation:

\[
\alpha_{\varphi'_0}(s+1,x[s]_{\varphi'_0} x[s]_{\varphi'_0} y) = \alpha_{\varphi'_0}(s+1,x[s]_{\varphi'_0} y) + 1\\
\]

And there is a solution \(\varphi''_0\) such that:

\[
\alpha_{\varphi''_0}(s+1,x[s]_{\varphi''_0} x[s]_{\varphi''_0} y) =\alpha_{\varphi''_0}(s+1,y) + 2\\
\]

Then to prove the identity you're asking about, we have to show that \(\varphi_0 = \varphi'_0 = \varphi''_0\).

This is the tricky part I haven't ironed out yet, but it appears to be happening in the code. I'll sketch the proof I have in mind. Let \(s \to 0\); then all these values can be shown to be the same function. This is essentially because there exists one unique solution to the equation:

\[
\alpha_{\varphi_0}(1,x[s]_{\varphi_0} y ) = \alpha_{\varphi_0}(1,y) + 1\\
\]

And this unique solution also satisfies the equations of \(\varphi'_0\) and \(\varphi''_0\) at the value \(s=0\). But it additionally satisfies this equation in the complex derivative (which is enough to define analyticity and, again, uniqueness). EDIT: I mean the complex derivative in each \(\varphi\) is equivalent at \(s=0\). So since \(\varphi(0,y) = 0\) and \(\lim_{s\to 0}\frac{d}{d\varphi} \alpha_\varphi\) is the same for each of these solutions, by analytic continuation they must be the same solutions. I'd have to justify this further, but that's like 70-80% of a proof, lol. It would just be uniqueness of a differential equation, nothing too complicated. I'm too lazy to work through the details, and since this method fails at about \(s = 0.2\), I'm not going to bother.


This is still a little sketchy as a proof, but this line of reasoning should work, as the implicit solution at \(s=0\) is non-degenerate, and hence unique, and this happens at \(s=0\), where all of these equations are equivalent.

This then gives us the formula:

\[
\alpha_{\varphi_0}\left(s+1,x[s]_{\varphi_0} x[s]_{\varphi_0}y\right) = \alpha_{\varphi_0}(s+1,y) + 2\\
\]

This means that \(\varphi(s,y) = \varphi_0 = \varphi'_0 = \varphi(s,x[s]_{\varphi_0} y)\).

This is still a little shaky though. And I'm probably not going to pursue this argument further, because it becomes a truly local solution that fails around \(s = 0.2\), so it's only good near zero.


Quote:About finding creative ways... I'm not sure I can help you, but when you said you wanted to switch to the Abel presentation... I was thinking you would consider what I call the inverse Goodstein equation, something that gives rise not to a hyperoperations sequence but to a hyperlogarithms sequence.
\[a_0\circ a_{s+1}=a_{s+1}\circ a_s\]
Assume the seed is decrementation by one, and assume \(\alpha_{\varphi,s}\) is the bi-indexed family of Abel functions you defined.

Why don't we just ask for parameters \(\varphi_i\) for \(i=0,1,2\) that solve
\[S^{-1}\circ \alpha_{\varphi_0,s+1}=\alpha_{\varphi_1,s+1}\circ \alpha_{\varphi_2,s}\]
i.e. a function \(\varphi(s)\) that solves
\[\forall y.\,\,  \alpha_{\varphi(s+1),s+1}(y)-1=\alpha_{\varphi(s+1),s+1}( \alpha_{\varphi(s),s}(y))\]
this will take care of all the \(y\) at the same time... even if this probably means that such triples cannot exist globally, i.e. such triples exist locally only for some \(y\in U\), but we should prove this.

This probably speaks to my ignorance of implicit functions... so let's stick to your story of the parameter depending on two variables... why don't you consider the fully Abel-like presentation?
\[\alpha_{\varphi_0}(s+1,y)-1=\alpha_{\varphi_1}(s+1, \alpha_{\varphi_2}(s,y))\]

Hmmm, that's interesting.

The reason I don't want to use the full Abel form is that it reduces to exactly the same problem I've considered before, with a triple of \(\varphi\)'s which relate to each other in some codified way. The reason I like the Abel presentation is that for a single \(\varphi\) it does seem to be working near \(s = 0\), even if it fails past \(s = 0.2\). And I like the case with \(\varphi_1,\varphi_2\) because it reduces the three variables to two. Incidentally, it also reduces the equation, because it's asking for a periodic function in \(y\). This is a little tricky, and may very well not work, but I think it could.

This means we would have:

\[
x [s]_{\varphi_1} \left(x [s+1]_{\varphi_2} y\right) = x [s+1]_{\varphi_2} (y+1)\\
\]

This implies that \(\varphi_2(y+1) = \varphi_2(y)\). This shouldn't cause a contradiction, and it greatly simplifies the problem. Then \(\varphi_2\) would be \(1\)-periodic, which has no obvious contradictions. Additionally, when reduced into the Abel form, it looks precisely like an equation of the form:

\[
x [s]_{\varphi_1} y = f(s+1,f^{-1}(s+1,y) + 1)\\
\]

Returning to the three-variable case would mean we are taking:

\[
x [s]_{\varphi_1} y = g(s+1,f^{-1}(s+1,y) + 1)\\
\]

Which is much less symmetric.

Honestly, I jumped the gun thinking I could easily reduce 3 variables to one; so I'm going to try 3 variables to two; if that fails, then I'll go back to 3 variables. I get frustrated because the implicit solution exists, but there's no obvious limit formula except for large \(y\), where you should be able to make a kind of iterated-log argument.

Quote:
I feel that to go deep into this matter we need to develop the theory of Goodstein maps further... we need to compare and manipulate them as we do with superfunctions... maybe by introducing the analogue of \(\theta\) mappings, but for measuring the difference between solutions to the Goodstein functional equation.

Your discussion about normal families should be part of it but I'm slow and I still have to absorb the idea.


Normal families are going to play a huge role. The only theorem I've really proven hardcore to do with this is that:

\[
\alpha(s+1,x[s] y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\
\]

So if you define a neighborhood of functions of \(x[s]y\), call them \(x[s]_{\varphi}y\) (which exist), in which:

\[
\alpha_\varphi(s+1,x[s]_{\varphi} y) - \alpha_\varphi(s+1,y) - 1 = o(y^{\epsilon})\\
\]

Then these can be turned into a normal family! It's a little difficult to explain, and I'm still fuzzy on the details. I've been mostly running numerical evidence lately, and trying to make graphs, etc., so I haven't gone back to doing the hard math yet. Largely because I know I'm missing something crucial, but I don't know what yet...

But if we think of \(\varphi(y+1) = \varphi(y)\), then it is very much related to \(\theta\) mappings. \(\varphi\) was intended as the semi-operators version of \(\theta\) mappings, lol--but it's similar to the \(\tau\) error from the \(\beta\) method. Any solution to semi-operators must be a \(\varphi\)-mapping. The question is, how the fuck do you find the right \(\varphi\) mapping, lmao.
#5
(07/19/2022, 10:05 AM)Catullus Wrote: What would happen if you used the analytic continuation of the Kneser method to do that?
For a fixed n greater than one, using the analytic continuation of the Kneser method, how fast would n[1.5]x grow?
Would it grow half-exponentially?

Hmm, so this is a weird question.

First of all, you can't use Kneser to construct this, as Kneser is a tetration solution and this is an iterated exponential--outside the purview of Kneser. This construction is specifically about the repelling iteration, not the attracting iteration; nor is it based on any tetration per se, just the iterated exponential. Additionally, it is within the Shell-Thron region, and so the Kneser fixed-point-pair algorithm wouldn't make sense for the repelling case. I don't think you'd even be able to run a Kneser-like iteration about the repelling point, but I could be mistaken.

Either way, to answer your question ignoring the Kneser part: it kind of grows half-exponentially, and it kind of doesn't. It would look something like this:

\[
n [1.5] x  = \exp^{\circ 0.5}_{1+\log(x)/x+O\left((\log(x)/x)^2\right)}\left( A x\right)
\]

For some constant \(A > n\), but not much bigger than \(n\).
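For the record, the statement about the base is just the standard expansion (nothing deeper than \(x^{1/x} = e^{\log(x)/x}\)):

\[
x^{1/x} = e^{\log(x)/x} = 1 + \frac{\log(x)}{x} + O\!\left(\left(\frac{\log(x)}{x}\right)^2\right) \longrightarrow 1 \quad \text{as}\,\, x \to \infty
\]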


So this means it grows half-exponentially, but not technically, because the base \(b\) of the half-exponential tends to \(1\).


If I took:

\[
\text{slog}\left(n[1.5]x\right) \approx \text{slog}(x) + C(x)\\
\]

And \(C(x)\) would be about a half, maybe, as \(x \to \infty\); it would be around there, though probably a tad chaotic. So yeah, I guess: close enough to half-exponential. Then, technically, you could write a function:

\[
\text{tet}_K(\text{slog}(x) + C(x)) = n[1.5]x\\
\]

Which would be a Kneser form of this function; but the construction itself would be tedious with Kneser. The best I can say is \(C(x) \approx 1/2\) as \(x \to \infty\). So again, about a half-exponential.

Regards, James
#6
I'm sorry man, the more I look into this, the more I'm convinced that I'm not understanding any of it, because the more I'm convinced this thing about \(\varphi\) can't work for algebraic reasons (so, very mechanical reasons)... but I can't exactly point out where... and the only remarks that come to my mind basically amount to underestimating your algebraic manipulation skills... and I know them to be superior to mine... so I'm trapped.
I guess I'll go back to your original thread and PDF where you introduce the surface \(\Phi\).

I'll get back soon.



(07/20/2022, 09:46 PM)JmsNxn Wrote: Hey, so I haven't written exactly how to derive this yet--I have a sketch of a proof--but numerical evidence is confirming it.

I (re)discovered this is actually Theorem 1.8.3 of your paper "Analytically Interpolating Addition, Multiplication, and Exponentiation". Even if the proof is totally obscure to me... I hope that the one you are giving me now will make more sense after I read it a couple of times.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#7
Ya, it's a rough proof; I'm not perfectly confident in it either. But it cannot really be derived algebraically--it has to be derived like an initial value problem. It's alright to have doubts about it, I do too. But it also appears to be unnecessary to the grand scheme. I mean, what are you to do when you run a calculator and the two sides agree to 9 digits at 9 digits of precision? Either they're deceptively close and the proof is wrong, or it just happens.

It does make sense that it would happen though, because it implies that:

\[
\alpha_{\varphi_0}\\
\]

is a true Abel function of \(x [s]_{\varphi_0} y\). It's not too crazy if you think of it like that. But it is still difficult, I agree.

Just know, I too am not satisfied with the proof, it's just a sketch of why it should happen, based on an initial value problem.

I think the key to visualizing it is tricky, which is why looking at the 1-periodic case is much more acceptable.

In the three-variable case \({\bf\varphi} = (\varphi_1,\varphi_2,\varphi_3)\), we can already derive that \(\varphi_3(y-1) = \varphi_2(y)\). So to ask that it instead be \(1\)-periodic isn't much of a stretch, and there are solutions on the surface \(\Phi\) that satisfy this. The question would be: does this make a well-defined operator, or is this just solving the Abel equation without really being a true operator \((e,\infty) \times (e,\infty) \to (e,\infty)\)?

So many questions, so many headaches, lmao.