Arguments for the beta method not being Kneser's method
#1
ALL OF MY CODE IS DRAWN FROM MY GITHUB REPOSITORY FOR THE BETA METHOD
ALL OF SHELDON'S CODE IS DRAWN FROM fatou.gp

https://github.com/JmsNxn92/Recursive_Tetration_PARI

https://math.eretrandre.org/tetrationfor...p?tid=1017



So I thought I'd compile a list of arguments I have that the beta method tetration is not Kneser's solution. At this point I am more than confident the beta method produces a holomorphic tetration. The code is still suboptimal; I'm working on a matrix add-on to the existing code that creates a grid of Taylor series at sample points, which should give good enough precision for large imaginary arguments.

To begin, we load fatou.gp and Abel_L.gp.

I have coded my superexponential at base e as \( \text{Sexp(z,n,{v=0})} \), which works precisely on the real line and can grab Taylor series on the real line precisely. It will glitch for values like \( 1 + 0.5i \), but once the imaginary argument exceeds \( 1.1i \) it works fine. For complex arguments near the real line, expand a Taylor series on the real line and sum it; this is in the code dump. The variable n is the depth of iteration; here we'll set it to 100, which is large enough for about 100 digit accuracy. The variable v is a flag for whether you want to pull out Taylor series or not; the explanation is in the comments of the code.

Sheldon has coded the Kneser superexponential at base e: first the initialization \( \text{sexpinit(exp(1))} \), then \( \text{sexp(z)} \).

The first thing we'll add is a graph comparing these two solutions on the real line. This is the result of \( \text{ploth(X=-1,2,Sexp(X,100) - sexp(X))} \).

[attached plot: \( \text{Sexp}(X,100) - \text{sexp}(X) \) for \( -1 \le X \le 2 \)]


You can see immediately that Kneser's solution grows a tad faster than the beta solution, and that they still agree on the natural numbers. The difference is subtle, though.
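The agreement at the natural numbers is automatic: any tetration with \( F(0)=1 \) and \( F(z+1)=\exp(F(z)) \) is pinned down there. A minimal Python sketch (the thread's actual code is PARI/GP; the helper name here is mine):

```python
import math

# Any tetration F with F(0) = 1 and F(z+1) = exp(F(z)) is forced to take
# the values 1, e, e^e, e^(e^e), ... at the nonnegative integers, so every
# candidate solution agrees there.
def tet_int(n):
    v = 1.0
    for _ in range(n):
        v = math.exp(v)
    return v
```

This is why the plotted difference vanishes at the integer points of the interval.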

Here's the beta method tetration over \( -1 \le \Re(z) \le 4 \) for \( |\Im(z)| \le 0.9 \).


[attached plot: beta method tetration over the stated range]

And here's the Kneser tetration over \( -1 \le \Re(z) \le 4 \) for \( |\Im(z)| \le 1 \).

[attached plot: Kneser tetration over the stated range]

Both are clearly holomorphic, but they have an obvious disagreement on the real line.

This is my first argument as to why the beta method is not Kneser's.




The second argument is topological in nature. For this we will call Kneser's solution \( \text{tet}_K(z) \) and the beta method \( \text{tet}_\beta(z) \).

Kneser's method is similar to the standard iteration of a Schröder function about the fixed points \( L, L^* \) of \( \exp(z) \) with minimal imaginary argument. That is, we can find a pair of Schröder functions \( \Psi, \Psi^* \) and a pair of theta mappings \( \theta, \theta^* \) such that,

\(
\text{tet}_K(z) = \Psi(e^{Lz}\theta(z))\,\,\text{for}\,\,\Im(z) > 0\\
\text{tet}_K(z) = \Psi^*(e^{L^*z} \theta^*(z))\,\,\text{for}\,\,\Im(z) < 0\\
\)

where we have the relation \( \text{tet}_K(z^*) = \text{tet}_K(z)^* \), which allows a real-valued analytic extension to \( z \in (-2,\infty) \).
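As a quick sanity check on these formulas (standard, but worth spelling out): the theta mappings are 1-periodic, \( \Psi \) here acts as the inverse Schröder (Poincaré) function satisfying \( \Psi(Lw) = \exp \Psi(w) \), and \( e^{L} = L \) because \( L \) is a fixed point of exp. Then

```latex
\text{tet}_K(z+1) = \Psi\left(e^{L(z+1)}\,\theta(z+1)\right)
                  = \Psi\left(L\, e^{Lz}\theta(z)\right)
                  = \exp \Psi\left(e^{Lz}\theta(z)\right)
                  = \exp \text{tet}_K(z)
```

so \( \text{tet}_K \) satisfies the tetration equation in the upper half plane; the conjugate relation handles the lower half plane.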

The beta method is vastly different. To begin, we solve for a function \( \varphi_\lambda(z) \), for \( \Re(\lambda) > 0 \), such that,

\(
\varphi_\lambda(e^{-\lambda}z) = \exp \varphi_\lambda(z)\\
\varphi_\lambda(0) = \infty\\
\)

This produces a collection of tetration functions,

\(
F_\lambda(s) = \varphi_\lambda(e^{-\lambda s})\\
\)

with period \( 2 \pi i / \lambda \) and many branch cuts/singularities. Regardless, this tetration is holomorphic on \( \mathcal{U} \subset \mathbb{C} \) with \( \overline{\mathcal{U}} = \mathbb{C} \), which means it's holomorphic almost everywhere. Because of its periodicity, we can think of \( F_\lambda \)'s domain topologically as an almost-cylinder.

This means,

\(
F_\lambda : \mathcal{U} \to \mathbb{C}\\
\overline{\mathcal{U}} \simeq \mathbb{C} \big/ \tfrac{2\pi i}{\lambda}\mathbb{Z}\\
\)
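Both the tetration equation and the claimed period follow directly from the definition of \( F_\lambda \):

```latex
F_\lambda(s+1) = \varphi_\lambda\left(e^{-\lambda}\, e^{-\lambda s}\right)
              = \exp \varphi_\lambda\left(e^{-\lambda s}\right)
              = \exp F_\lambda(s),
\qquad
F_\lambda\left(s + \frac{2\pi i}{\lambda}\right)
              = \varphi_\lambda\left(e^{-\lambda s}\, e^{-2\pi i}\right)
              = F_\lambda(s)
```

using the functional equation \( \varphi_\lambda(e^{-\lambda}z) = \exp \varphi_\lambda(z) \) with \( z = e^{-\lambda s} \).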

Now we can normalize these tetrations so that the singularities appear as \( z \) approaches the boundary of this cylinder. We get something like the following for \( F_{1+0.1i}(s) \): the singularities occur on the boundary of the cylinder, but within the strip everything is normal and nice and looks like tetration.

[attached plot: \( F_{1+0.1i}(s) \) on its cylinder]

The manner in which we construct \( \text{tet}_\beta \) is to move these singularities to \( \Im(z) = \pm \infty \). Think of it as stretching the interior of the cylinder onto the upper and lower half planes, and putting the boundaries at infinity. This is done by using an implicit function \( \lambda(s) \) which we insert into the limiting construction. I chose \( \lambda(s) = \frac{1}{\sqrt{1+s}} \) for simplicity, but any similar mapping will work.
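To make the "period going to infinity" picture concrete, here is a small Python sketch of the example choice \( \lambda(s) = 1/\sqrt{1+s} \) (function names are mine, not the repository's):

```python
import cmath

# Sketch of the example stretching function lambda(s) = 1/sqrt(1+s) and the
# resulting period 2*pi*i/lambda(s) of the frozen-lambda tetration F_lambda.
# As Re(s) grows, lambda -> 0, so the period tends to infinity: the cylinder
# is stretched out toward the half planes.
def lam(s):
    return 1 / cmath.sqrt(1 + s)

def period(s):
    return 2j * cmath.pi / lam(s)
```

For instance \( |2\pi i/\lambda(99)| = 20\pi \), and the period keeps growing as \( \Re(s) \to \infty \), which is the stretching described above.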

Now, the reason this matters; is that, theoretically when we take,

\(
\lim_{\Im(z) \to \infty} \text{tet}_\beta(z)\\
\)

we are technically approaching the boundary of the cylinder. And on this boundary we have all of the singularities of \( F_\lambda \). So we should expect (I haven't been able to prove this satisfactorily yet):

\(
\lim_{\Im(z) \to \infty} \text{tet}_\beta(z) = \infty\\
\)

Or at least, that,

\(
\text{tet}_\beta(z) \not\to L\,\,\text{as}\,\,\Im(z) \to \infty\\
\)

Whereas,

\(
\lim_{\Im(z) \to \infty} \text{tet}_K(z) = L\\
\)

This would instantly show that the beta method produces a vastly different tetration than Kneser's method.


The third argument I'm bringing to the table is the obvious one.

Nowhere in my construction do I need to compute a theta mapping. As the theta mapping is by far the most crucial ingredient of Kneser's construction (and Sheldon's fatou.gp), it would be absurd to think we could arrive at Kneser's solution without one. It would be surprising, to say the least; it would bypass a lot of the finesse of Kneser's construction, and even Kouznetsov's approach. Something I highly doubt is possible.

The beta method makes zero mention of the fixed points \( L,L^* \). It is constructed entirely from the exponential function's behaviour at infinity. No data, no Schröder function, no asymptotics involve the values \( L,L^* \). I think it would be quite surprising and quite incredible if this were Kneser's, which is why I further doubt it is.



In conclusion, I believe the beta method produces a novel tetration, one that diverges at \( \Im(z) = \pm \infty \), which is drastically different from Kneser's. I'm still working on constructing a proof, but I am definitely getting there. I need an out-of-the-box thought, though, to complete the argument.

Regards, James
#2
Graphically they're obviously two different functions... Also, the limit at imaginary infinity seems a good argument, because in that way the two solutions have a substantial "topological" difference.

About "stretching the cylinder": I suspect there are many ways to do that. Is it possible that we have no way to compare those ways of stretching the cylinder?
I mean... something like measuring where the purely imaginary tetrations go... idk... like tet(i)... (secretly I'm thinking about an Euler identity for tetration)

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#3
(06/10/2021, 12:41 PM)MphLee Wrote: Graphically they're obviously two different functions... Also, the limit at imaginary infinity seems a good argument, because in that way the two solutions have a substantial "topological" difference.

About "stretching the cylinder": I suspect there are many ways to do that. Is it possible that we have no way to compare those ways of stretching the cylinder?
I mean... something like measuring where the purely imaginary tetrations go... idk... like tet(i)... (secretly I'm thinking about an Euler identity for tetration)

The best way I could think of was measuring the period in \( z \); this is difficult to explain, though.

In many ways,

\(
\text{tet}_\beta(z) = \lim_{n\to\infty} \lim_{\lambda \to 0} \beta_\lambda(z+n+x_0)\,\,\text{for}\,\,\lambda = \mathcal{O}(n^{-\epsilon})\\
\)

So the sequence has a period \( \ell_n \) in \( z \) of about,

\(
\ell_n = 2 \pi i \mathcal{O}(n^{\epsilon})\\
\)

So what we are doing is sending \( \ell_n \to \infty \). Really, we're mapping \( \ell_n/2 \to \infty \) and \( -\ell_n/2 \to -\infty \).

I can't think of any non-obvious way to do this. The furthest I really got was using the implicit function theorem.

Let

\(
F_\lambda(z) - \text{tet}_\beta(z) = 0
\)

which can define an implicit local solution \( \lambda(z) \). This function will (as far as I can tell) split into \( \lambda^+ \) and \( \lambda^- \) for the upper (resp. lower) half plane. Then,

\(
\lambda^+(z) = \sum_{k=1}^\infty c_k e^{2 \pi i k z}\\
\lambda^-(z) = \sum_{k=1}^\infty \overline{c_k} e^{-2\pi i k z}\\
\)
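As a side note on why this splitting is natural (assuming the coefficients decay enough for convergence on the real line): a series in only positive frequencies is bounded termwise by

```latex
\left|\lambda^{+}(z)\right| \le \sum_{k=1}^{\infty} |c_k|\, e^{-2\pi k \Im(z)} \to 0 \quad \text{as} \quad \Im(z) \to +\infty
```

so \( \lambda^{+} \) extends holomorphically to the upper half plane and vanishes at \( i\infty \); the conjugate series does the same in the lower half plane.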

But this is about as far as I got, because using the implicit function theorem feels like a cop-out. I'm still trying to think of a better way to construct the actual "stretching" function \( \lambda(z) \), other than the limit definition.
#4
Here's further evidence this isn't Kneser's tetration.

Here is \( \text{tet}_\beta(z) \) for \( 45 \le \Im(z) \le 50 \) and \( -1 \le \Re(z) \le 4 \). It's diverging to infinity pretty fast, which again means it's not going to be Kneser's tetration.

[attached plot: \( \text{tet}_\beta(z) \) over the stated range]
#5
I'm so sad. Why can't holomorphy uniquely determine tetration?
This is like when you spend a lot of time solving an equation and finally find that it is underdetermined.
#6
(07/07/2021, 11:00 AM)Ember Edison Wrote: I'm so sad. Why can't holomorphy uniquely determine tetration?
This is like when you spend a lot of time solving an equation and finally find that it is underdetermined.

Honestly, it makes sense to me. Kneser's solution is characterized by the Schröder functions about \( L,L^* \), and by decay to these constants as \( \Re(z) \to -\infty \) and \( |\Im(z)| \to \infty \). Really, you can think of Kneser's solution as the only real-valued iteration about the fixed points \( L,L^* \); so there is uniqueness. The trouble is that these solutions are highly non-unique on the real line; even just throwing a periodic function into the mix can wig things out. But, I mean, why not just pick a fixed point out of a hat? That's how I sort of feel about Kneser.

I believe William Paulsen and Samuel Cowgill's uniqueness condition is quite beautiful (it is largely based around Henryk and Dmitrii's work). Personally though, I'm very opposed to using fixed points; it just feels unnatural to me--sort of arbitrary, like "why that fixed point and not this one?". A kind of fixed point at infinity seems a bit more natural to me. Also, tetration diverging to infinity as we increase the imaginary argument seems more anomalous--I think it represents well just how wacky tetration is. If anything, I've just thrown a wrench in the gears, but I think it's a good thing. Have you seen what proposed pentations/hexations/septations look like with Kneser?--they look less than desirable.

I think at this point, in the quest for "the right tetration," whichever one runs faster and simpler and solves the storage of large numbers in a better way will probably win out. It's definitely Kneser's at the moment. I still feel Kneser's is the superior tetration, simply because it's much better behaved and Taylor series are much easier to grab. I'm still having trouble making a non-glitching program. God damn overflow errors. Need a perfect Turing machine with geometric convergence speeds.

Regards, James
#7
Kneser's tetration was conjectured to have no singularities in the upper half plane (for \( \Im(z) > 0 \)).
Or was that Kouznetsov? If I recall correctly, both... implying they are identical and unique.

I could be wrong though.

After all these years, I see little public info about singularities (apart from trivial log singularities at expected places).

Not that it is easy though. For a Taylor series we try to prove the radius of convergence for a function expansion somewhere... usually by special properties or by patterns and asymptotics in the n-th derivatives.

But usually we do not get a nice Taylor series with proven trends.

We get something for which no efficient radius-of-convergence method is known or is efficient.
Converting to a Taylor series often does not help immediately and feels like a restatement of the problem or nonconstructive circular logic.

Apart from uniqueness by singularities there is the idea of uniqueness by bounds (sometimes proven, sometimes conjectured... I'm not even sure without thinking about it first).

Those are just my impressions though, and I could be wrong.

Also, I just scratched the surface of the large amount of things that could be said about it.


regards

tommy1729
#8
(07/21/2021, 07:13 PM)tommy1729 Wrote: Kneser's tetration was conjectured to have no singularities in the upper half plane (for \( \Im(z) > 0 \)).
Or was that Kouznetsov? If I recall correctly, both... implying they are identical and unique.

I could be wrong though.

After all these years, I see little public info about singularities (apart from trivial log singularities at expected places).

Not that it is easy though. For a Taylor series we try to prove the radius of convergence for a function expansion somewhere... usually by special properties or by patterns and asymptotics in the n-th derivatives.

But usually we do not get a nice Taylor series with proven trends.

We get something for which no efficient radius-of-convergence method is known or is efficient.
Converting to a Taylor series often does not help immediately and feels like a restatement of the problem or nonconstructive circular logic.

Apart from uniqueness by singularities there is the idea of uniqueness by bounds (sometimes proven, sometimes conjectured... I'm not even sure without thinking about it first).

Those are just my impressions though, and I could be wrong.

Also, I just scratched the surface of the large amount of things that could be said about it.


regards

tommy1729

What I was pointing out is that Samuel Cowgill and William Paulsen proved a uniqueness condition.

\(
F(z)\,\,\text{is holomorphic for}\,\,\Im(z) >0\\
F(0) = 1\\
F(z^*) = F(z)^*\\
F(z+1) = \exp(F(z))\\
\lim_{\Im(z) \to \infty} F(z) = L\\
\exp(L) = L\,\,\text{and it has minimal imaginary argument of all of exp's fixed points}\\
F : \mathbb{R}^+ \to \mathbb{R}^+\\
\Rightarrow\,\,F\,\,\text{is Kneser's Tetration}\\
\)
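The condition \( \lim_{\Im(z) \to \infty} F(z) = L \) refers to a concrete constant. As a quick numerical aside (a Python sketch; names are mine, and the thread's own numerics are in PARI/GP), one can locate \( L \) by Newton's method:

```python
import cmath

# Locate L, the fixed point of exp with minimal positive imaginary part
# (exp(L) = L), via Newton's method on f(z) = exp(z) - z.  The starting
# point is chosen near the known value L ~ 0.318131505 + 1.337235701i.
def fixed_point_L(z=0.3 + 1.3j, steps=60):
    for _ in range(steps):
        ez = cmath.exp(z)
        z = z - (ez - z) / (ez - 1)
    return z
```

This converges to \( L \approx 0.3181 + 1.3372i \), the constant that Kneser's tetration approaches at imaginary infinity and that the beta method is claimed to avoid.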

This implies that Kouznetsov's solution is Kneser's, unless it has singularities in the upper half plane. The \( \beta \)-method changes only one thing: \( \lim_{\Im(s) \to \infty} F(s) = \infty \). Kouznetsov has just found a different construction from Kneser's--but it is Kneser's, at least according to Paulsen and Cowgill. Their paper is peer reviewed and absolutely phenomenal; they really bring it down to earth. I suggest reading it.

The main difference in my method is divergence at imaginary infinity. If it doesn't diverge, it's just Kneser's.

As to Taylor series, it isn't quite as backwards as you may be thinking. I'm mostly referring to it as a programming strategy in PARI/GP, not actually calculating the Taylor series analytically. It makes the programming somewhat more accurate and avoids the nasty hairs I keep seeing everywhere.

Regards.
#9
(07/22/2021, 03:47 AM)JmsNxn Wrote:

What I was pointing out is that Samuel Cowgill and William Paulsen proved a uniqueness condition.

\(
F(z)\,\,\text{is holomorphic for}\,\,\Im(z) >0\\
F(0) = 1\\
F(z^*) = F(z)^*\\
F(z+1) = \exp(F(z))\\
\lim_{\Im(z) \to \infty} F(z) = L\\
\exp(L) = L\,\,\text{and it has minimal imaginary argument of all of exp's fixed points}\\
F : \mathbb{R}^+ \to \mathbb{R}^+\\
\Rightarrow\,\,F\,\,\text{is Kneser's Tetration}\\
\)

This implies that Kouznetsov's solution is Kneser's, unless it has singularities in the upper half plane. The \( \beta \)-method changes only one thing: \( \lim_{\Im(s) \to \infty} F(s) = \infty \). Kouznetsov has just found a different construction from Kneser's--but it is Kneser's, at least according to Paulsen and Cowgill. Their paper is peer reviewed and absolutely phenomenal; they really bring it down to earth. I suggest reading it.

The main difference in my method is divergence at imaginary infinity. If it doesn't diverge, it's just Kneser's.

As to Taylor series, it isn't quite as backwards as you may be thinking. I'm mostly referring to it as a programming strategy in PARI/GP, not actually calculating the Taylor series analytically. It makes the programming somewhat more accurate and avoids the nasty hairs I keep seeing everywhere.

Regards.

Agreed.

The beta method (with my Gaussian method in mind, but probably all analytic beta methods) is not Kneser's.

Certain Riemann-type mappings might, however, relate them
(partitions are related).

Maybe I come back to that later.

Do you have a link to the paper?

regards

tommy1729
#10
http://myweb.astate.edu/wpaulsen/tetration2.pdf

