The ultimate beta method
#1
Let's take a beta function:

\[
\beta(s+1) = e^{\beta(s)}/(1+e^{-s})\\
\]
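As a quick numerical sanity check on this recurrence, here's a minimal sketch (an assumption on my part: this is the multiplier-1 beta function, built by iterating the recurrence forward from deep in the left half plane, where \(\beta\) is essentially \(0\); the function name `beta` is just mine):

```python
import cmath

def beta(s, depth=200):
    """Evaluate beta by iterating the recurrence
    beta(t+1) = exp(beta(t)) / (1 + exp(-t))
    forward from t = s - depth, where the huge denominator
    forces beta to be essentially 0."""
    z = 0.0
    for j in range(depth, 0, -1):
        t = s - j                    # t walks from s - depth up to s - 1
        z = cmath.exp(z) / (1 + cmath.exp(-t))
    return z

# recurrence residual: beta(s+1) vs exp(beta(s)) / (1 + exp(-s))
s = 0.5 + 0.25j
lhs = beta(s + 1)
rhs = cmath.exp(beta(s)) / (1 + cmath.exp(-s))
print(abs(lhs - rhs))  # tiny
```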

Define a curve \(\gamma\) as the level set \(\gamma = \{s : |\beta'(s)| = 1\}\).

Secondarily, define a curve \(\rho\) such that, for Sheldon/Kneser's function, \(\rho = \{s : |\text{tet}_K'(s)| = 1\}\). Note that \(\frac{d}{dz}\log(f(z+1)) = \frac{\frac{d}{dz}e^{f(z)}}{e^{f(z)}} = f'(z)\) whenever \(f(z+1) = e^{f(z)}\). We know from the Kneser equation that:

\[
|\text{tet}_K'(s)| = 1
\]

is a continuous, analytic arc that encircles zero. If we can write a beta function that mimics these properties, we should be fine. But we are testing where and when:

\[
|\beta'(z)| \approx |f'(z)|\\
\]

Where we can uniquely define \(f\) and \(f'\) as the functions such that:

\[
\begin{align}
f(z+1) &= e^{f(z)}\\
|f'(z)| &= 1 = |\frac{d}{dz}\log(f(z+1))| = |\frac{f'(z+1)}{f(z+1)}|\\
\end{align}
\]

This forces a single-variable solution, which turns out to be Kneser's solution near \(z \approx 0\). We're integrating around zero here.

The central idea is that if \(|\beta'(z)| = 1\) creates a path; then this maps to a circle on Kneser's tetration around \(\text{tet}_K(-1) = 0\).

This is all I'm willing to share before I have my code right. But I see how to turn any \(\beta\) function into a tetration.

You can also see Kneser as the unique function \(f\) such that:

\[
|f'(z+1)| =|e^{f(z)}|\\
\]

if we assume that \(|f'(z)| =1\).........

This is explained through basic level set calculus....
#2
(04/08/2023, 10:41 PM)JmsNxn Wrote: You can also see Kneser as the unique function \(f\) such that:

\[
|f'(z+1)| =|e^{f(z)}|\\
\]

if we assume that \(|f'(z)| =1\).........

This is explained through basic level set calculus....

What ?!?

That seems like a universal tetration property for complex continuous (not even analytic required) solutions.

f(z+1) = exp(f(z))

so by the chain rule

f ' (z+1) = exp'(f(z)) f'(z) = exp(f(z)) * a = exp(f(z) + ln(a))

so if f ' (z) = a with |a| = 1 (a on the unit circle):

| f ' (z+1) | = | exp'(f(z)) f'(z) | = | exp(f(z)) * a | = | exp(f(z) + ln(a)) | = |a| | exp(f(z)) | = | exp(f(z)) |

QED

All we need is some vague notion of differentiability--enough to define at least a weak derivative consistent over the domains and ranges considered.



regards

tommy1729
#3
YEAH I KNOW TOMMY!

I WAS SUPER EXCITED LAST NIGHT AND DIDN'T PROVIDE ENOUGH DETAILS!

But essentially there's a level set \(\tau\) on Kneser's tetration that would be mapped to \(\gamma\), where \(|\beta'(\gamma(t))| = 1\) and \(|\text{tet}_K'(\tau(t))| = 1\). THIS LOOKS SUPER FUCKING PROMISING! The exact formula should also be unique. Define a conjugating function, WHICH MUST EXIST BY RIEMANN, such that \(\psi(\gamma(t)) = \tau(t)\). We can be sure that \(|\gamma'(t)| = 1\) and \(|\tau'(t)| = 1\). Then, forcing \(\psi(0) = 0\) and \(\psi(-1) = -\infty\), we should have

\[
\psi(\text{tet}_K(z)) = \beta(\psi(z))\\
\]

I've already reduced the problem into the statement that:

\[
T(z) = \psi^{-1}(\beta(\psi(z)))\\
\]

And we must have:

\[
T'(z) = e^{ih(z)}\text{tet}_K'(z)\\
\]

where \(h\) is real valued on \(\tau\)--this value will turn out to be precisely zero, with some magic. Honestly, I'd considered this before, but I always thought it was a shot in the dark until I actually graphed the line \(|\beta'(z)| = 1\) across \(\beta(z)\); and it's an unbroken Jordan curve from \(-\infty\) to \(\infty\). It should also be relatively easy to construct \(\psi\), and it should be possible to compute it rather effortlessly! Just need to do some contour integration.

The real trouble is that I only have a good idea of how to do this under the assumption that Kneser is already constructed. I'm having trouble constructing \(\tau\) from scratch. But if you have any ideas on how to construct \(\tau\) from scratch, please share them! This would show that we can construct Kneser from knowledge of a single unbroken level set. I'm focusing on implementing a fairly accurate rudimentary Kneser function from Sheldon's program at the moment; that doesn't conflict with my code base. As great as Sheldon's program is, it's a logistical nightmare and doesn't integrate well with my own code, which follows the conventions of PARI much more closely. I should have some graphs up later tonight.

Take a look at this graph of \(\beta(z)\) with the level set \(|\beta'(z)| = 1\) for the moment! PERFECT CONTOUR JUST BEGGING TO BE FIDDLED WITH! I think the big hurdle will be showing that the left of this contour is a simply connected domain; which, honestly, doesn't even look like much of a hurdle! Then a standard Riemann mapping should take care of the rest!

[attachment: graph of \(\beta(z)\) with the level set \(|\beta'(z)| = 1\)]
#4
ALRIGHT! I GOT IT!

The following function \(\psi(z)\) takes tetration to the beta method with multiplier \(1\). I'm not going to go into too much detail about how I found this, but I've pulled two all-nighters testing my hypothesis and it appears to be working.

We begin by restricting to \(\Im(s) > 0\); a similar analysis works for the lower half plane, just exchange \(L\) (and \(P\)) for their conjugates. We start by writing a very, very strange beast:

\[
\psi(s) = \Omega_{j=1}^\infty \frac{e^z}{1+e^{-\log^{\circ j}(s)}}\,\bullet z \Big{|}_{z=P}\\
\]

Now, this object only converges when we have the value \(z=P\). \(P\) is the unique value such that:

\[
\frac{e^P}{1+1/L} = P\\
\]

and is the solution nearest to \(L\). Numerical calculation gives:

\[
P = 0.1568938457\ldots + i\,0.8419125707\ldots
\]
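Both constants are easy to reproduce numerically. Here's a sketch, under my assumption that \(P\) is the attracting fixed point of \(z \mapsto e^z/(1+1/L)\) (its multiplier there is \(P\) itself, with \(|P| < 1\)), reachable by plain iteration from \(L\):

```python
import cmath

# L: the fixed point of exp in the upper half plane, e^L = L.
# Iterating the principal log from an upper-half-plane seed converges to it.
L = 1 + 1j
for _ in range(500):
    L = cmath.log(L)

# P: solves e^P / (1 + 1/L) = P.  The multiplier of z -> e^z/(1+1/L)
# at this fixed point is P itself, and |P| < 1, so iteration converges.
P = L
for _ in range(2000):
    P = cmath.exp(P) / (1 + 1 / L)

print(L)  # ~ 0.3181315 + 1.3372357i
print(P)  # ~ 0.1568938 + 0.8419126i
```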

This is just your run of the mill infinite composition. So to check convergence we check that:

\[
\sum_{j=1}^\infty \left| \frac{e^P}{1+e^{-\log^{\circ j}(s)}} - P\right| < \infty\\
\]

Now we know that for large \(j > J\), \(\log^{\circ j}(s) = L + \frac{\log^{\circ (j-1)}(s)-L}{L} + O\left(\left(\log^{\circ (j-1)}(s)-L\right)^2\right)\), which ends up looking like \(L + O(1/L^j)\). This is pretty standard in dynamics; the upper half plane \(\Im(s) > 0\) is in the attracting basin of the fixed point \(L\). Then from here, we know that:

\[
\frac{e^P}{1+e^{-\log^{\circ j}(s)}} = P + O(\frac{1}{L^j})\\
\]

which gives a convergent series. It's a little technical to show this specific function converges, but I can show it; it's just ugly as hell. The rigorous statement is that if we assume \(z_j \to P\) and \(\sum_j |z_j - P| < \infty\), then:

\[
\sum_{j=1}^\infty \left| \frac{e^{z_j}}{1+e^{-\log^{\circ j}(s)}} - P\right| < \infty\\
\]

This is precisely what our infinite composition looks like; so we're golden and:

\[
\psi(s)\,\,\text{is holomorphic for}\,\,\Im(s) > 0\\
\]

We can equally find this function for \(\Im(s) < 0\), just write \(\overline{\psi(\overline{s})} = \psi(s)\).
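Here's a minimal numerical sketch of how I'd evaluate this (my own reading of the notation: the composition nests outward, with \(\varphi_j(z) = e^z/(1+e^{-\log^{\circ j}(s)})\) applied so that \(\varphi_1\) is outermost, seeded with \(z = P\) at the innermost level; `psi` and `fixed_point_L` are names I made up):

```python
import cmath

def fixed_point_L():
    """Fixed point of exp in the upper half plane (e^L = L),
    found by iterating the principal log from an upper-half-plane seed."""
    z = 1 + 1j
    for _ in range(500):
        z = cmath.log(z)
    return z

def psi(s, depth=120):
    """psi(s) = phi_1(phi_2(...phi_depth(P)...)) with
    phi_j(z) = exp(z) / (1 + exp(-log^{o j}(s))).
    Intended for Im(s) > 0 (iterated logs converge to L there)."""
    L = fixed_point_L()
    P = L
    for _ in range(2000):              # attracting fixed point of z -> e^z/(1 + 1/L)
        P = cmath.exp(P) / (1 + 1 / L)
    logs = []
    t = s
    for _ in range(depth):             # log(s), log(log(s)), ... -> L
        t = cmath.log(t)
        logs.append(t)
    z = P                              # seed at the innermost level
    for t in reversed(logs):           # compose outward: phi_depth first, phi_1 last
        z = cmath.exp(z) / (1 + cmath.exp(-t))
    return z

# check the functional equation psi(e^s) = e^{psi(s)} / (1 + e^{-s})
s = 0.5 + 0.5j                         # Im(s) in (0, pi), so log(e^s) = s
residual = abs(psi(cmath.exp(s)) - cmath.exp(psi(s)) / (1 + cmath.exp(-s)))
print(residual)                        # tiny (limited by truncation depth)
```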

Now this function satisfies:

\[
\psi(e^s) = \frac{e^{\psi(s)}}{1+e^{-s}}
\]

So if I write:

\[
\psi(\text{tet}_K(z)) = h(z)\\
\]

Then:

\[
h(z+1) = \psi(e^{\text{tet}_K(z)}) = \frac{e^{h(z)}}{1+e^{-\text{tet}_K(z)}}\\
\]

This is the unique equation satisfied by \(\beta\) though; and therefore:

\[
\psi(\text{tet}_K(z)) = \beta(s)\\
\]

once we've made the appropriate change of variables. This becomes increasingly difficult but should certainly be doable. I may have to edit some things, but we're really fucking close. Using Kneser's slog will probably make an appearance at some point to get this perfect... I hope not, but I remain optimistic nonetheless.

For large \(\Re(z)\) we have that \(\psi(z) \to z\), as expected; and we map \(\psi(L) = P\) and \(\psi(\overline{L}) = \overline{P}\)--exactly as one would hope. I was screwing up earlier because I was trying to map the level sets exactly, through a conjugation. This is much cleaner, and maps the level set to whatever Kneser is doing here... But the fact that the level set is so well behaved is a great marker. It means that:

\[
\left|\psi'(\text{tet}_K(z))\big{|}_{z \in \gamma}\right| = \left|\frac{1}{\text{tet}_K'(z)}\big{|}_{z \in \gamma}\right|
\]

This describes an unknown level set with a known level set. It might make sense here to change the level set from \(|\beta'(z)| = 1\) into something more relaxed. Nonetheless the function \(\psi\) is constructed. And it's fucking beautiful! I'm working on coding it at the moment, and graphing it. Currently my code is super super super fucking slow; but it's kinda getting the job done.

The math should be solid for this; but I haven't proven everything yet. This is what I like to refer to as a "fringe infinite composition"--so you have to be very careful with how you evaluate it.

Finding \(\psi\) seems super easy now, and from it we have the inverse:

\[
\text{tet}_K(z) = \psi^{-1}(\beta(s))\\
\]

The change of variables from \(s\) to \(z\) is the hard part for the moment; I'll figure it out. One step at a time!

And I think these should be relatively untaxing in a computational sense. We can develop the Taylor series of \(\psi\) near infinity rather efficiently, because \(\psi(z) \approx z\) there, and the drop-off should be like \(\psi(z) = z + O(e^{-z})\)...

This is "converting beta to Kneser" at its finest. We don't even need Kneser to do it!

Regards, James

HERE'S A BEAUTIFUL GRAPH FROM \(|\Re(z)| < 3\) and \(|\Im(z)| < 3\) of the function \(\psi\) such that:

\[
\psi(e^z) = \frac{e^{\psi(z)}}{1+e^{-z}}\\
\]

[attachment: graph of \(\psi\) for \(|\Re(z)| < 3\), \(|\Im(z)| < 3\)]

Here's an even bigger graph for \(|\Re(z)| < 8\) and \(|\Im(z)| < 8\)

[attachment: graph of \(\psi\) for \(|\Re(z)| < 8\), \(|\Im(z)| < 8\)]

The hypothesis for now is that:

\[
\psi(z) = s\\
\]

where, under the \(\beta\) mapping, the map \(z \mapsto e^z\) corresponds to \(s \mapsto s+1\), i.e. \(\beta(s+1) = \frac{e^{\beta(s)}}{1+e^{-s}}\). I need some time to think on this more, but it's still crazy that \(\psi\) is so easily found! Even if this isn't exactly the \(\psi\) we're looking for, it's really close!

The exact formula we are seeing right now is that:

\[
\psi(\text{tet}_K(z+1)) = \beta(\text{tet}_K(z) + 1)\\
\]

Assuming that \(\text{tet}_K(z) \approx L\) this formula is super accurate. I'm having trouble going from here to the next step. I'm going to focus on coding \(\psi^{-1}\) first; which does the opposite transformation...

I'm still thinking on this, but I think I'm nearly there.
#5
So, I screwed up my graphs of \(\psi\) above. I forgot that \(\psi\) is \(2 \pi i\) periodic. I was wondering why my code wasn't as accurate as it should be, and it's because:

\[
\psi(e^{z+2\pi i}) = \frac{e^{\psi(z+2 \pi i)}}{1+e^{-z-2\pi i}} = \frac{e^{\psi(z)}}{1+e^{-z}}\\
\]

whereby we can assume that \(\psi(z+2\pi i) = \psi(z)\). This creates a MUCH MORE accurate code base, whereas earlier my code would get inaccurate near \(\Im(z) \approx \pi\). Sorry, I'm still learning as I go.

But here is a large graph of \(\psi(z)\) for \(|\Re(z)| < 10\) and \(|\Im(z)| < 10\). This code base always satisfies:

\[
\psi(e^z) = \frac{e^{\psi(z)}}{1+e^{-z}}\\
\]

No matter how large the imaginary part gets! The \(\psi\) function kind of behaves like Kneser's super logarithm. And it's pretty counterintuitive that \(\text{slog}_K(z+2\pi i) = \text{slog}_K(z)\). We get a similar kind of mapping here. But we're mapping:

\[
\psi(e^z) = e^{\psi(z)} + O(e^{-z})\\
\]

But it still suffers the periodic condition.

And it does so to 100 digit accuracy. My earlier graphs would start to crap out past \(\Im(z) = \pi\), but still look relatively good. This solves that problem entirely. It shows much more clearly how bizarre \(\psi(z)\) actually looks! We have a branching problem at each line \(\Im(z) = k\pi\) for \(k \in \mathbb{Z}\), which highlights precisely where \(e^{z} = -1\) and where we switch from \(L\) to \(\overline{L}\); exactly as expected.

This means the critical strip for \(\psi\) is \(0 < \Im(z) < \pi\); if you calculate this, you calculate everywhere in the upper and lower half planes.

Again, I'm still going along as I see it. But this appears to be working. I'll try to write the hard math later. For the moment I'm just checking the numbers support my hypothesis...

This function has plenty of branching problems, you can notice them off hand pretty fast. This really isn't that big of a deal, especially considering this is tetration. This function satisfies:

\[
\begin{align}
\psi(z+2\pi i) &= \psi(z)\\
\psi(e^z) &= \frac{e^{\psi(z)}}{1+e^{-z}}\\
\end{align}
\]

And it defines an analytic function for \(\Im(z) \neq k\pi \) for all \(k \in \mathbb{Z}\). As we only care about \(\psi\) in this domain, we are fine, because it works for what I hope to set up as the \(\psi\) conjugation of \(\beta\)... There are better ways to define the function \(\psi\), but for our purposes this is fine--and it highlights the perfection of our infinite composition formula much clearer.

The key point, which I forgot when defining the infinite composition, is that \(\log( \{0 < \Im(z) < \pi\}) = \{0 < \Im(z) < \pi\}\), and this is a much cleaner way of thinking of it when we want to extend to the complex plane. This lines up perfectly with the poles of \(\frac{1}{1+e^{-z}}\) along the boundary.

So anyway, here is \(\psi\) to the best of my ability! It sure is an ugly fucker!

[attachment: graph of \(\psi\)]

We can, from here, draw a more stable \(\psi\) which only has its flip at the real axis. This is done by noticing we assumed that:

\[
\psi(z+2\pi i) = \psi(z)\\
\]

When all the formula is really telling us is that:

\[
e^{\psi(z+2\pi i)} = e^{\psi(z)}\\
\]

This means that:

\[
\psi(z+2 \pi i) = \psi(z) + 2 \pi i k\\
\]

If we pay attention to the imaginary parts we gather, we can get a cleaner \(\psi\), as witnessed by the graphs in the previous post. We shouldn't worry too much about this, but we definitely need to notice it. Without the assumption of periodicity, we should get that \(\psi(z)\) is holomorphic for \(\Im(z) > 0\). The thing is... this doesn't happen. The natural result has this problem exactly at \(\Im(z) = \pi\); so the math is telling us that \(k = 0\), which makes sense if we think of these objects as curves in \(\mathbb{C}\).

I'm still a little at a loss for words. So, for the moment, I'm confident I can construct:

\[
\psi(z)\,\,\text{is holomorphic for}\,\,0 <|\Im(z)| < \pi
\]

But I think there's a much cleaner numerical procedure, which should find the imaginary offsets we pick up as we go further up the imaginary axis.
#6
ALRIGHT!

So I no longer need the assumption \(\psi(z + 2\pi i) = \psi(z)\) in my code base. This was a fairly artificial condition which would serve us fine on the real line, but shows too many errors as we go out. The correct version of \(\psi(z)\) is holomorphic for \(\Im(z) \neq 0\), and is still analytic on \(\Im(z) = 0\) up to some singularities, which we can expect to occur around \(0, 1, e, e^e, e^{e^e},\ldots\)--just as appears with a normal theta function...

We can also expect a good amount of branching near these singularities; but still, we have that \(\psi(z)\) is holomorphic for \(\Im(z)>0\) and \(\Im(z) < 0\)--and it's related as \(\overline{\psi(\overline{z})} = \psi(z)\). So that this object is conjugate similar. The value on the real line is still escaping me, and near the real line my code kind of wonks out, but it still says something.

This version of \(\psi\) looks a lot like the first graphs I posted earlier today, but now the numbers line up much better. The earlier graphs only had 1 to 2 digit precision, whereas the periodic version had 100 digit precision. This graphs the same thing as the first graphs, but with 100 digit precision.

So I've managed to write a fairly efficient algorithm to graph the function \(\psi(z+2\pi i) = \psi(z) + 2\pi i k\), and find \(k\) in a quick and efficient way. It seems to hover near \(k=0\), but it can jump around a bit and can bite you in the ass if you aren't looking.
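For what it's worth, once \(\psi(z)\) and \(\psi(z+2\pi i)\) are both computed, reading off \(k\) is a one-liner (a hypothetical helper of my own; the name is mine, not from any library):

```python
import math

def branch_offset(psi_z, psi_z_shifted):
    """Recover the integer k in psi(z + 2*pi*i) = psi(z) + 2*pi*i*k
    from two already-computed values of psi."""
    return round((psi_z_shifted - psi_z).imag / (2 * math.pi))

# example with made-up values differing by exactly 2*pi*i*k with k = 2
print(branch_offset(0.3 + 0.7j, 0.3 + 0.7j + 4j * math.pi))  # -> 2
```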

This function can take something like:

\[
\psi(e^{100+i100}) = \frac{e^{\psi(100+i100)}}{1+e^{-100-i100}}
\]

And confirm it to 100 digits, doing so without requiring periodicity (it finds the value \(k\)).

Here is \(|\Re(z)| < 5\) and \(|\Im(z)| < 5\). It looks a lot like my original graphs from earlier today, but the numbers are much much much more exact!

[attachment: graph of \(\psi\) for \(|\Re(z)| < 5\), \(|\Im(z)| < 5\)]
#7
(04/12/2023, 12:15 AM)... Wrote: \[
\psi(e^{100+i100}) = \frac{e^{\psi(100+i100)}}{1+e^{-100-i100}}
\]

And confirm it to 100 digits; and does so without requiring periodicity (it finds the value \(k\)).

Here is \(|\Re(z)| < 5\) and \(|\Im(z)| < 5\). It looks a lot like my original graphs from earlier today, but the numbers are much much much more exact!

Ok, I'm gonna be a pain in the ass again, but

say x = exp(100 + 100 i)

then psi(x) = exp( psi(ln(x)) )/(1 + 1/x)

and from your earlier posts, the claim (essentially by substitution) is

psi(x) = beta(ln(x) + 1)

But then

beta(ln(x) + 1) = exp ( beta( ln(ln(x)) + 1 ) )/ ( 1 + 1/x )

and thus

beta(x+1) = exp( beta (ln(x) + 1) )/ (1 + exp(-x) )

But that is not the beta function !

beta was more like 

beta(x+1) = exp( beta(x) ) / (1+exp(-x) )

or something

Also beta was not analytic right ?

SO beta(ln(x) + 1) = psi(x) is also not analytic !?


And I have not even considered the connections to kneser.


Sorry but I am the hidden final boss ;)



regards

tommy1729

ps : 

btw i hate those pop science mags too ,
 bla bla revolution bla bla genius bla bla E=MC^2 bla bla but one smart researchers thinks he has a way to do it bla bla
  bla bla revolution bla bla genius bla bla time travel bla bla faster than light bla bla but maybe einstein was wrong bla bla or we misinterpreted him bla bla
CGI pictures bla bla

But no actual experiment or math in the whole article or whole magazine that proves or explains anything.
Nobody will learn a darn thing; the educated already know more, and the others are left confused, thinking more reading will illuminate them.
Science without good evidence or good math is BS.

Ok I added that to make you feel better lol 


#8
Oh you're definitely right, Tommy!

I'm still screwing this up like crazy. So I thought I'd make some clarifications. This is actually something I remember Sheldon talking about. The function I just defined is not actually what we want. After careful numerical evaluations, it's garbage. It's a super cool function, and it's close to what we want, but not exactly.

Let's assume we have constructed Kneser's super logarithm \(\text{slog}_K(z) = P(z)\). Then, we're going to define a holomorphic function, with inverse, which "switches between Kneser and Beta". The function \(\beta\) is holomorphic (the tetration it induces through iterated logarithms is not holomorphic, but \(\beta\) itself is). We will call this function:

\[
\chi(z) = \beta(P(z))\\
\]

The function:

\[
\beta(z+1) = \frac{e^{\beta(z)}}{1+e^{-z}}\\
\]

is certainly analytic, except at the points \(j + (2k+1)\pi i\) for \(j,k\in\mathbb{Z}\) and \(j \ge 1\). This function satisfies:

\[
\chi(\text{tet}_K(z)) = \beta(z)\\
\]

So long as we stay in the strip \(|\Im(z)| < \pi\). Here's a large graph of \(\chi\) for \(|\Re(z)|<10\) and \(|\Im(z)|<10\):

[attachment: graph of \(\chi\) for \(|\Re(z)| < 10\), \(|\Im(z)| < 10\)]

This function is absolutely beautiful in the left half plane \(\Re(z) < 0\); it is also \(2 \pi i\) periodic, so we can assume that:

\[
\chi(z) = \beta(-2) + \sum_{j=1}^\infty a_j e^{jz}\\
\]

for some coefficients \(a_j\). This is also confirmed by graphing \(\chi(\log(z))\), which in theory should now be holomorphic for \(|z| < 1\), as the graph below shows (ignore the small branch cut at zero; it's an artifact of using \(\log\) here):

\[
\chi(\log(z)) = \beta(-2) + \sum_{j=1}^\infty a_j z^j\\
\]
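One standard way to extract the \(a_j\), assuming the series above really does converge for \(|z| < 1\), is Cauchy's coefficient formula on a circle of radius \(r < 1\):

\[
a_j = \frac{1}{2\pi i}\oint_{|z|=r} \frac{\chi(\log(z)) - \beta(-2)}{z^{j+1}}\,dz,\qquad 0 < r < 1
\]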

[attachment: graph of \(\chi(\log(z))\) for \(|z| < 1\)]

I've been grinding away trying to find the coefficients \(a_j\) without referring to Kneser's slogarithm \(P\). If we know \(P\) it's rather trivial to find the \(a_j\). But my hypothesis at this moment is that we don't need the super logarithm \(P\) to find \(\chi\).

This is the really cool part!

\[
\chi(e^z) = \frac{e^{\chi(z)}}{1+e^{-P(z)}}\\
\]

So that \(\chi\) has a fair amount of invariance under \(z \mapsto e^z\). Now, as we take \(\Re(z) \to -\infty\):

\[
\chi(0) = \frac{e^{\beta(-2)}}{1+e^{2}}
\]

And if we want:

\[
\chi(x + O(x^2)) = \frac{e^{\beta(-2)}}{1+e^2} + \lambda x + O(x^2)\\
\]

Where we should be able to recover \(\lambda\), without needing the super logarithm. The exact value is:

\[
\begin{align}
\lambda &= e^{-z} \frac{d}{dz}\frac{e^{\chi(z)}}{1+e^{-P(z)}} \Big{|}_{z=-\infty}\\
&= \lim_{z\to -\infty} e^{-z}\left(\chi'(z)\chi(e^z) + \frac{e^{-P(z)}P'(z) \chi(e^z)}{1+e^{-P(z)}}\right)\\
&= \lim_{z\to -\infty} e^{-z}\chi'(z)\chi(e^z)\\
&= a_1\cdot \chi(0)\\
\end{align}
\]

Because \(P'(z) \to 0\) as \(z \to -\infty\) faster than any power of an exponential. The super logarithm's derivative has insanely fast decay at \(-\infty\); certainly exponential, and certainly estimating it as \(e^{-e^{-z}}\) is a rather modest estimate.

But then, we can also find \(a_1\), which solves \(\lambda\), the value I am finding is \(a_1 = \beta'(-2)\cdot P'(0)\). This is found by just writing:

\[
\frac{d}{dz}\Big{|}_{z=0} \beta(P(z)-1) = \beta'(-2)P'(0)\\
\]

The essential philosophy I'm thinking is that we can use \(\beta\) and basic transformations to get the \(a_j\)--with zero mention of \(P\). Then if we invert \(\chi\), we have the entirely expected:

\[
\chi^{-1}(\beta(z)) = \text{tet}_K(z)\\
\]

This would, instead of mapping the level sets to themselves, map the behaviour on the level set. I forgot to do some substitutions earlier. I apologize.

This is still super up in the air, Tommy! I'm not trying to claim anything too strong yet! I'm mostly just floating ideas around! Don't scorch me! I'm still just as confused as you! But the numbers are seeming plausible!

We are mapping:

\[
\chi(\exp^{\circ n}(0)) = \beta(n-1)\\
\]

And then filling in the blanks, without using knowledge of Kneser... That's the goal at least!

Another quick way of thinking of this, is to write:

\[
\begin{align}
\chi(z) &= \beta(-2) + \sum_{j=1}^\infty a_j e^{jz}\,\,\text{for}\,\,\Re(z) < 0\\
\chi(z) &= \beta(-1) + \sum_{j=1}^\infty b_j z^j\,\,\text{for}\,\,|z| < |L|\\
\end{align}
\]

And we find \(a_j\) as we find \(b_j\) etc etc, and we don't need the super logarithm to do this. We just need \(\beta\)....

Honestly any comments are greatly welcomed!

Sincere regards, James---go easy on me, Tommy!
#9
So, using only the value \(\text{slog}'(0)\) from Sheldon, I've managed to draw \(\chi(z)\) to about 2-3 digit accuracy. THIS IS MONUMENTAL. I do not need Kneser's Riemann mapping, or Sheldon's version of it--other than the value \(\text{slog}'(0)\). We don't even need Sheldon to find it; it is precisely \(\lim_{z\to 0} \text{slog}'(\log(z))/z = \lim_{z \to - \infty} \text{slog}'(z)/\exp(z)\). This value is findable using Kouznetsov's method, and any good mathematician worth their salt can find it. I'm just using Sheldon's program because it's quick and easy. But you don't need Kneser or Sheldon, per se.

Here is a graph of the beta-chi function, which I remind the reader:

\[
\chi(z) = \beta(\text{slog}_K(z))\\
\]

Without constructing \(\text{slog}_K(z)\) using any pre-existing function. So all we need is \(\beta\), and no mention of Sheldon's code.

[attachment: graph of the beta-chi function \(\chi\)]

As a by-product of constructing \(\chi\) we actually end up constructing Kneser's slog. This isn't that hard to see if we think about this more abstractly. \(P = \text{slog}_K(z)\) is the unique function such that:

\[
\chi(e^z) = \frac{e^{\chi(z)}}{1+e^{-P(z)}}\\
\]
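To make the uniqueness claim concrete: solving the display above for \(P\) expresses it in terms of \(\chi\) alone (up to the choice of branch of the logarithm, and assuming the bracketed quantity stays off the cut):

\[
P(z) = -\log\left(\frac{e^{\chi(z)}}{\chi(e^z)} - 1\right)
\]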

So if you construct one accurately, you construct the other! I already have some code for Kneser's slog, which again uses none of Sheldon's code other than the acquisition of \(\text{slog}'(0)\). For now my code is only safe for 2-3 digits. But I'm considering this a milestone showing that this method will work. And if you have \(\chi\), then functionally you've constructed Kneser from \(\beta\), where we get:

\[
\chi(\text{tet}_K(z)) = \beta(z)\\
\]

which I've confirmed to more digits than the slogarithm itself; about 4-5 digits.

I should also add that when I say "confirmed to 2-3 digits" I mean the Taylor series coefficients are correct up to 2-3 significant digits. So if \(\text{slog}_K^{(j)}(x)/j! = A\), then my code finds \(A\) up to the first 2-3 significant digits. I'm still having a few mistakes, but otherwise everything works flawlessly.

The math makes perfect sense too. We don't actually need \(\text{slog}_K\) to construct \(\chi\)'s exponential series. We just need its behaviour near \(-\infty\), where it decays exponentially to \(-2\); this makes it fairly easy--even easier when all we need is the first-order expansion.

My goal is a little blurry at this point, but I think I'm getting close to the right answer! I believe the independent construction of \(\chi\), attached to \(\beta\), produces a novel construction of Kneser. And it's actually fucking working this time. This makes much more sense than my earlier attempts. We're legitimately just writing:

\[
\text{tet}_K(z) = \chi^{-1}\left(\beta(z)\right)\\
\]

And since \(\beta\) already looks so much like tetration, this should be possible, so long as \(|\Im(z)| < \pi\). Getting it to work outside of the periodic strip will be difficult, but should be doable. I'll cross that bridge after I construct \(\chi\) to the desired accuracy.

