Arguments for the beta method not being Kneser's method
#40
(10/07/2021, 02:29 AM)sheldonison Wrote: edit: explaining the approximation.  This is by no means a proof, but it is an explanation.  I haven't tried to rigorously prove the approximation since I was mostly interested in finding the singularities.
let's suppose \( \beta(z-1)+z=2n\pi i \)
Then \( \beta(z-1)=-z+2n\pi i \)
Then \( \beta(z)=\frac{\exp(\beta(z-1))}{1+\exp(-z)} \)
If real(z) is large enough then this approximation is pretty good, and the denominator is approximately 1.
Then \( \beta(z)\approx\exp(\beta(z-1))\approx\exp(-z+2n\pi i)\approx\exp(-z) \)
now \( \tau(z)=-\ln(1+\exp(-z)) \) and if real(z) is large enough then \( \tau(z)\approx-\exp(-z) \)
So if we start with this approximation, then we might expect that nearby we will find an exact value where the following is exactly true:  \( \tau(z)=-\beta(z) \)

AHHH I see much more clearly. I think you are running into a fallacy of the infinite though.

(To begin, that should be \( \tau(z) \approx -\log(1+\exp(-z)) \); I'm sure the equals sign is just a typo on your part.)
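
Just to spell out where that term comes from, as I read it: taking logs of the functional equation for \( \beta \) gives

\(
\log \beta(z+1) - \beta(z) = -\log(1+e^{-z}),\\
\)

so \( -\log(1+e^{-z}) \) is exactly the defect of \( \beta \) from the exact equation \( \log F(z+1) = F(z) \); that is the leading behaviour of \( \tau \).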

I am definitely not convinced by this argument. Pari has already proved rather unreliable with most of my calculations, not least because we need values of the order 1E450000 or so to get accurate readouts Taylor-series-wise, and pari overflows. This is why Kneser is so god damned good: it displays normality conditions in the upper and lower half planes. Beta requires us to get closer and closer to infinity to get a better readout. And as we do this, we can expect:

\(
\lim_{\Re(z) \to \infty} \frac{\tau(z)}{\beta(z)} = 0\\
\)

and that, for large enough values, this ratio stays away from \( -1 \) entirely. Now we pull back from here, and that is the discussion at hand when talking about holomorphy.
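
Just to keep some concrete numbers next to that limit, here is a minimal sketch in Python/mpmath rather than pari. The names beta_approx and tau1 are mine, and seeding the recursion with 0 far to the left is only a heuristic stand-in for the actual infinite composition; it is an illustration on the real axis, nothing more:

Code:
# Rough check that tau(z)/beta(z) -> 0 as Re(z) -> infinity, done on the
# real axis where nothing controversial happens.  beta is pushed forward
# with beta(z) = exp(beta(z-1))/(1 + exp(-z)), seeded with 0 far to the
# left, and tau is replaced by its first-order term -log(1 + exp(-z)).
from mpmath import mp, exp, log, fabs

mp.prec = 256  # working precision; plenty for this toy range

def beta_approx(z, depth=60):
    # Heuristic beta(z): run the recursion forward from z - depth, seeded at 0.
    b = mp.mpf(0)
    for k in range(depth, 0, -1):
        b = exp(b) / (1 + exp(-(z - k + 1)))
    return b

def tau1(z):
    # First-order correction term -log(1 + exp(-z)).
    return -log(1 + exp(-z))

for x in range(1, 6):
    ratio = fabs(tau1(x) / beta_approx(x))
    print(x, mp.nstr(ratio, 6))
# The ratio collapses toward 0 very fast (beta(5) is already of the order
# exp(10^15)).  This is a pointwise statement and says nothing about
# uniformity on the strip 0 < Im(z) < pi, which is where the argument is.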

The thing is... with 100 iterations, or 1000 iterations, or any finite number \( n \) of iterations, we can expect:

\(
\frac{\tau^n(z)}{\beta(z)} = -1\\
\)

to happen without a doubt infinitely often.

The above identity measures how close we are to solving the functional equation. And it should drop off at about \( -\log(1+e^{-z}) \) (that "about" is pretty loose; we drop off slightly slower, but still exponentially up to a constant). But additionally it only happens on compact subsets of the strip \( 0 < \Im(z) < \pi \), and each compact set requires a deeper and deeper level of iteration--and has a different O-constant. And that is the key.
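
To spell out where that \( -1 \) comes from (using my conventions for the pullback; I believe this is the same thing we are both computing), the \( n \)-th approximation is built by iterating

\(
\tau^{n+1}(z) = \log\left(1+\frac{\tau^{n}(z+1)}{\beta(z+1)}\right) - \log(1+e^{-z}), \qquad \tau^{0}(z) = 0,\\
\)

so \( \tau^{1}(z) = -\log(1+e^{-z}) \) is the drop-off above, and each pullback step picks up a logarithmic branch point precisely where \( \tau^{n}(z+1)/\beta(z+1) = -1 \); that is the identity in question, just evaluated one unit to the right.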

Dealing with iterations of the exponential, we can expect, first of all,

\(
\limsup_{n} \beta(z+n) = \infty\\
\)

And secondly,

\(
\lim_n \frac{\tau(z+n)}{\beta(z+n)}=0\\
\)

These limits are pointwise, so we are not speaking of uniform convergence on compact sets at all. And this means, heuristically, that even if \( \beta(z) \) is small, the value \( \tau(z) \) is small enough to cancel it out and keep the ratio away from \( -1 \).
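
Same disclaimer as before, but here is the same heuristic beta_approx/tau1 sketch pointed at a fixed base point inside the strip, stepping \( z_0 + n \) forward (with a crude cap, since beta eventually blows up tetrationally and becomes unrepresentable); it is only meant to illustrate what a pointwise statement looks like in practice:

Code:
# Pointwise look at |beta(z0+n)| and |tau1(z0+n)/beta(z0+n)| for a fixed
# z0 in the strip 0 < Im(z) < pi.  Same heuristic construction as in the
# previous snippet; the walk is capped because beta eventually blows up
# tetrationally (the limsup statement above), while the ratio is the
# quantity that should die off pointwise.
from mpmath import mp, mpc, exp, log, fabs

mp.prec = 256

def beta_approx(z, depth=60):
    # Heuristic beta(z): run the recursion forward from z - depth, seeded at 0.
    b = mpc(0)
    for k in range(depth, 0, -1):
        b = exp(b) / (1 + exp(-(z - k + 1)))
    return b

def tau1(z):
    return -log(1 + exp(-z))

z0 = mpc(0.5, 1.0)  # an arbitrary base point inside the strip
for n in range(0, 5):
    b = beta_approx(z0 + n)
    print(n, mp.nstr(fabs(b), 6), mp.nstr(fabs(tau1(z0 + n) / b), 6))
    if fabs(b) > 1e6:
        print("stopping: beta is blowing up tetrationally from here")
        break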



To solidify everything I just said, I'll prove it in the simplest manner I can think of.

Consider the implicit function:

\(
e^{\beta(s) + x} - \beta(s+1) - y = 0\\
\)

As \( \Re(s) \to \infty \) this relation is only satisfied by \( x = y = 0 \). Therefore, solving for one variable in terms of the other, we get implicit functions \( x(y,s) \) and \( y(x,s) \) with:

\(
\lim_{\Re(s) \to \infty} x(y,s) = \lim_{\Re(s) \to \infty}y(x,s) = 0
\)

These values always exist, and the partial derivatives are nonzero in both variables, so it defines an implicit function on the region \( \Re(s)>R,\,0<|x|<\delta,\, 0<|y|<\delta \).
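
For completeness, the nonvanishing of the derivatives is immediate: writing \( G(x,y,s) = e^{\beta(s)+x} - \beta(s+1) - y \) (the name \( G \) is just my shorthand here), we have

\(
\frac{\partial G}{\partial x} = e^{\beta(s)+x} \neq 0, \qquad \frac{\partial G}{\partial y} = -1 \neq 0,\\
\)

so the implicit function theorem applies in each variable wherever \( G = 0 \).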

Now here's the kicker: we can assign a point at \( \Re(s) = \infty \) and \( x = y = 0 \)--which equates to \( \lim_{\Re(s) \to \infty} \tau(s) = 0 \). Which in turn equates to:

\(
\lim_n \frac{\tau(s+n)}{\beta(s+n)} = 0\\
\)

Which equates to saying that, way off in the right half plane, we are holomorphic.



Now, a compromise I can see from what you're discussing is the question of whether it is or isn't nowhere analytic. It is definitely analytic in a neighborhood of \( \Re(s) = \infty \) by a rather elementary argument. But as we pull back... maybe there are singularities. I really doubt it though. What's far more likely is that there are overflows. And again, that's a symptom of my shitty programming, and further of the confines of pari--which loses accuracy faster than I think you care to admit.

Why do your supposed "singularities" show absolutely no branching when we pull back? Quite frankly, because they are artifacts.

My diagnosis of the singularities is loss of accuracy in the sample points of beta... and, furthermore, straight-up artifacts.

If it has singularities, they are sparse in \( 0 < \Im(z) < \pi \). Perhaps, though, it's nowhere analytic on \( \mathbb{R} \). I highly doubt it though.

I would need much more mathematical evidence before I agree to that.