Tetration Forum

Full Version: Searching for an asymptotic to exp[0.5]
(08/02/2014, 03:44 PM)sheldonison Wrote:
(08/02/2014, 12:26 PM)tommy1729 Wrote: Some further thought:

\( a_n = \exp(\exp^{0.5}(h_{n}) - n h_{n}) + a_{n-1}/a_n \)

Hey Tommy,

I'm not able to understand the \( a_{n-1}/a_n \) part of your equation; these are big numbers. For n=50, \( a_{n-1}/a_n=98400028.22712 \), perhaps you intended a ln somewhere?

I edited to remove the nonsense.
Another try

tommy-sheldon method 1.0

f(x) is the half-iterate from the sinh method.

Start from a0, a1, then solve a2, a3, ... with

a_(n-1) x^(n-1) + a_n x^n = f(x)

a_n x^n ( 1 + a_(n-1)/a_n 1/x ) = f(x)

ln(a_n) + n ln(x) + ln( 1 + a_(n-1)/a_n 1/x ) = ln(f(x))

Substitute x -> exp(x) and use ln(f(exp(x))) = f(x) (true for the half-iterate, since f(exp(x)) = f(f(f(x))) = exp(f(x))):

ln(a_n) + n x + ln( 1 + a_(n-1) / a_n exp(-x) ) = f(x)



Now define C(n) implicitly by

a_(n-1)/a_n = exp(C(n) a_n)



ln(a_n) + n x + ln( 1 + exp(-x + C(n) a_n) ) = f(x)

And now there are a few ways to continue.
Not sure which is the best at the moment.
But a good numerical method must clearly exist.

I considered how to test how good our equations are.

It seems quite simple actually: we test on a function whose Taylor series we already know.

For instance :

solve a_n x^n = exp(x)
vs
solve a_(n-1) x^(n-1) + a_n x^n = exp(x)

and compare both to 1/n!.
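Here is a minimal numerical sketch of that test (my own, not from the thread; reading the two-term equation sequentially, feeding in the previous coefficient, is my interpretation of "start from a0, a1, then solve a2, a3, ..."):

```python
import numpy as np
from math import factorial

# Test on f(x) = exp(x), whose true Taylor coefficients are 1/n!.
x = np.linspace(0.05, 60.0, 200_000)   # positive grid for the minimization
f = np.exp(x)

# one-term estimate:  solve a_n x^n = f(x)  ->  a_n = min_x f(x)/x^n
a_one = [None] + [(f / x**n).min() for n in range(1, 11)]

# two-term estimate:  start from a_0, a_1, then solve
#   a_(n-1) x^(n-1) + a_n x^n = f(x)  ->  a_n = min_x (f(x) - a_(n-1) x^(n-1))/x^n
a_two = [1.0, 1.0]
for n in range(2, 11):
    a_two.append(((f - a_two[n - 1] * x**(n - 1)) / x**n).min())

print(" n    one-term      two-term      1/n!")
for n in range(1, 11):
    print(f"{n:2d}   {a_one[n]:.6f}   {a_two[n]:.6f}   {1 / factorial(n):.6f}")
```

Both variants overestimate 1/n! (the one-term minimum sits at x = n and gives e^n/n^n, which is 1/n! up to a sqrt(2 pi n) factor, by Stirling), but the two-term estimate comes out somewhat closer.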

Btw, this idea of estimating a Taylor series without much knowledge of derivatives, hence without using Taylor's theorem, seems very appealing to me.

In number theory this is also considered a bit, and maybe it should be considered more.

For instance, suppose we have g(z) = g_0 + g_1 z + g_2 z^2 + ...
and g(z)^7 = g_0(7) + g_1(7) z + ...

If each g_n is either 1 or 0, then g(z)^7 might have

g_(n-1)(7) >= g_n(7) >= g_(n+1)(7)

for all n.
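A quick numerical probe of that inequality (my own sketch; the random setup and the truncation length are mine): build a 0/1 coefficient sequence for g, form the coefficients of g(x)^7 by repeated convolution, and test monotonicity. It makes it easy to hunt for which g come close.

```python
import numpy as np

# g(x) truncated to 60 terms, coefficients g_n in {0, 1}.
rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=60).astype(float)
g[0] = 1.0  # normalize g(0) = 1

# Coefficients of g(x)^7, low order first, via repeated convolution.
p = np.array([1.0])
for _ in range(7):
    p = np.convolve(p, g)

low = p[:60]  # indices < 60 are exact even though g was truncated
print("first coefficients of g^7:", low[:12].astype(int))
print("g_(n-1)(7) >= g_n(7) for all n:", bool(np.all(np.diff(low) <= 0)))
```

For a random g the check typically fails near n = 0 (the counts first grow), which is consistent with the "tricks" caveat below.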

This shows a connection to additive number theory.

HOWEVER g(z) is not entire.
THEREFORE usually we need some tricks to make the ideas work.
AND to make g*_(n-1)(7) >= g*_n(7) >= g*_(n+1)(7) hold for the appropriate associated g*.

I know this is all a bit vague.
But it seems also to be correct.

By having a_n depend on a_(n-1), it seems every term adapts depending on whether the previous one was estimated too high or too low.
Some optimism (OR A LOT) suggests then that these a_n are not always an overestimate or always an underestimate, but make a wave around the correct values.

I assume then that with the tommy-sheldon method 1.0
we get TS1(x) = a_0 + a_1 x + ... with

TS1(x)/exp^[0.5](x) = O(ln(x)^2)

But that seems optimistic.
Maybe it would help if each a_n also depended on a_(n-2) and a_(n-3).


Oh btw, ln(1+exp(z)) has an interesting Taylor series expansion.
Maybe that should be used.
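For reference, the standard expansion (my addition, not from the thread): since \( \frac{d}{dz}\ln(1+e^z) = \frac{e^z}{1+e^z} = \frac{1}{2}\left(1+\tanh\frac{z}{2}\right) \), integrating term by term gives \( \ln(1+e^z) = \ln 2 + \frac{z}{2} + \frac{z^2}{8} - \frac{z^4}{192} + \frac{z^6}{2880} - \cdots \)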

regards

tommy1729
A conjecture for x >> 2 and functions with decreasing positive derivatives.

let some f(x) have dominant term a_n x^n
let f(f(x)) have dominant term b_m x^m.

conjecture A :
If |f(x)| < exp^[1/3](x)
then f(f(x)) ~ a_n (a_n x^n)^n = b_m x^m

thus b_m = a_n ^ (n+1) and m = n^2.
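For a quick sanity check (my example): take f(x) = 3x^2, so a_n = 3 and n = 2; then f(f(x)) = 3(3x^2)^2 = 27x^4, and indeed b_m = 27 = 3^(2+1) = a_n^(n+1) and m = 4 = n^2.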

conjecture B :

(reverse of A)

If |f(f(x))| < exp^[2/3](x)
then b_m = a_n ^ (n+1) and n = sqrt(m)+o(1).

conjecture C :

If |f(f(x))| < exp^[2/3](x)
then b_m = a_n ^ (n+1) and n = sqrt(m)+o(1).
and if a_n = b_m^{1/(n+1)} does not converge fast enough, then f(x) is not entire, and there is a complex z with |z| < 1 that is the nearest singularity of the series a_0 + a_1 x + ...

I haven't considered it a lot; it might need modification, or perhaps it is even very false.

But I wanted to share it now.

regards

tommy1729

(08/02/2014, 11:48 PM)tommy1729 Wrote: ....
f(x) is the half-iterate from the sinh method.

I assume since sinh is an odd function, that f(x), the asymptotic half iterate, would also be an odd function, though this is not required. As I remember, the closest singularity to the origin for sinh^{0.5} is on the imaginary axis.

For f(x) asymptotic to exp^{0.5}, the branch cut is on the negative real axis. For the asymptotic to sinh^{0.5}, one possible branch cut is on the imaginary axis, in both directions, which would lead to an odd function, with even Taylor series coefficients=0. Is this what you had in mind?
(08/03/2014, 04:54 AM)sheldonison Wrote:
(08/02/2014, 11:48 PM)tommy1729 Wrote: ....
f(x) is the half-iterate from the sinh method.

I assume since sinh is an odd function, that f(x), the asymptotic half iterate, would also be an odd function, though this is not required. As I remember, the closest singularity to the origin for sinh^{0.5} is on the imaginary axis.

For f(x) asymptotic to exp^{0.5}, the branch cut is on the negative real axis. For the asymptotic to sinh^{0.5}, one possible branch cut is on the imaginary axis, in both directions, which would lead to an odd function, with even Taylor series coefficients=0. Is this what you had in mind?

Just as you use kneser in post 9, I use the 2sinh here.
So it is just for numerical reasons.
I'm still on the real line.

It is convenient since it satisfies ln(f(exp(x))) = f(x), which simplifies the equations.

regards

tommy1729
(08/03/2014, 08:46 AM)tommy1729 Wrote: ....

Just as you use kneser in post 9, I use the 2sinh here.
So it is just for numerical reasons.
I'm still on the real line.

It is convenient since it satisfies ln(f(exp(x))) = f(x), which simplifies the equations.

regards

tommy1729

Regarding post#16: the Gaussian approximation would work for the 2sinh method too. For the Gaussian approximation for exp^{0.5}, the error in the ratio to the "true" Taylor series coefficient varies from 0.02 for the a1 coefficient, falling to 0.00048 for a20, and to 0.000018 for a300. For large positive numbers, 2sinh^{0.5} behaves similarly to exp^{0.5}, but in the complex plane the similarity goes away as you approach the negative real axis.

I wonder if I can come up with a general equation for interpolating f(x). It would be something along the lines of
\( g(z) =\ln(f(\exp(z)))\;\;\; \) conveniently, for \( \;f(z)=\exp^{0.5}(z), \;\; g(z)=\exp^{0.5}(z) \)

Then \( h^{-1}(z)= \frac{d}{dz}g(z) \) would be defined as the derivative of g, and its inverse would be h(z), where h(n) would be the optimal "numerical" point at which to calculate the nth derivative. If g(y + \pi i) is real valued, you have the trivial case, and you take the integral from -pi i to +pi i. Otherwise, find the real minimum of g(y+iz), and that is what you use to take the integral, from post#70. Of course, zeros of f(z) get in the way, as do singularities of f(z).... But in some cases, like exp^{0.5}, the integral converges and all of the Taylor series coefficients converge and are defined as y goes to infinity.

Here is the generalized equation, modified, from post#70

\( \text{dgi}(y,z)=\frac{d}{dz} \bigl(g(y+zi)+g(y-zi)\bigr), \;\;\;\text{where } mi(y) \text{ is the } z \text{ that solves } \text{dgi}(y,z)=0 \)
\( a_n = \lim_{y \to \infty} \frac{1}{2\pi}
\int_{-mi(y)}^{+mi(y)} \exp(g (y+iz) - n(y+iz))\;dz \)

If there are zeros, or if the limit does not converge as n goes to infinity, then we go back to using y=h(n), which would be defined. h(n) is the inverse of the derivative of g(z), and is an optimal value to use for y for the nth derivative. It would be interesting to analyze how the integral behaves for other functions, like 2sinh, or maybe even tet(z), but if it is not well behaved then we have the following, where the exp cancels the logarithmic singularities due to zeros in g.
\( a_n \approx \frac{1}{2\pi} \int_{-\pi i}^{\pi i} \exp(g (h_n+iz) - n(h_n+iz))\;dz \)
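A minimal numerical sketch of the trivial case (my own test, not from the thread): take f = exp, so g(z) = ln(f(exp(z))) = exp(z), the integral runs from -pi i to +pi i, and the true coefficients are a_n = 1/n!. Any y > 0 gives the exact answer here since f is entire; y = h(n) = ln(n), where g'(y) = n, is the numerically well-conditioned choice.

```python
import numpy as np
from math import factorial, log, pi

def a_n_estimate(n, y, steps=20001):
    # (1/2pi) * integral over z in [-pi, pi] of exp(g(y+iz) - n(y+iz)) dz,
    # with g = exp; this is the Cauchy coefficient integral on |w| = e^y.
    z = np.linspace(-pi, pi, steps)
    w = np.exp(np.exp(y + 1j * z) - n * (y + 1j * z)).real  # imaginary part cancels by symmetry
    dz = z[1] - z[0]
    return (w.sum() - 0.5 * (w[0] + w[-1])) * dz / (2 * pi)  # trapezoid rule

for n in (5, 10, 20):
    y = log(n)  # h(n): the point where g'(y) = exp(y) = n
    print(n, a_n_estimate(n, y), 1 / factorial(n))
```

Since the integrand is 2 pi periodic in z, the trapezoid rule converges very fast here, and the printed estimates match 1/n! essentially to machine precision.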
(08/02/2014, 04:34 PM)sheldonison Wrote: \( \text{dhalfi}(y,z)=\frac{d}{dz} \bigl(\exp^{0.5}(y+zi)+\exp^{0.5}(y-zi)\bigr), \;\;\;\text{where } mi(y) \text{ is the } z \text{ that solves } \text{dhalfi}(y,z)=0 \)
....Each z = pi i corresponds to halfway around the circle. For example, for y=2, we get mi(2) ~= 5.65 pi, so we wrap the approximation around the circle 5.65 times, for a radius of exp(2). This limit converges rapidly as y increases.

\( a_n = \lim_{y \to \infty} \frac{1}{2\pi}
\int_{-mi(y)}^{+mi(y)} \exp(\exp^{0.5} (y+iz) - n(y+iz))\;dz \)

Here are a couple of pictures showing how the equations for a_n converge. The top picture shows the optimal number of pi steps for each value of y; this would be exactly mi(y)/pi. The black line in the top picture shows how many pi steps are required to match the 1/x Laurent error term; this is useful to understand which of the error terms are dominant in the approximation (more later).

The bottom picture shows the exponential improvement in the accuracy of the approximations for a_n, via the black error term, at the same time as the function is rapidly growing (purple). This is the graph for the a0 coefficient: the precision required for the calculations (purple minus black) to get the optimal precision possible, in black.
[attachment=1108]
tommy - sheldon method 1.1

Basically the idea is to turn the method's disagreement over the a_n into agreement about the a_n.

f(x) is the half-iterate from the 2sinh method.

... + a_(n-1) x^(n-1) + a_n x^n + ... = f(x)

a_n x^n ( ... + a_(n+1)/a_n x + 1 + a_(n-1)/a_n 1/x + a_(n-2)/a_n 1/x^2 + ...) = f(x)

ln(a_n) + n ln(x) + ln( ... + 1 + a_(n-1)/a_n 1/x + ...) = ln(f(x))

ln(a_n) + n x + ln( ...+ a_(n+1)/a_n exp(x) + 1 + a_(n-1) / a_n exp(-x) + a_(n-2)/a_n exp(-2x) + ... ) = f(x)

ln(a_n) = MIN (f(x) - n x - ln( ... + 1 + a_(n-1)/a_n exp(-x) + a_(n-2)/a_n exp(-2x) + ... ))

And then use a system of n+1 MIN equations for the first n+1 values a_0, a_1, ..., a_n.


example n = 4

ln(a_0) = MIN (f(x) - ln( a_4/a_0 exp(4x) + ... + 1 ))

ln(a_1) = MIN (f(x) - x - ln( a_4/a_1 exp(3x) + ... + 1 + a_0/a_1 exp(-x) ))

ln(a_2) = MIN (f(x) - 2x - ln( a_4/a_2 exp(2x) + a_3/a_2 exp(x) + 1 + a_1/a_2 exp(-x) + a_0/a_2 exp(-2x) ))

ln(a_3) = MIN (f(x) - 3x - ln( a_4/a_3 exp(x) + 1 + a_2/a_3 exp(-x) + a_1/a_3 exp(-2x) + a_0/a_3 exp(-3x) ))

ln(a_4) = MIN (f(x) - 4x - ln( 1 + a_3/a_4 exp(-x) + a_2/a_4 exp(-2x) + ... + a_0/a_4 exp(-4x) ))

---

The urgent questions are uniqueness and existence for such systems of equations with a fixed n.

And then the convergence as n grows for each a_i.

But intuitively it seems correct.

How to solve these systems of MIN is another matter.

Probably by taking the derivative and/or Lagrange multipliers.

Then iterating: going from a guess to a better one.

a_i(1) -> a_i(2).
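One concrete remark (mine, not from the post): for the single-term version, setting the x-derivative to zero in ln(a_n) = MIN( f(x) - n x ) gives f'(x) = n, so the minimizing point is exactly h(n), the inverse of the derivative, the same optimal point used for the integrals in the earlier posts.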


regards

tommy1729
I updated the tommy sheldon 1.1 method (previous post) by making an edit.

Edits do not show up as new posts, so you might have missed that.
Now you know.

I wonder what you guys think.

regards

tommy1729
For sufficiently large k, the truncated exp_k(x) = 1 + x + x^2/2 + ... + x^k/k! always has zeros with a positive real part.

This might be surprising to some, because exp(z) is never 0; only "z = -oo", an infinitely large negative real part, makes exp(z) tend to 0.

Yet every truncated exp_k(x) (for large k) has zeros with a positive real part.

Now, from many theorems about the remainders of Taylor polynomials, we get a "radius of good approximation", as I like to call it.

This radius "pushes out" the zeros of exp_k(x).

The divisors of k also have some influence.

But what is also striking is that the zeros tend to lie on a well-defined curve.

That curve is like a twisted U shape.

The curve for exp_(k+1) always seems to be a rescaling of the curve for exp_k.

So a certain shape seems to appear.

It seems the ratio of the range of im(z) to the range of re(z) approaches a small fraction.

I'm fascinated by this.

It reminds me of the fake half-iterate f(x).
Mainly because f(f(x)) also has its zeros on a twisted U shape!!

I conjecture a connection, although I'm not sure how exactly.

Maybe this has been investigated before and is a classic problem? Maybe not.
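For what it's worth, the scaled zero curve is classical (my note, not from the thread): Szego (1924) showed the zeros of exp_k(kz) accumulate on the curve |z e^(1-z)| = 1 inside the unit disk, which matches the twisted-U, rescaling description above. A quick numerical probe:

```python
import numpy as np
from math import factorial

# Zeros of the truncated exponential exp_k(x) = sum_{j=0..k} x^j / j!.
# Scaled by k, they approach the Szego curve |z e^(1-z)| = 1.
for k in (10, 20, 30):
    coeffs = [1.0 / factorial(j) for j in range(k, -1, -1)]  # highest degree first
    zeros = np.roots(coeffs)
    scaled = zeros / k
    on_curve = np.abs(scaled * np.exp(1.0 - scaled))         # -> 1 as k grows
    print(f"k={k:2d}  zeros with Re > 0: {np.sum(zeros.real > 0):2d}  "
          f"max |z e^(1-z)| deviation: {np.abs(on_curve - 1).max():.3f}")
```

The count of zeros with positive real part and the shrinking deviation from the curve illustrate both observations above; double-precision np.roots is adequate for k up to about 30.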

Maybe the idea that the zeros and shapes of f(f(x)) and exp_k are connected comes from the fact that:

exp(x) = lim k -> +oo exp_k(x) = lim m -> +oo f(f(x+m))/exp(m).

Wonder what you guys think.

regards

tommy1729