Reviving an old idea with 2sinh.
#1
I'm considering an old idea of mine again.

To be clear, this is not "the 2sinh method".

But it also uses 2sinh.

f(n,x) = 2sinh(x + n)*exp(-n) = exp(x) - exp(-x - 2n).

Notice how this function f(n,x) approximates exp(x) very well for Re(x) > -n/2: the error term exp(-x - 2n) has modulus below exp(-3n/2) there.

It also has exactly one real fixpoint, and the derivative there is about equal to minus the fixpoint.

For instance 2sinh(x + 10) exp(-10) has a fixpoint at about -23.1416, and the derivative there is about 23.1416 as well.
This approximates exp(x) very well for Re(x) > -5.

In fact the derivative converges (conveniently?) to minus the fixpoint as n goes to +oo.
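A quick numerical check of the example above, as a sketch in Python with the mpmath library (the script and its names are mine, just for illustration):

from mpmath import mp, mpf, exp, findroot

mp.dps = 30
n = mpf(10)

f  = lambda x: exp(x) - exp(-x - 2*n)    # f(n,x) = 2 sinh(x + n) exp(-n)
df = lambda x: exp(x) + exp(-x - 2*n)    # its derivative in x

x_n = findroot(lambda x: f(x) - x, -23)  # the unique real fixpoint
print(x_n)                               # about -23.1416
print(df(x_n))                           # about 23.1416, i.e. about -x_n
print(df(x_n) + x_n)                     # exactly 2 exp(x_n), about 1.8e-10

At the fixpoint we have exp(-x_n - 2n) = exp(x_n) - x_n, so the derivative equals -x_n + 2 exp(x_n); that is why the difference to minus the fixpoint vanishes as n goes to +oo.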

The idea is now of course that

lim_{n -> +oo} f(n,x)^[s] = exp^[s](x)

I strongly recommend using fast-converging methods for the fixpoint computation, and by that I *assume* higher-order derivatives are necessary.
I think it will be important both numerically and theoretically.

For Re(x) > -n + 1 we get about the same fixpoints and cycles as for exp(x), since f(n,x) converges to it so fast there.

But for Re(x) << -n + 1 the situation is not yet clear to me, although for large negative x we are close to the 2sinh behaviour again (the term -exp(-x - 2n) dominates).
This matters for the radius of convergence and analytic continuation, of course.

I'm not sure if this differs from other methods, but the way of computation seems not equivalent.

I want to point out that a convergent sequence of functions analytic near the real line is also necessarily analytic in the limit; this is a theorem!

So if it converges and is C^oo, it MUST be analytic for Re(x) > -n/2, and by the limit, for all x near the real line.
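For reference, a precise statement of that theorem (Weierstrass; note the hypothesis actually needed is locally uniform convergence, which is the hedge in "near the real line"):

\[
f_m \ \text{analytic on an open set } U, \quad f_m \to f \ \text{locally uniformly on } U \quad \Longrightarrow \quad f \ \text{analytic on } U.
\]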

I assume this old idea was missed by most since I did not give it much attention and investigated other ways.

But I feel we are ready for this.

regards

tommy1729

Tom Marcel Raes
#2
For instance we could work with (but yes, I advised faster ways):

t(r,x) = lim_{m -> oo} ( g(n,x)^[m] - x_n ) * (-x_n)^(m+r)

where g(n,x) is the inverse of f(n,x) (with fixpoint x_n and derivative there about 1/(-x_n), and exactly that at n = oo),

Such that t(r+1,x) = t(r,exp(x))

Basically the Koenigs function.
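A rough numerical sketch of this limit in Python with mpmath (n = 10 and the iteration count m are my illustrative choices; the closed form for g comes from solving exp(y) - exp(-y - 2n) = x as a quadratic in exp(y)):

from mpmath import mp, mpf, exp, log, sqrt, power, findroot

mp.dps = 60
n = mpf(10)

f = lambda x: exp(x) - exp(-x - 2*n)                  # f(n,x)
g = lambda x: log((x + sqrt(x*x + 4*exp(-2*n))) / 2)  # inverse of f(n,x) in x

x_n = findroot(lambda x: f(x) - x, -23)               # real fixpoint, about -23.1416

def t(r, x, m=15):
    # t(r,x) ~ ( g(n,x)^[m] - x_n ) * (-x_n)^(m+r)
    y = x
    for _ in range(m):
        y = g(y)                                      # contract toward x_n
    return (y - x_n) * power(-x_n, m + r)

x = mpf(1) / 2
print(t(1, x))                                        # these two should agree
print(t(0, exp(x)))                                   # to roughly 10 digits

The small disagreement at finite n comes from using the approximate multiplier 1/(-x_n) instead of the exact g'(x_n).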
#3
I advise using this for the computation around the fixpoint:

(a x + k x^2 + ...)^[t] = a^t x + k a^(t-1) ( a^t - 1 )/(a-1) x^2 + ...

This quadratic approximation is then plugged in:

f^[t](x) = lim_{n -> oo} f^[n] ( a^t y + k a^(t-1) ( a^t - 1 )/(a-1) y^2 )

where y = f^[-n](x) (with the fixpoint shifted to 0), and a and k can easily be computed from the closed form for x_n and Taylor's theorem.

Call it the quadratic fixpoint formula or so :)
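A minimal sketch of this quadratic fixpoint formula in Python with mpmath, tested on the toy map f(y) = 2y + y^2 (fixpoint 0, a = 2, k = 1), whose exact fractional iterate (1+y)^(2^t) - 1 is known in closed form; the helper names are mine:

from mpmath import mp, mpf, sqrt, power

mp.dps = 30

a, k = mpf(2), mpf(1)                  # f(y) = a*y + k*y^2, fixpoint at 0
f = lambda y: a*y + k*y*y
finv = lambda y: sqrt(1 + y) - 1       # inverse of f near the fixpoint

def quad_t(t, y):
    # (a y + k y^2 + ...)^[t] ~ a^t y + k a^(t-1) (a^t - 1)/(a - 1) y^2
    at = power(a, t)
    return at*y + k * power(a, t - 1) * (at - 1) / (a - 1) * y*y

def frac_iter(t, x, m=25):
    # f^[t](x) ~ f^[m]( quad_t(t, f^[-m](x)) )
    y = x
    for _ in range(m):
        y = finv(y)                    # pull x toward the fixpoint
    y = quad_t(t, y)                   # fractional step where f is nearly quadratic
    for _ in range(m):
        y = f(y)                       # push back out
    return y

t_half, x0 = mpf(1)/2, mpf(3)/10
print(frac_iter(t_half, x0))               # half-iterate of f at x0
print(power(1 + x0, power(2, t_half)) - 1) # exact answer for this toy map

Increasing m pulls y deeper into the zone where the quadratic truncation is accurate, which is exactly the acceleration over the purely linear (Koenigs) truncation.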

regards

tommy1729
#4
(06/13/2022, 10:52 PM)tommy1729 Wrote: [...]

Hey, Tommy. Not to let you down or anything, but this is a known procedure. It can be done up to any polynomial degree, not just quadratic.

It essentially operates on the principle that instead of choosing a fixed point, we choose a fixed polynomial. So we know the first \(3\) coefficients of the Taylor polynomial near the fixed point. Iterate a slightly different Kneser formula, but do it for a fixed point which is a polynomial. You'll get a far faster solution. Same Kneser algorithm, you just do it for polynomials.
#5
(06/14/2022, 01:22 AM)JmsNxn Wrote: [...]

Hahaha, of course I am aware this fixpoint method is nothing new.
I have been aware of it for over 30 years; it has been talked about here for over 13 years.
Gottfried mentioned it here and on his site.
Carleman matrices, Bell matrices and sometimes Vandermonde matrices are related.
It has been used for over 2000 years.
It occurs in many, many books, webpages, wikis and handwritten notes since the 1930s.

There are even some trends and theorems about them for the higher degrees.
I probably wrote about it and related things before.

Now I do admit that I am not an expert at it though.
I do not know degree 5 by heart, nor the pattern, and I have questions and conjectures.

I'm willing to talk about it and I would love references.

I'm currently talking about it in another thread: comparing the methods, their acceleration techniques and, imo most important, if and when they converge to the same function.

But the main point here was to mention that I advise a faster method than Schröder/Kneser/Koenigs.
(I consider Koenigs the correct term: Kneser used a modified version of Koenigs, whereas Schröder is the equation it satisfies but not the method itself.
One can argue about it... Btw, I'm also aware that sometimes it is better to transform the Schröder equation to the Julia equation or the Böttcher equation or others. I'm also aware of what Leo wrote. I wrote some computer code to experiment. And I experimented with matrix splitting to accelerate the methods. There are connections to number theory too. I'm not a total beginner ;) )

I conjecture that using Taylor polynomials as the approximation is optimal on average (for random functions) for such formulas.

I appreciate your comments though.

It is hard to smell what people know or do not know, especially when you can't smell them.

***

It is not entirely clear to me which fixpoint method makes the idea work and produces analytic tetration.
But I think this "quadratic fixpoint formula" produces a working one.

I was not really claiming it as my own formula by giving it a name.
It is not "tommy's quadratic fixpoint formula" or anything like that.

It is just that I am unaware of its real name...
If it even has one.

"truncated polynomial method of degree 2" or "polynomial iteration mod x^3" do not sound so nice imo.



regards

tommy1729
#6
(06/14/2022, 06:02 PM)tommy1729 Wrote: [...]

Honestly, I don't know what it's called either. It's the central topic of Kouznetsov's method though. At least, to a degree.

We guess the solution to:

\[
\Psi^{-1}(\lambda z) = f(\Psi^{-1}(z)) + O(z^k)\\
\]

So that:

\[
\Psi^{-1}(z) = \sum_{j=0}^k a_j z^j + O(z^{k+1})\\
\]

For some fixed \(k\). You can choose the Kneser solution to this, or the Koenigs solution, or whatever (you're right, I should be clearer about this). Then you iterate:

\[
f^{\circ n}(\Psi^{-1}(\lambda^{s-n}))
\]

Which will converge to a super function \(F(s)\) such that \(f(F(s)) = F(s+1)\). You can create Kneser, or you can create Koenigs (Kouznetsov and Bo call this regular iteration; I think that should be the standard), or whatever (you can do this procedure for \(\beta\) too). The larger you let \(k\) be in the original solution, the faster the algorithm converges. Much of this is justified by choosing \(k\) very large (at least as Kouznetsov talks about it).
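Here's a toy sketch of that procedure in Python with mpmath. I take f(y) = 2y + y^2 with fixpoint 0 and multiplier lambda = 2, where everything is checkable in closed form (the exact superfunction with this normalization is exp(2^s) - 1); the coefficients a_j come from matching powers of z in \(\Psi^{-1}(\lambda z) = f(\Psi^{-1}(z))\):

from mpmath import mp, mpf, power, exp

mp.dps = 30

lam, c2 = mpf(2), mpf(1)                     # f(y) = lam*y + c2*y^2
f = lambda y: lam*y + c2*y*y

K = 12                                       # truncation degree k
a = [mpf(0), mpf(1)]                         # a_0 = 0, a_1 = 1 (normalization)
for j in range(2, K + 1):
    conv = sum(a[i] * a[j - i] for i in range(1, j))
    a.append(c2 * conv / (power(lam, j) - lam))   # match the z^j terms

psi_inv = lambda z: sum(a[j] * z**j for j in range(1, K + 1))

def F(s, n=20):
    # F(s) = lim f^{o n}( Psi^{-1}( lam^(s-n) ) )
    y = psi_inv(power(lam, s - n))           # lam^(s-n) is tiny, so truncation is harmless
    for _ in range(n):
        y = f(y)
    return y

s = mpf(3)/10
print(F(s), exp(power(2, s)) - 1)            # matches the exact superfunction
print(F(s + 1), f(F(s)))                     # superfunction equation F(s+1) = f(F(s))

And indeed, raising K lets you get away with fewer iterations n, which is the speed-up described above.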

You're doing something similar, but a little different, as you aren't looking for the same kind of thing per se. There are theories of complex dynamics which talk about this; I believe Milnor has a brief moment where he talks about it in his graduate textbook, though only briefly, as a "speed up the calculations" observation. I'll double check. This is mostly a Kouznetsov thing, but there are dynamics results (Milnor uses it for Abel functions if I'm not mistaken...).

What you've done is a little different: you've written the first part of the Fourier series:

\[
\Psi^{-1}(\lambda^s\Psi(z)) = a_0 + a_1\lambda^s + a_2 \lambda^{2s} + O(\lambda^{3s})\\
\]

And gone from there, in your super function formula. You can do this arbitrarily though. I'll look for the part in Milnor where he describes speed-ups in computation time (it might be the appendix), or I'm out of luck and I don't remember where I read this, lol. Milnor's so much my bible that I forget other references, lol.
#7
Bad news.

I think this method fails badly!

It seems to overshoot: it gives too-high values.

This is based on interval arguments rather than precise computation.

So it is theoretical, but simple algebra/geometry.

So I'm pretty convinced.

***

It seems that nice continuous iterations, or their asymptotics, require:

1) the fixpoint derivative remains, or converges to, some t > 0;

2) a fixpoint derivative > 1 has f''(x) > 0 close to the fixpoint,

or a fixpoint derivative < 1 has f''(x) < 0 close to the fixpoint.

Minimum demands for single-fixpoint methods!

Of course this is very general... too general to prove or classify, maybe.

But I'm getting convinced.

Btw, the Gaussian and 2sinh methods do not have this problem
(and probably neither do the similar ones).


regards 

tommy1729
#8
I returned to the simple idea:

fractional iterates of

( 1 + x/p )^p

for large prime p,

expanded at the real fixpoint.

This avoids the overshoot.

Notice this is a bit of a base limit in a sense:

( 1 + x/x )^x = 2^x

for p close to x ...
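As a quick sketch (Python with mpmath; p = 5 is just an illustrative choice, whereas the idea calls for large prime p), one can locate the real fixpoint and its multiplier:

from mpmath import mp, mpf, findroot

mp.dps = 30
p = 5                                    # odd, so (1 + x/p)^p is real for x < -p too

f  = lambda x: (1 + x/mpf(p))**p
df = lambda x: (1 + x/mpf(p))**(p - 1)   # the derivative f'(x)

x0 = findroot(lambda x: f(x) - x, -13)   # real fixpoint on the negative axis
print(x0)                                # about -13.4
print(df(x0))                            # multiplier at the fixpoint, about 8

The expansion at this fixpoint then proceeds exactly as with the quadratic fixpoint formula earlier in the thread.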

regards 

tommy1729

