08/06/2022, 09:56 PM
I'm sorry, I have no precise answers.
And what I say may already be well known.
But maybe the answer lies in these related links and their sublinks :
https://math.stackexchange.com/questions...e-of-sinhz
https://mathoverflow.net/questions/45608...6765#46765
Then again, Gottfried and Sheldon were already there, so this might not be so helpful.
But they talk about Gevrey classes and how they should be relevant.
Maybe that is the key!?
***
I also want to make the (annoying?) comment that this idea of asymptotics and divergent summation is weird here ... we know the half-iterate is pretty close to linear ...
So I'm not really sure that idea is solid.
Also, divergent sums and asymptotics are usually (ALWAYS?!) used for analytic continuation, BUT here we have radius 0 and even 2 distinct solutions depending on the direction we travel.
Also, analytic continuation, divergent summation etc. assume the function is equal to its Taylor series ...
But WHAT DOES EQUAL TO ITS TAYLOR SERIES MEAN FOR RADIUS 0??
Is that even a meaningful concept??
Does a radius-0 Taylor series even describe/imply a unique function? Or just a unique type of singularity?
Most methods, proofs and operators from complex analysis assume an analytic setting, so this is really an issue tool-wise and proof-wise.
***
What actually makes the derivatives of one half-iterate grow faster than the derivatives of half-iterates of other functions??
Do the derivatives of the half-iterate of exp(x) - 1 grow faster than those of the half-iterate of sinh(x)??
Why or why not ?
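One can at least compute and compare the formal Taylor coefficients. A minimal sketch in Python (the recursive solver, the truncation order N, and the variable names are my own ad hoc choices, not a standard routine): it solves g(g(x)) = f(x) coefficient by coefficient for any f tangent to the identity, using exact rational arithmetic so no rounding hides the growth.

```python
from fractions import Fraction as F

def compose(f, g, N):
    """Coefficients of f(g(x)) up to x^N; f, g given as lists f[0..N] with f[0] = g[0] = 0."""
    out = [F(0)] * (N + 1)
    gk = [F(0)] * (N + 1)
    gk[0] = F(1)                      # g^0 = 1
    for k in range(1, N + 1):
        new = [F(0)] * (N + 1)        # g^k = g^(k-1) * g, truncated at x^N
        for i in range(N + 1):
            if gk[i]:
                for j in range(1, N + 1 - i):
                    new[i + j] += gk[i] * g[j]
        gk = new
        for n in range(N + 1):
            out[n] += f[k] * gk[n]
    return out

def half_iterate(f, N):
    """Solve g(g(x)) = f(x) formally, for f = x + f_2 x^2 + ... (tangent to identity).
    At order n the unknown g_n appears in g(g(x)) exactly twice (once from the outer
    and once from the inner copy of g), so it can be solved for one order at a time."""
    g = [F(0)] * (N + 1)
    g[1] = F(1)
    for n in range(2, N + 1):
        c = compose(g, g, n)[n]       # x^n coefficient computed with g[n] still 0
        g[n] = (f[n] - c) / 2
    return g

N = 12
fact = [1]
for k in range(1, N + 1):
    fact.append(fact[-1] * k)
f_exp = [F(0)] + [F(1, fact[n]) for n in range(1, N + 1)]                       # exp(x) - 1
f_sinh = [F(0)] + [F(1, fact[n]) if n % 2 else F(0) for n in range(1, N + 1)]   # sinh(x)

t = half_iterate(f_exp, N)    # t_2 = 1/4, t_3 = 1/48, t_4 = 0, ...
s = half_iterate(f_sinh, N)   # s_2 = 0, s_3 = 1/12, ...
```

Printing further coefficients of t and s side by side is then a direct (if non-rigorous) way to probe the growth question above.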
***
How are the derivatives approximating those of exp(x) - 1?
I mean,
the 0.9999 iterate of exp(x) - 1 should give derivatives close to those of exp(x) - 1 itself ... yet the radius remains 0 ...
very confusing.
***
What happens when we compare the Taylor series of the 0.499 iterate with that of the 0.5001 iterate?
Is their average that of the 0.5 iterate??
Does the 0.6 iterate divided by the 0.4 iterate have a positive radius??
still very confused.
***
I thought about associating it with a Taylor series that has similar growth but nonzero radius.
So if the half-iterate of exp(x) - 1 = sum t_n x^n,
then study sum t_n u_n x^n (possibly with nonzero radius).
Of course, if we can prove that sum t_n u_n x^n with a given u_n has nonzero radius, we have found a bound on the t_n.
I have a theorem that might help ... however it is of course actually meant for analytic functions, so I'm unsure if it applies here!!??!!
Anyway, here it is:
(And this can MAYBE be used to transform the given Taylor series into a related Taylor series with positive coefficients, which kind of resembles fake function theory ideas, in particular if we also manage to get the radius to infinity. Which was the reason I came up with the idea/theorem in the first place, btw.)
Let f(x) = sum f_n x^n
Let g(x) = sum g_n x^n
then
T(x) = sum f_n g_n x^n
2 pi T(x) = integral from 0 to 2pi of f( sqrt(x) exp(i t) ) g( sqrt(x) exp(- i t) ) dt
...
Making that integral work and practical is another matter; it's just a starting idea.
Does the formula hold if f(x) or g(x) has radius 0, at least in the limit as x goes to 0? And if so, is that what it means to "equal its Taylor series, while having radius 0"??
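For analytic f and g the formula is just the orthogonality of exp(int) on the circle, and it can be sanity-checked numerically. A sketch assuming f = g = exp (my own choice of test function, for which the Hadamard product sum f_n g_n x^n = sum x^n/(n!)^2 is known in closed form as the Bessel function I_0(2 sqrt(x))), with a plain Riemann sum for the integral:

```python
import cmath
import math

def hadamard(f, g, x, N=4000):
    """Approximate (1/2pi) * integral_0^{2pi} f(sqrt(x) e^{it}) g(sqrt(x) e^{-it}) dt
    by a Riemann sum; for analytic f, g this equals sum f_n g_n x^n."""
    s = cmath.sqrt(x)
    total = 0
    for k in range(N):
        t = 2 * math.pi * k / N
        total += f(s * cmath.exp(1j * t)) * g(s * cmath.exp(-1j * t))
    return total / N

x = 0.5
num = hadamard(cmath.exp, cmath.exp, x)
# with f = g = exp we have f_n = g_n = 1/n!, so the Hadamard product is
# sum x^n / (n!)^2  (= I_0(2 sqrt(x)))
series = sum(x**n / math.factorial(n)**2 for n in range(25))
```

Whether any version of this survives when one factor has radius 0 is exactly the open question; the sketch only confirms the analytic case.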
How about perturbation theory?
Should we study exp(x) - v
where v goes to 1?
That gives us 2 real fixpoints that merge as we go from v = 2 to v = 1 ...
Now many say the radius is the problem, but perturbation theory IMO suggests one could also say the radius is not the problem; rather, the lack of a closed form or asymptotic for half-iterates is the issue.
Might it not be easier to study half-iterates at the two fixpoints of exp(x) - v and take the limit v -> 1??
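The two real fixpoints and their collision at v = 1 are easy to see numerically. A sketch (the bisection brackets and tolerances are my own ad hoc choices): F(x) = exp(x) - x - v has its minimum 1 - v at x = 0, so for v > 1 there is one real fixpoint on each side of 0, and both behave like ±sqrt(2(v-1)) as v -> 1+.

```python
import math

def fixpoints(v):
    """Real fixpoints of x -> exp(x) - v, i.e. roots of F(x) = exp(x) - x - v.
    F has minimum 1 - v at x = 0, so for v > 1 there is a root on each side of 0."""
    def F(x):
        return math.exp(x) - x - v
    def bisect(a, b):              # assumes F changes sign on [a, b]
        for _ in range(200):
            m = (a + b) / 2
            if F(a) * F(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2
    return bisect(-40.0, 0.0), bisect(0.0, 40.0)

lo, hi = fixpoints(2.0)       # two distinct real fixpoints for v = 2
lo1, hi1 = fixpoints(1.0001)  # nearly merged at x = 0 for v close to 1

res_lo = math.exp(lo) - 2.0 - lo   # residuals of the fixpoint equation at v = 2
res_hi = math.exp(hi) - 2.0 - hi
```

So the perturbed family really does interpolate between the hyperbolic two-fixpoint picture and the parabolic v = 1 case.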
***
Another idea is to use different series expansions for the half-iterate of exp(x) - 1,
such as
sum a_n x^(b_n)
for real a_n and real b_n.
***
I also wonder about using the equation itself:
define (not trying to solve): g(g(x)) = exp(x) - 1.
Try to solve
g( exp(x) - 1 ) = exp( g(x) ) - 1,
where one side plugs exp(x) - 1 into the Taylor series of g and the other takes exp of the whole Taylor series.
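Note that the truncated formal solution of g(g(x)) = exp(x) - 1 automatically satisfies this commutation equation up to the truncation order, since g(exp(x) - 1) = g(g(g(x))) = exp(g(x)) - 1. A sketch that solves for the coefficients recursively (an ad hoc order-by-order solver of my own, in exact rational arithmetic) and then checks both sides:

```python
from fractions import Fraction as F

def compose(f, g, N):
    """Coefficients of f(g(x)) up to x^N, for series with zero constant term."""
    out = [F(0)] * (N + 1)
    gk = [F(0)] * (N + 1)
    gk[0] = F(1)                      # g^0 = 1
    for k in range(1, N + 1):
        new = [F(0)] * (N + 1)        # g^k = g^(k-1) * g, truncated at x^N
        for i in range(N + 1):
            if gk[i]:
                for j in range(1, N + 1 - i):
                    new[i + j] += gk[i] * g[j]
        gk = new
        for n in range(N + 1):
            out[n] += f[k] * gk[n]
    return out

N = 10
fact = [1]
for k in range(1, N + 1):
    fact.append(fact[-1] * k)
f = [F(0)] + [F(1, fact[n]) for n in range(1, N + 1)]   # exp(x) - 1

# solve g(g(x)) = f(x) one coefficient at a time (g_n enters linearly with factor 2)
g = [F(0)] * (N + 1)
g[1] = F(1)
for n in range(2, N + 1):
    g[n] = (f[n] - compose(g, g, n)[n]) / 2

lhs = compose(g, f, N)   # g(exp(x) - 1)
rhs = compose(f, g, N)   # exp(g(x)) - 1, as formal composition
```

So as a defining equation the commutation relation is weaker than g(g(x)) = exp(x) - 1 itself: every formal iterate satisfies it, which is why it does not pin down g on its own.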
***
Isn't this covered in the books on dynamics, divergent series or Taylor series?
It seems such a crucial, intuitive question.
Then again, the IMO very intuitive idea of fake functions also did not turn up many related books.
I consider this a very important topic!!
kudos and greetings to all the ones into this question!
regards
tommy1729