Posts: 903
Threads: 130
Joined: Aug 2007
09/17/2009, 09:49 AM
(This post was last modified: 09/17/2009, 09:50 AM by Gottfried.)
(09/17/2009, 09:08 AM)bo198214 Wrote:
(09/17/2009, 07:43 AM)Gottfried Wrote: If we have a base \( 1 < b \le \exp(\exp(-1)) \), the progression of consecutive \( f^{\circ k}(0) \) approaches zero because the values approach a limit, so the inner sums should be convergent.
Not sure whether I understand you. Where does the iteration of f occur in the inner sum?
... here:
\(
\sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k}
\)
there is the \( f^{\circ(n-1)}(0) \) in the inner sum, where n is the running index
--- well, oops, or was possibly the (n-1)th derivative meant?
Gottfried
Gottfried Helms, Kassel
Posts: 368
Threads: 44
Joined: Sep 2009
09/17/2009, 10:52 AM
(This post was last modified: 09/17/2009, 10:56 AM by mike3.)
(09/17/2009, 09:49 AM)Gottfried Wrote: ... here:
\(
\sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k}
\)
there is the \( f^{\circ(n-1)}(0) \) in the inner sum, where n is the running index
--- well, oops, or was possibly the (n-1)th derivative meant?
It means the derivative. \( f^{(n)}(x) \) is standard notation for the nth derivative, while \( f^n(x) \) is iteration (except for trig functions with a positive integer superscript, where it denotes a power of the function's value, apparently for historical reasons).
Posts: 1,630
Threads: 106
Joined: Aug 2007
09/17/2009, 11:21 AM
(This post was last modified: 09/17/2009, 11:22 AM by bo198214.)
(09/17/2009, 10:52 AM)mike3 Wrote: while \( f^n(x) \) is iteration (except for trig functions with a positive integer superscript, where it denotes a power of the function's value, apparently for historical reasons).
Not really for historical reasons. Whenever you write \( X^n \) you mean \( X \) multiplied \( n \) times; however, it is not always clear which multiplication is meant.
For sets it can be the cross product or elementwise multiplication; for functions it can be composition or multiplication. As multiplication is more common than composition, I would always mark a compositional power differently, for example \( f^{\circ n} \) (which is used by Ecalle and in modern texts on complex dynamics), \( f^{[n]} \), or \( f^{\langle n\rangle} \) (the latter two are, imho, rather used in mathematical physics and general dynamical systems).
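For example, the three readings of a superscript 2 on the same function:
\(
f^{\circ 2}(x) = f(f(x)), \qquad f^2(x) = f(x)\cdot f(x), \qquad f^{(2)}(x) = \frac{d^2}{dx^2} f(x).
\)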
And it seems that Gottfried took the mark \( f^{(n)} \) as a compositional power, perhaps because its parentheses look similarly round to the circle in \( f^{\circ n} \).
PS: though this discussion is a bit off this thread, there are notation threads to discuss those topics.
Posts: 1,630
Threads: 106
Joined: Aug 2007
(09/17/2009, 12:34 AM)mike3 Wrote: \( b_k = \sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k} \)
Actually it's not so difficult to see that the sum diverges for functions with finite convergence radius. Set \( f_n = \frac{f^{(n)}(0)}{n!} \), the coefficients of the corresponding power series. If the function has a finite convergence radius \( r \), then by the root test \( r=\frac{1}{\limsup_{n\to\infty} \sqrt[n]{|f_n|}} \), there is a subsequence that behaves asymptotically like \( 1/r^n \); roughly, \( f_n \sim \frac{1}{r^n} \).
Now we use the asymptotic formula
(09/17/2009, 09:35 AM)bo198214 Wrote: \( |B_{2n}|\sim 4\sqrt{\pi n} \left(\frac{n}{\pi e}\right)^{2n} \)
or, written a bit more sloppily,
\( |B_{n}|\sim c\sqrt{n} \left(\frac{n}{d}\right)^{n} \) with \( d = 2\pi e \),
and substitute it into the summands from the beginning:
\( \frac{1}{n r^n} {n \choose k} c\sqrt{ n-k} \left(\frac{n-k}{d}\right)^{n-k} \)
but already
\( \left(\frac{n-k}{rd}\right)^{n-k} \)
tends strongly to infinity as \( n\to\infty \). (Yes, I ignored signs; this is only a very rough, though I think convincing, argument, not a proper proof.)
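A quick numeric illustration (a hypothetical test case, not part of the argument above): take \( f(x) = \frac{1}{1-x} \), so \( f_n = 1 \) and \( r = 1 \). The k = 0 terms of the inner sum then reduce to \( B_n/n \), whose magnitudes already explode roughly like \( (n/2\pi e)^n \):
Code: \\ f(x) = 1/(1-x): f^(n-1)(0)/n! = 1/n and r = 1,
\\ so the k = 0 summands are B_n/n; their magnitude blows up superexponentially
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));
for(n = 10, 50, if(n % 10 == 0, print(n, ": ", abs(bernoulli(n) / n) * 1.0)))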
Posts: 368
Threads: 44
Joined: Sep 2009
Hmm. But could there be a way to further extend the notion of continuum sum to cover these cases? It seems the Mathematica program was able to numerically solve the equation and generate the "correct" graph (for tetration), so there might be some sort of extension.
Posts: 903
Threads: 130
Joined: Aug 2007
09/17/2009, 10:35 PM
(This post was last modified: 09/17/2009, 11:26 PM by Gottfried.)
Remark: yepp, indeed I misread the (n)-construct in the discussed inner sum: I took it as iteration, not as the derivative; sorry. It's sometimes difficult to switch between considering iterates and then derivatives; a short remark in the near context, say "// derivatives", may be helpful... <sigh>
The estimate for \( |B_n|/n! \) is about \( 2/(2\pi)^n \), which is approached early with very good accuracy. Here is the list of ratios \( \frac{B_n}{n!} \cdot \frac{(2\pi)^n}{2} \) (only nonzero entries listed):
Code: 0.500000000000
1.64493406685
-1.08232323371
1.01734306198
-1.00407735620
1.00099457513
-1.00024608655
1.00006124814
-1.00001528226
1.00000381729
-1.00000095396
1.00000023845
-1.00000005961
1.00000001490
-1.00000000373
1.00000000093
-1.00000000023
1.00000000006
-1.00000000001
1.00000000000
-1.00000000000
1.00000000000
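(These ratios are exactly \( (-1)^{n+1}\zeta(2n) \), which explains the rapid approach to \( \pm 1 \). A one-liner to reproduce the list, using PARI/GP's built-in bernfrac:)
Code: \\ ratio (B_(2n)/(2n)!) * (2*Pi)^(2n) / 2 for n = 0..21
for(n = 0, 21, print(bernfrac(2*n) / (2*n)! * (2*Pi)^(2*n) / 2.0))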
This determines the radius of convergence when some function is applied to the Bernoulli numbers in the form of the inner sum, depending on that function's own Taylor/Maclaurin coefficients and its parameter x.
If, for instance, the function exp(x) is applied, then the derivatives at 0 are all 1. These are cofactored with the Bernoulli numbers divided by factorials: thus the radius of convergence for this function in this application is \( |x| < 2\pi \).
If for another function the derivatives grow at some geometric rate, say \( q^n \), the radius of convergence for that function in this application is \( |x| < 2\pi/q \).
If for another function the derivatives grow at some hypergeometric rate, then the radius of convergence for that series in this form of application is zero.
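Spelled out via the root test, with derivative magnitudes \( |f^{(n)}(0)| \sim q^n \) and \( |B_n|/n! \sim 2/(2\pi)^n \):
\(
\frac{1}{R} = \limsup_{n\to\infty} \sqrt[n]{q^n \cdot \frac{2}{(2\pi)^n}} = \frac{q}{2\pi}, \qquad R = \frac{2\pi}{q}.
\)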
Additional remark: for the power series of fractional iterates under regular iteration I found in 2007 and 2008 [*1,*2] that the exponent of the growth rate of the Maclaurin coefficients, beginning at some finite index k, is roughly \( \binom{n-k}{2} \) (I hope I recall that right), so asymptotically for \( n \to \infty \) about \( n(n-1)/2 = n^2/2 - n/2 \); hence their radius of convergence with respect to the parameter x is also zero (as originally stated by I. N. Baker), and the series is not even Borel-summable, since this grows faster than the factorials.
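A one-line check of that last claim, assuming the coefficients indeed grow like \( |a_n| \approx c^{n^2/2} \) for some \( c > 1 \):
\(
\sqrt[n]{|a_n|} \approx c^{n/2} \to \infty, \qquad \sqrt[n]{\frac{|a_n|}{n!}} \approx \frac{c^{n/2}}{n/e} \to \infty,
\)
so both the series itself and its Borel transform have radius of convergence zero.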
[update] Well, on a second read I think one more remark is useful. If we view the above double sum as a two-dimensional table (or matrix), where the inner sum runs over a column, then my discussion here concerns only one column (namely the first, while the original problem concerns the row sums and then the column sum of the row sums). For the convergence of the whole sum (including the outer sum over all columns), this consideration of the range of convergence would have to be extended.
[/update]
Gottfried
[*1] see coefficients fractional height 1
[*2] see also coefficients fractional height 2
Gottfried Helms, Kassel
Posts: 368
Threads: 44
Joined: Sep 2009
(09/17/2009, 11:05 PM)Ansus Wrote: The fact is that during the calculation I never applied the sum to a function with a limited radius of convergence: at the first iteration, to the function x+1; at the second, to erfi(x). Both have infinite convergence radius. The vertical asymptote emerges only after an infinite number of iterations; at any finite iteration the approximating function has no asymptote.
So you were iterating it symbolically, in which case power-series considerations would not apply. But I'm sure at some point the symbolic iteration would yield no closed form (it would run out of special functions), and so could not continue past that point. So to use the formula for more accuracy, and for all sorts of bases (incl. the fabled b = 0.04 and other bases between 0 and \( e^{-e} \)), would require an extension of continuum sum that can work on something like a power series or similar.
Posts: 368
Threads: 44
Joined: Sep 2009
09/18/2009, 07:26 AM
(This post was last modified: 09/18/2009, 07:26 AM by mike3.)
So then, what sort of method could be used to do high levels of iteration? Obviously it won't work when power series are the thing being iterated upon, at least with conventional summation, because one is heading toward a series with limited convergence radius, and the Bernoulli sum chokes on that due to the growth rate of the coefficients. I'm not sure about "divergent summation" techniques: things like Euler summation introduce extra parameters, whose values I'm not sure how to choose. (A good testbed might be the coefficients given at http://en.citizendium.org/wiki/Tetration...on_at_zero for the Taylor expansion of the tetrational function to the base e at z = 0, recovered via the Cauchy integral.)
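For reference, the Euler \( (E,q) \) means (as in Hardy's "Divergent Series") replace the partial sums \( s_n \) by
\(
t_n = \frac{1}{(q+1)^{n+1}} \sum_{m=0}^{n} \binom{n}{m} q^{n-m} s_m,
\)
where \( q > 0 \) is exactly the kind of free parameter I mean.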
Posts: 368
Threads: 44
Joined: Sep 2009
Well, I've put together something that uses the same Bernoulli formula, but with a divergent-summation technique thrown in, to see if it can be made to converge on something.
I had a thread on the sci.math newsgroup (as that group is about more general math questions than just tetration) talking about continuum sums and whether divergent summation could be used to extend the range of the Bernoulli formula:
http://groups.google.com/group/sci.math/...ee12?hl=en
Using the methods I was told about there, I put together this PARI/GP code. For base e and parameters k = 1.4, h = 0.3, I got \( ^{1/2} e \) as approximately 1.646354234 (from series evaluation minus residual = 1.6463542340349813076692026966465493046, or series evaluation plus residual = 1.6463542341294601977391298318188518176). How does that agree with other methods of tetration? Choosing all the parameters (incl. the number of terms in the series) to get good accuracy seems really tricky, especially for bases less than 1, where complex numbers are used (I haven't yet figured out the right settings for the fabled \( 0 < b < e^{-e} \) range, namely b = 0.04). Any ideas on that? Can you squeeze more accuracy out (how about trying b = sqrt(2) and seeing whether it can be made to agree with the regular iteration to, say, 24 places)?
Code: base = exp(1); /* base of tetration */
pdegree = 65; /* number of terms to use in the series minus 1 */
\ps 65 /* should equal pdegree */
dim = pdegree+2;
DR = matrix(dim, dim, r, c, if(c <= r, 1, 0));
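/* (DR * v)[r] = v[1] + ... + v[r]: DR maps a column vector to its partial
 * sums. PkPow below builds row weights from the parameters k and h; rownorm
 * scales each row to sum to 1, so PkPowSum applied to a term vector yields
 * weighted averages of its partial sums (the divergent-summation scheme). */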
PkPow(k,h) = matrix(dim,dim,r,c,\
if(r>=c,\
binomial(r-1,c-1)*h^(r-c) * ((r-1)!/(c-1)!)^(k-1)\
) )
rownorm(A) = {
local(R=A);
for(r=1,matsize(A)[1],
R[r,] = A[r,]/sum(c=1,matsize(A)[2],A[r,c]));
return(R);
}
PkPowSum(k, h) = rownorm(PkPow(k, h)) * DR;
/* Bernoulli number B_n */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));
/* Fill array with sum terms for a coefficient b_k. */
MakeBernTermArray(k, p) = {
local(nmax = dim);
local(v=vector(nmax));
v[1] = 0;
for(n=1, nmax-1, v[n+1] = polcoeff(p, n-1)/n*binomial(n,k)*bernoulli(n-k));
return(v);
}
MakeBernTermColVector(k, p) = mattranspose(MakeBernTermArray(k, p));
/* Divergent sum based continuum sum */
ContSumPoly(p, k, h) = {
local(p2 = 0);
local(PPS = PkPowSum(k, h));
for(j=0,dim-1,p2=p2+((PPS*MakeBernTermColVector(j, p))[dim]*x^j)); /* loop index j (distinct from the summation parameter k) */
return(p2 - subst(p2, x, 0));
}
/* Compute exp of a power series */
exppoly(p) = truncate(base^p);
/* Multiply by log(base)^x */
mullogb(p) = { local(multpoly = truncate(log(base)^x)*p, outp = 0);
for(i=0,pdegree,outp=outp+polcoeff(multpoly,i,x)*x^i);
return(outp); }
/* Compute definite integral from -1 to x of a power series */
defint(p) = { local(integ=intformal(p)); return(integ - subst(integ, x, -1)); }
/* Do one iteration of the formula */
iterate(p, k, h) = defint(mullogb(exppoly(ContSumPoly(p, k, h))));
/* Normalized iteration */
iternorm(p, k, h) = { local(pi=iterate(p, k, h)); return(pi/polcoeff(pi, 0)); }
/* N iterations on an initial guess */
niters(n, p, k, h) = { local(result=p); for(i=1,n,result=iterate(result,k,h)); return(result); }
/* N normalized iterations on an initial guess */
nnormiters(n, p, k, h) = { local(result=p); for(i=1,n,result=iternorm(result,k,h)); return(result); }
/* Use polynomial */
sumpoly(p, evalpoint) = subst(p, x, evalpoint);
/* Compute residual of tetration */
residual(p) = log(sumpoly(p, 0.5))/log(base) - sumpoly(p, -0.5);
/* Real tetration */
tetcrit(X) = sumpoly(tetpoly, X)
realtet(X) = {
if(X < -0.5, return(log(realtet(X+1))/log(base)));
if(X > 0.5, return(base^realtet(X-1)));
return(tetcrit(X));
}
/* To use: run tetpoly = nnormiters(<number of iterations>, <initial guesstimate for series>, <k>, <h>). For
* the test run I used k = 1.4, h = 0.3, 32 iterations, initial guess = 1. residual(tetpoly) gives
* the residual (how well the function solves F(x+1) = exp(F(x))). Then realtet(X) will give the approximation
* of the tetration ^X b using the computed series.
*/
Posts: 1,630
Threads: 106
Joined: Aug 2007
09/21/2009, 02:29 PM
(This post was last modified: 09/22/2009, 02:29 AM by bo198214.)
I could never imagine that this problem had not already been considered by mathematicians. Indeed, I found a first reference showing that there is a whole theory about the topic, by Nørlund. Look into some free pages of the book
The Calculus of Finite Differences by Louis Melville Milne-Thomson (2000).
EDIT: Actually I see that the book was already published in 1933. You can download a copy here.