Tetra-series
#21
Quote: \( AS(x+1) = 1 + x^2 + \frac{1}{2}x^3 + \frac{4}{3}x^4 + \frac{19}{12}x^5 + \cdots \)
Wait... the series above only works for even approximants, I also got:

\( AS(x+1) = -x - x^3 - x^4 - \frac{29}{12}x^5 - \frac{13}{4}x^6 + \cdots \)

for odd approximants, and they can't both be right... what am I doing wrong?
#22
(10/31/2009, 10:38 AM)andydude Wrote: This is probably an old result. For the function

\( AS(x) = \sum_{k=0}^{\infty} (-1)^k ({}^{k}x) \)

I found using Carleman matrices that

\( AS(x+1) = 1 + x^2 + \frac{1}{2}x^3 + \frac{4}{3}x^4 + \frac{19}{12}x^5 + \cdots \)

is this related to the series above?

Hmmm, don't recognize this coefficients. Would you mind to show more of your computations?
Gottfried Helms, Kassel
#23
(10/31/2009, 11:01 AM)andydude Wrote:
Quote: \( AS(x+1) = 1 + x^2 + \frac{1}{2}x^3 + \frac{4}{3}x^4 + \frac{19}{12}x^5 + \cdots \)
Wait... the series above only works for even approximants, I also got:

\( AS(x+1) = -x - x^3 - x^4 - \frac{29}{12}x^5 - \frac{13}{4}x^6 + \cdots \)

for odd approximants, and they can't both be right... what am I doing wrong?

Alternating summation of the coefficients requires averaging (or a stronger summation method) if the sequence being summed does not converge fast enough. For example, at the linear term the alternating sum over the iterates gives 1x - 1x + 1x - 1x + ..., and to get the limit 0.5x one must employ Cesàro or Euler summation.

Note that the same problem gets worse if the sequence of coefficients at some power of x also has a growth rate (is divergent); then for each coefficient you need an appropriately chosen order of the Cesàro/Euler summation.
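The averaging step can be sketched in a few lines of Python: the (C,1) Cesàro means of the partial sums of 1 - 1 + 1 - 1 + ... settle at the assigned value 1/2.

```python
# Partial sums of 1 - 1 + 1 - 1 + ... oscillate between 1 and 0; their
# running averages (the Cesaro (C,1) means) settle at the assigned value 1/2.
terms = [(-1) ** k for k in range(1000)]

partials, s = [], 0
for t in terms:
    s += t
    partials.append(s)

means, acc = [], 0
for m, p in enumerate(partials, start=1):
    acc += p
    means.append(acc / m)
```

Euler summation plays the same role but converges faster and can also handle geometrically growing terms.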

If you use powers of a *triangular* Carleman matrix X for the iterates, then in many cases you can apply the geometric series for matrices,

AS = I - X + X^2 - X^3 + ... = (I + X)^-1

and use the coefficients of its second row/column.
If X is not triangular, there is still a chance of getting a usable approximation, since the eigenvalues of AS may behave more nicely than those of X: an eigenvalue x > 1 of X becomes the eigenvalue 1/(1+x) < 1 in AS. So from a set of increasing eigenvalues of X (all >= 1), as well as from a set of decreasing ones (0 < all <= 1), you get a set of eigenvalues of AS with 0 < all < 1, which makes the associated power series for AS(x) well-behaved.
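As a toy sketch of this matrix geometric series (using a made-up triangular example, the truncated Carleman-type matrix of f(x) = x/2 + x^2/4, not an actual tetration matrix): the eigenvalue 1 of X makes the plain partial sums oscillate, and averaging two consecutive partial sums, just like averaging the even and odd approximants above, reproduces (I + X)^-1.

```python
from fractions import Fraction as F

n = 4

# Made-up triangular example: row k of X holds the coefficients of
# f(x)^k up to x^3 for the toy function f(x) = x/2 + x^2/4.
X = [[F(1), F(0),    F(0),    F(0)],
     [F(0), F(1, 2), F(1, 4), F(0)],
     [F(0), F(0),    F(1, 4), F(1, 4)],
     [F(0), F(0),    F(0),    F(1, 8)]]

I = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add_scaled(A, B, s):
    return [[A[i][j] + s * B[i][j] for j in range(n)] for i in range(n)]

# Partial sums P_m = I - X + X^2 - ... +- X^m of the matrix geometric series.
P, term, sign, prev = I, I, 1, I
for _ in range(60):
    term = matmul(term, X)
    sign = -sign
    prev, P = P, add_scaled(P, term, sign)

# The eigenvalue 1 of X makes the partial sums oscillate, so average two
# consecutive partial sums (the "even" and "odd" approximants):
AS = [[(P[i][j] + prev[i][j]) / 2 for j in range(n)] for i in range(n)]

# Independent check: invert the triangular matrix I + X by back-substitution.
M = add_scaled(I, X, 1)
Inv = [[F(0)] * n for _ in range(n)]
for i in range(n):
    Inv[i][i] = 1 / M[i][i]
    for j in range(i + 1, n):
        Inv[i][j] = -sum(Inv[i][k] * M[k][j] for k in range(i, j)) / M[j][j]

err = max(abs(float(AS[i][j] - Inv[i][j])) for i in range(n) for j in range(n))
```

The top-left entry belongs to the eigenvalue 1 of X and comes out as exactly 1/(1+1) = 1/2 after the averaging; the remaining entries agree with (I + X)^-1 up to a geometrically small truncation error.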
Gottfried Helms, Kassel
#24
If I Euler-sum the lists of coefficients that I get with the formula in the superroot message for heights h = 0..63, the resulting power series seems to begin with
Code:
0 + 1/2 x - 1/2 x^2 + 1/4 x^3 - 1/6 x^4 + 5/12 x^5 + ??? + ??? + ...
where for the question marks I would need higher Euler orders.

Since it would be nicer to avoid the Euler summation altogether, we can use a trick:
Let S(x,h) be the formal power series of height h,

S(x,h) = (1+x)^(1+x)^...^(1+x) - 1

and let S(x) be the series of the limit as h -> infinity. Then, by definition,

AS(x) = S(x,0) - S(x,1) + S(x,2) - ...    // Euler-sum

Since, for each index, the coefficients of S(x,h) converge with increasing height h to those of the S(x) series, I compute the difference D(x,h) = S(x,h) - S(x) and rewrite

AS(x) = D(x,0) - D(x,1) + D(x,2) - ... + aeta(0)*S(x)

where aeta(0) is the alternating zeta (Dirichlet eta) function at 0, meaning

aeta(0) = 1 - 1 + 1 - 1 + ... = 1/2

Because the coefficients with index k < h in D(x,h) vanish, each coefficient of AS(x) becomes a finite sum and I get exact rational values for the coefficients of the formal power series of AS(x)
Code:
0 * x^0
                     1/2 * x^1
                    -1/2 * x^2
                     1/4 * x^3
                    -1/6 * x^4
                    5/12 * x^5
                  -23/80 * x^6
                  97/720 * x^7
              -1801/3360 * x^8
                619/5040 * x^9
            -4279/15120 * x^10
          106549/151200 * x^11
        2586973/5702400 * x^12
        2111317/1425600 * x^13
   777782953/1037836800 * x^14
  3321778277/4358914560 * x^15
...

In floating-point display this is
Code:
0 * x^0
    0.500000000000 * x^1
   -0.500000000000 * x^2
    0.250000000000 * x^3
   -0.166666666667 * x^4
    0.416666666667 * x^5
   -0.287500000000 * x^6
    0.134722222222 * x^7
   -0.536011904762 * x^8
    0.122817460317 * x^9
  -0.283002645503 * x^10
   0.704689153439 * x^11
   0.453663895903 * x^12
    1.48100238496 * x^13
   0.749427032266 * x^14
   0.762065470951 * x^15
   -2.02559608779 * x^16
   -4.93868722102 * x^17
   -11.5286692883 * x^18
   -17.6563985780 * x^19
   -24.5338937285 * x^20
   -22.4594923016 * x^21
   -4.19284436502 * x^22
    53.8185412606 * x^23
    176.092085183 * x^24
    405.014519784 * x^25
    772.287054778 * x^26
    1291.34671701 * x^27
    1872.07516409 * x^28
    2213.27210256 * x^29
    1537.71737942 * x^30
   -1795.52581418 * x^31
...
Because the constant term of S(x,h)+1 equals 1 for every h, its alternating sum is 1-1+1-1..., and we should set the constant term of AS(x) to 1/2.
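The whole D(x,h) computation can be sketched in Python with exact rational arithmetic; the one assumption made here is the height convention S(x,0) = x, which reproduces the table above.

```python
from fractions import Fraction as F

N = 8          # truncate all series after x^N
H = N + 3      # number of heights used in the alternating sum

def mul(a, b):
    """Truncated Cauchy product of two power series (coefficient lists)."""
    c = [F(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def exp_minus_1(u):
    """exp(u) - 1 for a series u with u[0] = 0, by summing u^j / j!."""
    res = [F(0)] * (N + 1)
    term = [F(1)] + [F(0)] * N
    for j in range(1, N + 1):
        term = [t / j for t in mul(term, u)]
        res = [r + t for r, t in zip(res, term)]
    return res

# log(1+x) as a truncated power series
log1p = [F(0)] + [F((-1) ** (k + 1), k) for k in range(1, N + 1)]

def step(S):
    """S(x,h) -> S(x,h+1) = (1+x)^(1+S(x,h)) - 1."""
    one_plus_S = [S[0] + 1] + S[1:]
    return exp_minus_1(mul(one_plus_S, log1p))

# heights h = 0, 1, 2, ..., starting from S(x,0) = x  (height convention assumed)
S0 = [F(0)] * (N + 1)
S0[1] = F(1)
heights = [S0]
for _ in range(H + N):
    heights.append(step(heights[-1]))
S_lim = heights[-1]      # coefficients up to x^N have stabilized here

# AS(x) = sum_h (-1)^h * D(x,h) + aeta(0)*S(x),  with D(x,h) = S(x,h) - S(x):
# for each index k the sum over h is finite, so the coefficients are exact.
AS = [sum((-1) ** h * (heights[h][k] - S_lim[k]) for h in range(H))
      + F(1, 2) * S_lim[k]
      for k in range(N + 1)]
```

The resulting list AS reproduces the rational table above through x^8; the constant term comes out as 0 and is set to 1/2 by hand, as noted.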

Check, for base x=1/2 I get AS(x) = 0.938253002500
Gottfried Helms, Kassel
#25
(10/31/2009, 02:40 PM)Gottfried Wrote: Check, for base x=1/2 I get AS(x) = 0.938253002500

Yes, with the average of the two series, I get AS(0.5) = 0.938253.

The way that I got the coefficients is slightly different from your method. I did this:
Let \( \mathbf{B} \) be a matrix defined by \( B_{jk} = \frac{1}{j!} \text{spow}_k^{(j)}(1) \), and let
\(
\begin{tabular}{rl}
f(x)
& = \sum_{k=1}^\infty f_k (x - 1)^k \\
& = \sum_{k=1}^\infty g_k ({}^{k}x) \\
F &= (f_0, f_1, f_2, ...)^T \\
G &= (g_0, g_1, g_2, ...)^T \\
\end{tabular}
\)
then
\( \mathbf{B}.F = G \)
so I thought, if we know G (1, -1, 1, -1, ...), then
\( F = \mathbf{B}^{-1}G \)
and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.
#26
(10/31/2009, 09:37 PM)andydude Wrote: \( F = \mathbf{B}^{-1}G \)
and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.

Hmm, I didn't catch the actual computation, but that is possibly not yet required.
I guess you need the technique of divergent summation; it may be that the implicit series which occur in the multiplication \( \mathbf{B}^{-1}G \) have alternating signs, converge poorly, or even diverge. So you could include the process of Euler summation.

I found a very nice method to get at least an overview of whether such a matrix product suffers from non-convergence (of the kind that is repairable by Cesàro/Euler summation).
I defined a diagonal matrix dE(o) of coefficients for Euler summation of order o, which can simply be included in the matrix product.
Write \( \mathbf{B}^{-1} * dE(o) * G \) with, say, o=1.0, o=1.5 or o=2. If o is too small, the implicit sums in the matrix multiplication begin to oscillate after some terms (the order is "too weak"); if o is too high, the oscillation is suppressed so heavily that the series does not yet converge within dim terms. In a matrix product of dimension dim there occur dim^2 such sums. While probably not all of these sums can be handled correctly by the same Euler vector, for some of them you will see a well-approximated result and a general smoothing, which makes the result matrix-size independent, especially after averaging between sizes dim and dim+1.

In my implementation order o=1 means no change, simply dE(1) = I; dE(1.7)..dE(2.0) can sum 1-1+1-1+... and the like, and dE(2.5)..dE(3.0) can sum divergence of the type 1-2+4-8+-... and so on.
(For a toy implementation in Pari/GP see below; I can send you some example scripts if that is more convenient.)

Gottfried


Code:
\\ Pari/GP
\\ a vector of length dim returning coefficients for Euler-summation of
\\ order "order" (E(1) gives the unit-vector: direct summation)
{E(order, dim=n) = local(Eu);    \\ dim defaults to the global n
Eu=vector(dim); Eu[1]=order^(dim-1);
for(k=2,dim, Eu[k]=Eu[k-1]-(order-1)^(dim-k+1)*binomial(dim-1,k-2));
Eu=Eu/order^(dim-1);
return(Eu);}

\\ returns this as a diagonal matrix
dE(order,dim=n) = return( matdiagonal(E(order,dim)) )
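For readers without Pari/GP, here is a transcription of E(order, dim) into Python with exact rationals; the weights are applied directly to the terms of a series rather than being packed into a diagonal matrix.

```python
from fractions import Fraction as F
from math import comb

def euler_weights(order, dim):
    """Weights for Euler summation of the given order, transcribed from
    the Pari/GP routine E(order, dim); order=1 gives plain summation."""
    order = F(order)
    Eu = [F(0)] * dim
    Eu[0] = order ** (dim - 1)
    for k in range(1, dim):
        Eu[k] = Eu[k - 1] - (order - 1) ** (dim - k) * comb(dim - 1, k - 1)
    return [w / order ** (dim - 1) for w in Eu]

def euler_sum(terms, order):
    """Euler-sum a list of series terms with the weights above."""
    return sum(w * t for w, t in zip(euler_weights(order, len(terms)), terms))
```

With these weights, order 2 sums 1 - 1 + 1 - 1 + ... to exactly 1/2 at any length, and order 3 sums the geometrically divergent 1 - 2 + 4 - 8 + ... to exactly 1/3 = 1/(1+2), matching the remark above that dE(2.5)..dE(3.0) handles that kind of growth.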
Gottfried Helms, Kassel
#27
(10/31/2009, 09:37 PM)andydude Wrote: The way that I got the coefficients is slightly different than your method. I did this:
Let \( \mathbf{B} \) be a matrix defined by \( B_{jk} = \frac{1}{j!} \text{spow}_k^{(j)}(1) \), and let
\(
\begin{tabular}{rl}
f(x)
& = \sum_{k=1}^\infty f_k (x - 1)^k \\
& = \sum_{k=1}^\infty g_k ({}^{k}x) \\
F &= (f_0, f_1, f_2, ...)^T \\
G &= (g_0, g_1, g_2, ...)^T \\
\end{tabular}
\)
then
\( \mathbf{B}.F = G \)
so I thought, if we know G (1, -1, 1, -1, ...), then
\( F = \mathbf{B}^{-1}G \)
and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.
Ah, now I understand: B is the matrix which transforms the f-coefficients into the g-coefficients; G is given and F is sought...
By the way, B is not triangular here: how do you get the correct entries for its inverse?

But whatever: I use this idea too, frequently.
However, in many instances in our context of exponentiation, and especially iterated exponentiation, I have found that the inverse of some matrix X represents highly divergent series, so that results which are systematically correct using the non-inverted matrix are not correct for the inverse problem using the (naive) inverse of X.
This is especially the case for a matrix X whose triangular LR-factors have the form of a q-binomial matrix.
Such LR-factors occur for a square matrix X = x_{r,c} = base^(r*c), or X = x_{r,c} = base^(r*c)/r!, or the like, if X is to be inverted via inversion of its triangular factors.
Such matrices X occur, for example, in the interpolation which I called "exponential polynomial interpolation" for the T-tetration (or sexp) Bell matrices. I used that matrix X also in the example for the "false interpolation for the logarithm" discussion. (But I could not yet find a workaround for the occurring inconsistencies with the inverse.)

Now I don't see the precise characteristics of your B matrix so far; I will have to actually construct one and look into it before I can say more. Let's see...

Gottfried
Gottfried Helms, Kassel
#28
(11/01/2009, 07:45 AM)Gottfried Wrote: B is not triangular here: how do you get the correct entries for its inverse, btw?

Right, B is not triangular, but \( B^{-1} \) is triangular.
#29
Oh wait... I was wrong:
\( F = BG \)
which means
\( B^{-1}F = G \)
sorry.
#30
If F is all zeroes except for a one somewhere, then that represents an \( x^n \) function. In general for integer n, the G's look like this

\(
\begin{tabular}{rl}
\frac{1}{x^4} &= 5 - 14x + 35({}^{2}x) - \frac{245}{3}({}^{3}x) + \frac{1957}{12}({}^{4}x) + \cdots \\
\frac{1}{x^3} &= 4 - 9x + 19({}^{2}x) - 39({}^{3}x) - \frac{817}{12}({}^{4}x) + \cdots \\
\frac{1}{x^2} &= 3 - 5x + \frac{17}{2}({}^{2}x) - 15({}^{3}x) - \frac{533}{24}({}^{4}x) + \cdots \\
\frac{1}{x} &= 2 - 2x + \frac{5}{2}({}^{2}x) - \frac{11}{3}({}^{3}x) + \frac{35}{8}({}^{4}x) + \cdots \\
1 &= 1({}^{0}x) + 0 \\
x &= 0 + 1({}^{1}x) \\
x^2 &= -1 + x + \frac{3}{2}({}^{2}x) - \frac{2}{3}({}^{3}x) - \frac{5}{24}({}^{4}x) + \cdots \\
x^3 &= -2 + \frac{7}{2}({}^{2}x) - \frac{41}{24}({}^{4}x) + \frac{37}{20}({}^{5}x) + \cdots \\
x^4 &= -3 - 2x + 5({}^{2}x) + 3({}^{3}x) - \frac{37}{12}({}^{4}x) + \cdots
\end{tabular}
\)

The first coefficients seem to follow a pattern, but this is just because \( g_0 = f(1) - f'(1) = 1 - n \).

Oh, and another weird thing: \( {}^{\infty}x = 0 + {}^{n}x \) when approximated in this way.

