Iterability of exp(x)-1
#31
bo198214 Wrote: That's something!
Can you explain your modified Euler summation?


A disclaimer first: the "modified" Euler summation which I'll describe here is not yet proven to give correct results for divergent summation. I currently use it only when it gives reasonable results on example problems with a comparable rate of divergence. I hope to put it on a better foundation some day.

-----------------------------------------------
A) basics of naive Euler-summation

The basic idea is simple: it implements the usual method of Euler summation as a matrix method, translated into matrix language from the description given, for instance, by K. Knopp.

Assume the (lower triangular) Pascal matrix P and the matrix product V(x)~ * P, where V(x) denotes the column vector [1, x, x^2, ...] and ~ the transpose; the first entry of this product implements the function f(x) = 1 + x + x^2 + ... = 1/(1-x)

The c'th column of P provides the coefficients of the c'th derivative of f(x) (up to a scaling), so the result is
Code:
´
(1)    V(x)~ * P = [f(x), x f'(x)/1!, x^2 f''(x)/2!, ...]    [update: the previous version was corrected by inserting the powers of x]
or, in a simpler representation,

Code:
´(2)
       1/4 V(1/4)~ * P = 1/3*V(1/3)~
       1/3 V(1/3)~ * P = 1/2*V(1/2)~
       1/2 V(1/2)~ * P =     V(1)~
if infinite size of vectors/matrices is assumed.
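These truncated identities are easy to check numerically. Below is a small Python sketch (my own illustration, not part of the original matrix toolkit); a truncation of N = 60 terms suffices because the entries of V(1/4)~ decay geometrically:

```python
from math import comb

N = 60  # truncation size; V(1/4)~ decays fast, so N = 60 is plenty

def V(x, n=N):
    # V(x) = [1, x, x^2, ...] as a plain list
    return [x**r for r in range(n)]

def row_times_P(row):
    # (row~ * P)[c] = sum_r row[r] * binomial(r, c), P the lower Pascal matrix
    n = len(row)
    return [sum(row[r] * comb(r, c) for r in range(n)) for c in range(n)]

# check the first identity of (2): 1/4 * V(1/4)~ * P = 1/3 * V(1/3)~
lhs = [v / 4 for v in row_times_P(V(1 / 4))]
rhs = [v / 3 for v in V(1 / 3)]
for c in range(6):
    assert abs(lhs[c] - rhs[c]) < 1e-9
```

The same check works for the other rows of (2); only the first few entries are compared, since truncation error grows with the column index.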


The last example may serve as the key idea.

If I have to sum a series with sum s, whose terms are given as a column vector A, then the sum of the series can be expressed by the vector/matrix product
Code:
´
(3)     V(1)~ * A = s
Now the key-idea is to expand V(1)~ into a matrix-product
Code:
´
        V(1)~ = 1/2 * V(1/2)~ * P
and apply this to the sum-formula (3) which becomes
Code:
´
            V(1)~      * A = s
    (1/2* V(1/2)~ * P) * A = s
if infinite size is assumed.

Now, using matrix-associativity, I change parentheses
Code:
´
    1/2* V(1/2)~ * (P * A) = s
and compute P*A first. This obviously doesn't change the result, since the matrix product (1/2 * V(1/2)~ * P) provides the summing vector V(1)~ (and is itself constructed of convergent sums), but the progression of partial sums of A is changed so that "I see" an approximation "earlier" - the convergence is said to be "accelerated".

If A itself contains the terms of a powerseries, then the effect can be dramatic, since
Code:
´
   P * V(x) = V(x+1)   // by binomial-theorem
and if A = V(-1)=[1,-1,1,-1,...] I get
Code:
´
   1/2*V(1/2)~ * P * V(-1) = 1/2*V(1/2)~ * V(1+ (-1))
                           = 1/2 V(1/2)~ * V(0)
                           = 1/2 V(1/2)~ * [1,0,0,....]
                           = 1/2
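In finite truncation this example is even exact: since P is lower triangular, each row r of P * V(-1) is the complete alternating binomial sum, which vanishes for r >= 1. A Python sketch of this (my illustration, truncated at 16 terms):

```python
from math import comb

N = 16

A = [(-1)**c for c in range(N)]   # V(-1): terms of 1 - 1 + 1 - 1 + ...
# P * V(-1): row r is the alternating binomial sum, which is 0 for r >= 1
PA = [sum(comb(r, c) * A[c] for c in range(r + 1)) for r in range(N)]
assert PA == [1] + [0] * (N - 1)
# 1/2 * V(1/2)~ * (P * A) then picks out the first entry: the Euler sum 1/2
s = 0.5 * sum(0.5**r * PA[r] for r in range(N))
assert abs(s - 0.5) < 1e-12
```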
From the sequence of examples (2) it is also obvious that this holds for powers of P:
Code:
´
  P^q * V(x) = V(q+x)  // by binomial-theorem
and always
Code:
´
  1/(1+q) * V(1/(1+q))~ * P^q = V(1)~
so the first type of Euler-summation of order q is
Code:
´
let    s = V(1)~ * A      // the assumed sum to be determined
then   s = (1/(1+q) * V(1/(1+q))~ * P^q) * A
          =  1/(1+q) * V(1/(1+q))~ * (P^q * A)
and depending on the character of A this is an acceleration of convergence; if A is of the form A = V(-q), the final result even occurs immediately, since P^q * V(-q) = V(0) = [1,0,0,0,...]
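For example, with A = V(-2) (the divergent series 1 - 2 + 4 - 8 + ..., whose Euler sum is 1/3) the order q = 2 transform terminates immediately. A Python sketch (my illustration), using the explicit entries (P^q)[r,c] = binomial(r,c) * q^(r-c):

```python
from math import comb

N, q = 16, 2

A = [(-2)**c for c in range(N)]   # V(-2): 1 - 2 + 4 - 8 + ...
# (P^q)[r,c] = binomial(r,c) * q^(r-c), so P^q * V(-2) = V(0)
PqA = [sum(comb(r, c) * q**(r - c) * A[c] for c in range(r + 1)) for r in range(N)]
assert PqA == [1] + [0] * (N - 1)
# s = 1/(1+q) * V(1/(1+q))~ * (P^q * A): the Euler sum of order q = 2
s = sum((1 / (1 + q))**(r + 1) * PqA[r] for r in range(N))
assert abs(s - 1 / 3) < 1e-12
```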

-------------------------------------------------------------------
B) the approach via partial sums

The more general approach uses the concept of convergence of partial sums: first the vector of partial sums B has to be computed from A; second, one checks whether the partial sums, or some transform of them, converge to a final value.

In matrix-notation we may use the triangular matrix DR
Code:
´
DR = 1 . . .
     1 1 . .
     1 1 1 .
     1 1 1 1 ...
     ...
and compute B = DR * A first, before any transformation/acceleration is applied to B. In divergent summation we now take the limit of the partial means of B (i.e. the means of the partial sums of A) as the value of s (if the sequence of partial means of B converges at all).
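The computation of B is just a running sum; a minimal Python sketch (my own notation):

```python
n = 6
A = [(-1)**j for j in range(n)]       # terms 1, -1, 1, -1, ...
# B = DR * A: the lower triangular matrix of ones yields the partial sums
B = [sum(A[:r + 1]) for r in range(n)]
assert B == [1, 0, 1, 0, 1, 0]
```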


(B.1) Hölder-means

The simplest approach is the Hölder mean, which computes the partial means of B in two steps
Code:
´
   partial-sums of B:  PS = DR * B
   partial-means of B: HM = dZ(1)* PS   // Hölder-means

where dZ(1) = diag(1/1,1/2, 1/3, 1/4,...)
  and dZ(x) = diag( (1/1)^x, (1/2)^x, (1/3)^x, ...)
Actually dZ() is selected here to provide rownorming, in the sense that the row multiplier scales each row to rowsum 1; or, using the function rn(matrix),
Code:
´
    rn(matrix) * V(1) = V(1)
The result HM is then a column vector whose entries show the progression of the partial means, and one can eyeball the convergence in the display on the screen.
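The rownorming itself can be illustrated directly (a Python sketch, my own check): rn divides each row by its rowsum, so rn(M) * V(1) = V(1) holds whenever the rowsums are nonzero.

```python
from math import comb

n = 8
P = [[comb(r, c) for c in range(n)] for r in range(n)]
# rn(P): divide each row by its rowsum (row r of the Pascal matrix sums to 2^r)
rnP = [[P[r][c] / sum(P[r]) for c in range(n)] for r in range(n)]
for r in range(n):
    assert abs(sum(rnP[r]) - 1.0) < 1e-12   # rn(P) * V(1) = V(1)
```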

This process of computing partial means of partial sums of B can be iterated to higher orders, just by repeating the matrix multiplication
Code:
´
    HM_1 = dZ(1)*DR * B
    HM_2 = dZ(1)*DR * HM_1
    HM_3 = dZ(1)*DR * HM_2
   ...
    HM(order) = (dZ(1)*DR)^order * B
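A Python sketch of one Hölder step for the series 1 - 1 + 1 - 1 + ... (my illustration): the running mean of the partial sums B = [1, 0, 1, 0, ...] tends to 1/2.

```python
N = 2000

A = [(-1)**j for j in range(N)]
# B = DR * A: partial sums 1, 0, 1, 0, ...
B, acc = [], 0
for a in A:
    acc += a
    B.append(acc)
# one Hölder step HM_1 = dZ(1) * DR * B: running means of B
HM, acc = [], 0
for r, b in enumerate(B):
    acc += b
    HM.append(acc / (r + 1))
assert abs(HM[-1] - 0.5) < 1e-9   # with N even the mean here is exactly 1/2
```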

(B.2) Euler-means

However, this method is not powerful enough to sum divergent series of geometric growth. Here, instead of the Hölder means, the Euler transform using powers of P is applied again.

If s = V(1)~ * A is assumed, then the Euler means are given in the result vector
Code:
´
    EM(B) = rn(P) * B = rn(P) * DR * A

Denote the r'th entry in the result
Code:
´
      s_r =  EM(B)[r]      // r: rowindex
then, in the limit, it is
Code:
´
    s = lim_(r->inf) s_r
    and lim_(r->inf) |s_(r-1) - s_r| = 0
This also gives an acceleration of convergence (visible at earlier r) for series with geometric growth, and again it can be done with powers of P, which thus implements orders of Euler summation.

Such orders of this type of Euler-summation are then simply
Code:
´
    EM(B,order) = rn(P^order) * B = rn(P^order) * DR * A
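A Python sketch of the Euler means (my illustration) for the geometrically growing series 1 - 2 + 4 - 8 + ...: the last entry of EM(B) is already very close to the Euler sum 1/3.

```python
from math import comb

N = 40

A = [(-2)**c for c in range(N)]          # 1 - 2 + 4 - 8 + ... (Euler sum 1/3)
B, acc = [], 0
for a in A:
    acc += a
    B.append(acc)                        # partial sums B = DR * A
# Euler means EM(B)[r] = (row r of rn(P)) . B; rowsum of Pascal row r is 2^r
EM = [sum(comb(r, c) * B[c] for c in range(r + 1)) / 2**r for r in range(N)]
assert abs(EM[-1] - 1 / 3) < 1e-9
```

The sums are done in exact integer arithmetic before the final division, so no precision is lost to the large cancellations.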


C) the variant of Euler-summation

But again, this is only sufficient for series of geometric growth; divergent series of hypergeometric growth exceed the power of the Euler-means method of any order, and leave us with an eventually diverging sequence of means in EM(B,order).


While investigating variants of the matrix P I came across the observation that log(P) = subdiag(1,[1,2,3,4,...]), and studied matrices Pk with log(Pk) = subdiag(1,[1^k,2^k,3^k,...]), naming them Pk-orders. (Using k=2 we get, btw., a scaling of the Laguerre matrix.)

It turns out that
Code:
´
  Pk = dF(k-1) * P * dF(1-k)
      where dF(m) = diag(0!, 1!, 2!, ...)^m
is simply a factorial similarity scaling of P (the exponent k-1 is what makes log(Pk) have the subdiagonal entries j^k; for k=2 this gives the Laguerre-type entries binomial(r,c)*r!/c!).
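This similarity can be checked numerically. A Python sketch (my own verification): the matrix exponential of the nilpotent subdiagonal matrix with entries j^k coincides with the scaled Pascal matrix dF(k-1) * P * dF(1-k), here with dF(m) = diag((j!)^m).

```python
from math import factorial, comb

n, k = 6, 2

# N: nilpotent subdiagonal matrix with entries 1^k, 2^k, ..., i.e. log(Pk)
N = [[float(r**k) if c == r - 1 else 0.0 for c in range(n)] for r in range(n)]

def matmul(X, Y):
    return [[sum(X[r][j] * Y[j][c] for j in range(n)) for c in range(n)]
            for r in range(n)]

# exp(N) via the (terminating) power series, since N is nilpotent
E = [[float(r == c) for c in range(n)] for r in range(n)]
term = [row[:] for row in E]
for m in range(1, n):
    term = matmul(term, N)
    for r in range(n):
        for c in range(n):
            E[r][c] += term[r][c] / factorial(m)

# factorial similarity scaling of the Pascal matrix: dF(k-1) * P * dF(1-k)
d = [factorial(j)**(k - 1) for j in range(n)]
Pk = [[comb(r, c) * d[r] / d[c] for c in range(n)] for r in range(n)]

for r in range(n):
    for c in range(n):
        assert abs(E[r][c] - Pk[r][c]) < 1e-9
```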

Using Pk instead of P in the Euler-means formula
Code:
´
    EM_k(B)       = rn(Pk) * B          ( = rn(Pk) * DR * A       )
    EM_k(B,order) = rn(Pk^order) * B    ( = rn(Pk^order) * DR * A )
seems to introduce more powerful acceleration than using (powers of) P only and is my current "variant of Euler-summation".
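As an illustration that rn(Pk) also works as a summing operator (a Python sketch, my own check, not the author's code): applying EM_k with k = 2 to the geometric series 1 - 2 + 4 - 8 + ... again reproduces the Euler sum 1/3. Here row r of Pk has the entries binomial(r,c) * (r!/c!)^(k-1), the scaling for which log(Pk) = subdiag(1,[1^k,2^k,...]).

```python
from math import comb, factorial

N, k = 40, 2

A = [(-2)**c for c in range(N)]              # 1 - 2 + 4 - 8 + ...
B, acc = [], 0
for a in A:
    acc += a
    B.append(acc)                            # partial sums B = DR * A

def Pk_row(r):
    # row r of Pk: binomial(r,c) * (r!/c!)^(k-1), kept as exact integers
    return [comb(r, c) * (factorial(r) // factorial(c))**(k - 1)
            for c in range(r + 1)]

row = Pk_row(N - 1)
EMk = sum(w * b for w, b in zip(row, B)) / sum(row)   # last Euler-k mean
assert abs(EMk - 1 / 3) < 1e-3
```

The tolerance is loose because the k = 2 weighting converges somewhat differently from the plain Euler means; it still settles clearly at 1/3.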

--------------------------------------------------

However, this summation is not regular. While in the first example of naive Euler-summation it is
Code:
´
  1/(1+q) * V(1/(1+q))~ * P^q * A  = V(1)~ * A
and thus a regular summing vector V(1)~ is provided, the same is not true for this variant. We get an exponential-vector instead since
Code:
´
  1/(1+q) * VE(1/(1+q))~ * Pk^q = VE(1)~
      
                 where     VE(x) = [1, x/1!, x^2/2!, ...]    for k=2; for higher k the factorials are to be replaced by their (k-1)'th powers
For no choice of VE(x) or V(x) on the lhs will the rhs equal the regular summing vector V(1)~.

The consequence is that the elements of A, translated via the partial sums in B, are not summed with equal weight but are additionally weighted, and this may introduce a distortion.


However, a distortion of this type is in principle already introduced by the Hölder-means method, and thus does not necessarily prevent a valid result.

The resulting summation vector, call it S_r~, appears in the r'th row of the matrix HM, which is computed by
Code:
´
    rn(DR)*B = rn(DR)*DR*A = (rn(DR)*DR) * A = HM * A
thus
    HM = dZ(1) * DR * DR
then
    S_r~ = HM[r,]    // [r,] indicates the r'th row
and S_r~ * A gives the current Hölder approximation using r terms.

Here S_r contains decreasing weights for the elements of A, which seems forbidden for a regular summation. K. Knopp mentions this problem explicitly, but then states that "in the limit case this effect loses its influence and can thus be neglected".


This Euler-sum variant, with its implicit factorial scaling, is reminiscent of Borel summation, which also introduces the exponential series VE(x) for averaging the partial sums with an implicit weighting of A, but assumes x -> inf, which I cannot implement.

Surprisingly, this method works fine for the summation of some usual hypergeometric series like 0! - 1! + 2! - 3! +- ..., or the series of Bernoulli numbers and others, if k is selected appropriately.

In the current case of f°(1/2)(x), where f°1(x) = exp(x)-1, the terms of the powerseries apparently grow like exp(j^2) (**) (where j indicates the index) after a minimum at a certain j. Thus I use Pk with k=1.4 to get the best result (which supposedly catches the hypergeometric effect of exp(j^2) (**)), and I get a continuously improving approximation of the partial sums up to the last displayed value s_96 (using 96 terms).

--------------------------

Well, it is high time to analyze the theory in more depth and to determine the conditions and range of applicability, but... it seems to be impossible to start a deeper discussion in the math groups to which I'm subscribed, so all of this remains home-brewed so far.

Gottfried

(**) [update] the earlier description was wrongly including a factorial which I removed here
Gottfried Helms, Kassel
#32
I should add a more focused remark.
Rereading K. Knopp's chapter on divergent summation, it is better to relate this to the Riesz method of generalized means (instead of referring to it as "modified Euler summation").
With a "summation of k'th order", the j'th terms b_j of B (the partial sums) are weighted with weights w_j such that

\( s = \lim_{n\to\infty} s_n , \qquad s_n = \frac{w_0 b_0 + w_1 b_1 + w_2 b_2 + \cdots + w_n b_n}{w_0 + w_1 + w_2 + \cdots + w_n} \)

where \( w_j = \frac{\binom{n}{j}}{(j!)^k} \) (whether this makes sense or not...)
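A quick Python check of this weighting (my own illustration): with k = 1 and the partial sums b_j = 1, 0, 1, 0, ... of the series 1 - 1 + 1 - 1 + ..., the weighted mean s_n settles very close to 1/2.

```python
from math import comb, factorial

n, k = 100, 1

b = [(j + 1) % 2 for j in range(n + 1)]            # partial sums 1, 0, 1, 0, ...
# Riesz-type weights w_j = binomial(n, j) / (j!)^k
w = [comb(n, j) / factorial(j)**k for j in range(n + 1)]
s_n = sum(wj * bj for wj, bj in zip(w, b)) / sum(w)
assert abs(s_n - 0.5) < 1e-6
```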

I hope this display makes it clearer.

Gottfried
Gottfried Helms, Kassel



