Tetra-series
#1
I have another strange, but again very basic, result for the alternating series of power towers of increasing height (I call it a tetra-series; see also my first conjecture in the thread "alternating tetra-series").

Assume a base "b", and then the alternating series

Code:
.    
  Sb(x) = x - b^x + b^b^x - b^b^b^x +... - ...

and for a single term, with h for the integer height (which may also be negative)
Code:
.
  Tb(x,h) = b^b^b^...^x     \\ b occurs h-times

which -if h is negative- actually means (where lb(x) = log(x)/log(b) )
Code:
.
  Tb(x,-h) = lb(lb(...(lb(x))...))   \\ lb occurs h-times
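
For concreteness, here is a small sketch of Tb(x,h) in Python (the function name `T` and the iteration style are my own choices, not from the post):

```python
import math

def T(b, x, h):
    """Tb(x,h): power tower b^b^...^x with integer height h.
    For h > 0 iterate x -> b^x; for h < 0 iterate x -> lb(x) = log(x)/log(b)."""
    for _ in range(abs(h)):
        x = b**x if h > 0 else math.log(x) / math.log(b)
    return x

# e.g. T(2, 1, 3) = 2^2^2^1 = 16, and T(2, 16, -3) undoes it: lb(lb(lb(16))) = 1
```

Negative heights simply apply the base-b logarithm h times, so T(b, ., h) and T(b, ., -h) are inverse to each other where defined.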

-------------------------------------------------------

My first result was that these series have "small" values and can be summed even if b > e^(1/e) (which is not possible with conventional summation methods). For the usual convergent case e^(-e) < b < e^(1/e) the results can be checked by Euler summation, and they agree perfectly with the results obtained by my matrix method (see image below).


Code:
matrix-notation
Sb(x) = (V(x)~ * (I - Bb + Bb^2 - Bb^3 + ... - ...)  )[,1]
       = (V(x)~ * (I + Bb)^-1 )   [,1]
       =  V(x)~ * Mb[,1]                \\ (at least) for all b>1
       = sum r=0..inf  x^r * mb[r,1]  

serial notation
       = sum h=0..inf  (-1)^h* Tb(x,h)  \\ only possible for e^(-e) < b < e^(1/e)
                                        \\ Euler-summation required
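
To illustrate the convergent case, here is a small Python sketch (all names are my own, not from the post) that sums the alternating series for b = sqrt(2) < e^(1/e) by repeatedly averaging adjacent partial sums, a simple stand-in for Euler summation:

```python
import math

def tower(b, x, h):
    """Tb(x,h) for h >= 0: b^b^...^x with h occurrences of b."""
    for _ in range(h):
        x = b**x
    return x

def euler_sum(terms):
    """Sum an alternating series by repeatedly averaging adjacent
    partial sums down to a single value (iterated mean transform)."""
    ps, s = [], 0.0
    for t in terms:
        s += t
        ps.append(s)
    while len(ps) > 1:
        ps = [(u + v) / 2 for u, v in zip(ps, ps[1:])]
    return ps[0]

b, x = math.sqrt(2), 0.5  # e^(-e) < b < e^(1/e), so the series is summable
terms = [(-1)**h * tower(b, x, h) for h in range(100)]
S = euler_sum(terms)      # approximates Sb(x)
```

The raw partial sums oscillate (the terms tend to the fixed point t with b^t = t), but the averaged value stabilizes to several digits and can be compared against the matrix-method result.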


-------------------------------------------------------


Now I extend the series Sb(x) to the left, using lb(x) = log(x)/log(b), the logarithm of x to base b, and define

Code:
.
   Rb(x) = x - lb(x) + lb(lb(x)) - lb(lb(lb(x))) +... - ...

This may be computed from the inverse of Bb, by a formula analogous to the one above for Mb:
Code:
.
   Lb = (I + Bb^-1)^-1

I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x
or, and this looks even stranger (but is even more basic),

Code:
.
  0 = ... lb(lb(x)) - lb(x) + x - b^x + b^b^x - ... + ...

x cannot assume the values 0 or 1, nor any integer power tower b^b^b..., since at a certain position we would then have a term lb(0), which introduces a singularity.


Using the Tb() notation for brevity, the result is

\( \hspace{24}
0 = \sum_{h=-\infty}^{+\infty} (-1)^h \, T_b(x,h)
\)

which should be very interesting for anyone dedicated to tetration...

Gottfried
-------------------------------------------------------

An older plot; I used AS(s) with x=1,s=b for Sb(x) there.
Gottfried Helms, Kassel
Reply
#2
Have you tried computing or plotting \( AS(x^{1/x}) \) or \( AS(x)^{1/AS(x)} \) yet? I would do this but I don't have any code yet for AS(x), and I'm lazy.

Andrew Robbins
Reply
#3
Gottfried Wrote:I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x

This is what I have the most trouble understanding. First, what does your [,1] notation mean? I understand "~" is transpose, and that Bb is the Bell matrix \( Bb = B_x[s^x] \). Second, what I can't see, or at least is not obvious to me, is why:

\( (I + Bb^{-1})^{-1} + (I + Bb)^{-1} = I \)

Is there any reason why this should be so? Can this be proven?

Wait, I just implemented it in Mathematica, and you're right! (as right as can be without a complete proof). Cool! This may just be the single most bizarre theorem in the theory of tetration and/or divergent series.

Andrew Robbins
Reply
#4
andydude Wrote:
Gottfried Wrote:I get for the sum of both by my matrix-method
Code:
.
  Sb(x) + Rb(x) = V(x)~ *Mb[,1] + V(x)~ * Lb[,1]
                = V(x)~ * (Mb + Lb)[,1]
                = V(x)~ *    I [,1]
                = V(x)~ *   [0,1,0,0,...]~  
                = x

  Sb(x) + Rb(x) = x

This is what I have the most trouble understanding. First, what does your [,1] notation mean? I understand "~" is transpose, and that Bb is the Bell matrix \( Bb = B_x[s^x] \). Second, what I can't see, or at least is not obvious to me, is why:

\( (I + Bb^{-1})^{-1} + (I + Bb)^{-1} = I \)

Is there any reason why this should be so? Can this be proven?

Wait, I just implemented it in Mathematica, and you're right! (as right as can be without a complete proof). Cool! This may just be the single most bizarre theorem in the theory of tetration and/or divergent series.

Andrew Robbins

Hi Andrew -
first: I appreciate your excitement! Yepp! :-)

second:
(The notation B[,1] refers to the second column of a matrix B; row and column indices begin at 0.)

Yes, I just posed the question whether (I+B)^-1 + (I+B^-1)^-1 = I in the sci.math newsgroup. But the proof for finite dimension is simple.

You need only factor out B or B^-1 in one of the expressions.
Say C = B^-1 for brevity
Code:
.
   (I + B)^-1 + (I + C)^-1  
 = (I + B)^-1 + (CB + C)^-1        \\ since I = C*B
 = (I + B)^-1 + (C*(B + I))^-1
 = (I + B)^-1 + (B + I)^-1 * C^-1
 = (I + B)^-1 + (B + I)^-1 * B     \\ since C^-1 = B
 = (I + B)^-1 * (I + B)            \\ factor out (I + B)^-1 on the left
 = I

As long as we deal with truncations of the infinite B, and these are well conditioned, we can see this identity in Pari or Mathematica to good approximation.
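
The finite-dimensional identity is also easy to check with exact arithmetic. A sketch in Python (the matrix helpers and the test matrix are my own, not from the post):

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_inv(A):
    """Exact matrix inverse by Gauss-Jordan elimination with Fractions."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]   # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [row[n:] for row in M]

# any matrix B for which B, I+B and I+B^-1 are all invertible
B = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
I = identity(3)
lhs = mat_add(mat_inv(mat_add(I, mat_inv(B))), mat_inv(mat_add(I, B)))
assert lhs == I  # (I + B^-1)^-1 + (I + B)^-1 = I, exactly
```

With Fractions the identity comes out exactly, not just to floating-point precision, matching the algebraic proof above.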

However, B^-1 in the infinite case is usually not defined, since it implies the inversion of the infinite Vandermonde matrix, which is not possible.

On the other hand, for infinite lower *triangular* matrices a reciprocal is defined.


The good news is that B can be factored into two triangular matrices, like

B = S2 * P~

where P is the Pascal matrix and S2 contains the Stirling numbers of the 2nd kind, similarity-scaled by factorials:

S2 = dF^-1 * Stirling2 * dF
(dF is the diagonal matrix of factorials, diag(0!,1!,2!,...) )

Then, formally, B^-1 can be written

B^-1 = P~^-1 *S2^-1 = P~^-1 * S1
(where S1 contains the Stirling numbers of the 1st kind, analogously rescaled by factorials, and S1 = S2^-1 even in the infinite case)

B^-1 cannot be computed explicitly, since the sums for all its entries (rows of P~^-1 times columns of S1) diverge, and thus it is not defined.

However, in the above formulae for finite matrices we may rewrite C in terms of its factors P and S1, deal with these decomposition factors only, and arrive at the desired result (I've not done this yet, pure laziness...)
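
For base b = e (so log(b) = 1; for general b an extra diagonal scaling by powers of log(b) enters, which I leave aside) the factorization B = S2 * P~ can be verified exactly on truncations. A Python sketch with exact rationals (all names are my own):

```python
from fractions import Fraction
from math import comb, factorial

def stirling2(r, k):
    """Stirling numbers of the second kind, via the standard recurrence."""
    if r == k == 0:
        return 1
    if r == 0 or k == 0:
        return 0
    return k * stirling2(r - 1, k) + stirling2(r - 1, k - 1)

n = 8  # truncation size
# Bell/Carleman matrix of x -> e^x:  B[r][c] = c^r / r!
B = [[Fraction(c**r, factorial(r)) for c in range(n)] for r in range(n)]
# factorially rescaled Stirling-2 matrix:  S2 = dF^-1 * Stirling2 * dF
S2 = [[Fraction(stirling2(r, k) * factorial(k), factorial(r)) for k in range(n)]
      for r in range(n)]
# transposed Pascal matrix:  P~[k][c] = binomial(c, k)
Pt = [[comb(c, k) for c in range(n)] for k in range(n)]

prod = [[sum(S2[r][k] * Pt[k][c] for k in range(n)) for c in range(n)]
        for r in range(n)]
assert prod == B  # S2 * P~ reproduces B exactly on the truncation
```

The truncated product is exact here because S2 is lower triangular, so no terms beyond the truncation contribute; entrywise this is the classical identity c^r = sum_k S(r,k) * k! * C(c,k).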

third:
This immediately suggests new proofs for some subjects I've already dealt with, namely all functions which are expressed by matrix operators and infinite series of these matrix operators.
For instance, I derived the ETA matrix (containing the values of the alternating zeta-function at negative exponents) from the matrix expression
Code:
.
ETA = P^0 - P^1 + P^2 - ... + ...
    = (I + P)^-1
If I add the similar expression for the inverses of P, I arrive at a new proof of the fact that each eta(-2k) must equal 0 for k>0.
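
These zeros can indeed be seen in the first column of (I + P)^-1, which carries (up to a sign convention I leave aside) the values eta(0), eta(-1), eta(-2), ... A sketch of my own, solving for that column by forward substitution:

```python
from fractions import Fraction
from math import comb

n = 11
# first column m of (I + P)^-1, where P[r][c] = C(r, c) is the lower
# triangular Pascal matrix: solve (I + P) m = e0 by forward substitution
m = []
for r in range(n):
    rhs = Fraction(1 if r == 0 else 0)
    rhs -= sum(Fraction(comb(r, c)) * m[c] for c in range(r))
    m.append(rhs / 2)  # diagonal entry of I + P is C(r, r) + 1 = 2

# entries reproduce eta(0) = 1/2, |eta(-1)| = 1/4, and the zeros eta(-2k) = 0
assert m[0] == Fraction(1, 2)
assert abs(m[1]) == Fraction(1, 4)
assert all(m[2 * k] == 0 for k in range(1, n // 2))
```

Computed exactly with Fractions, the entries at even index >= 2 vanish identically, matching eta(-2k) = 0 for k > 0.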

Yes- this is a very beautiful and far-reaching fact, I think ...

Gottfried
Gottfried Helms, Kassel
Reply
#5
hej gottfried,

I would sincerely like to understand more about these matrices; I have a feeling it's important, but I could not find in your texts what I is. I suppose it is the identity matrix, but what does it look like?

Best regards,

Ivars
Reply
#6
Ivars Wrote:hej gottfried,

I would sincerely like to understand more about these matrices; I have a feeling it's important, but I could not find in your texts what I is. I suppose it is the identity matrix, but what does it look like?

Best regards,

Ivars

Hi Ivars -
Just the matrix containing 1s on its diagonal (and 0s elsewhere). Multiplying by it doesn't change a matrix, just as multiplication by 1 does not change the multiplicand.

Gottfried
Gottfried Helms, Kassel
Reply
#7
Gottfried Wrote:Yes- this is a very beautiful and far-reaching fact, I think ...
I've just received an answer in the newsgroup sci.math from Prof. G. A. Edgar, who reports a numerical discrepancy between my matrix-based conjecture and termwise evaluation of the series.

I cannot resolve the problem completely. It doesn't affect the Mb-matrix related conjectures (also of earlier date), but only the representation of the alternating series of powers of the reciprocal of Bb by the analogous expression. I don't currently have an idea how to cure this and how to correctly adapt my conjecture. So - sigh - I have to retract it for the moment.

[update] I should mention that this concerns only the Bb-matrix, which is not simply invertible. The application of the idea of the formula to other matrix operators may still be valid; especially for triangular matrices like P the observation still holds. I assume it is also valid for the U-iteration x -> exp(x)-1, since its matrix operator is the triangular Stirling matrix. I'll check that today. [/update]

[update2] The problem occurs also with the U-iteration and its series of negative heights. Looks like the reciprocal matrix needs some more consideration [/update2]

Gottfried

P.s. I'll add the conversation here later as an attachment.

[update3] A graph which shows perfect match between serial and matrix-method-summation for the Tb-series; and periodic differences between the Rb-series [/update3]
Gottfried Helms, Kassel
Reply
#8
Ok, so the identity:
\( (I + Bb^{-1})^{-1} + (I + Bb)^{-1} = I \)
holds true for all matrices, not just the Bell matrix of exponentials.

Good to know.

Andrew Robbins
Reply
#9
andydude Wrote:Ok, so the identity:
\( (I + Bb^{-1})^{-1} + (I + Bb)^{-1} = I \)
holds true for all matrices, not just the Bell matrix of exponentials.

Good to know.

Andrew Robbins

Regularly invertible matrices (for instance of finite size); and I think that even infinite matrices can be included if they are triangular (row- or column-finite) or, if not triangular, at least if some other condition holds (on their eigenvalues or the like). I'll have to perform some more tests...
Gottfried Helms, Kassel
Reply
#10
Gottfried Wrote:You need only factor out B or B^-1 in one of the expressions.
Say C = B^-1 for brevity
Code:
.
   (I + B)^-1 + (I + C)^-1  
= (I + B)^-1 + (CB + C)^-1
= (I + B)^-1 + (C(B + I))^-1
= (I + B)^-1 + (B + I)^-1*C^-1
= (I + B)^-1 + (B + I)^-1*B
= (I + B)^-1 *(I + B)
= I

This completes the proof in my view. Good job.

Andrew Robbins
Reply

