Bold and Disappointing Experiment With Hermite Polynomials!!!
#1
Hi.

Long time no see.

I have recently come back to try another whack at tetration after such a long time away from the subject, this time going for the bold gold -- an explicit, analytic formula for the tetrational to base \( e \), that is, the Kneser tetrational.

I dusted off the Hermite polynomial "gadget" that was mentioned here:

with a long and wild and crazy idea to ... drumroll ... attempt to actually solve the continuum sum analytically to obtain an explicit series formula for the tetrational function. Unfortunately, the results from this latest endeavor look to be disappointing, but they also make one wonder about the nature of the continuum sum formula itself, and in particular, possible (non-)uniqueness considerations regarding the solution.

To recap: first off, for those who have not heard it, a poster who used to be here under the name "Ansus" suggested long ago that tetration could be expressed using the following weird formula:

\( \frac{\mathrm{tet}'(z)}{\mathrm{tet}'(0)} = \exp \left(\sum_{n=0}^{z-1} \mathrm{tet}(n)\right) \).

This is called "weird" because the right hand side has a sum whose upper bound may not be an integer. The key to using the formula, then, is to suitably generalize the summation operator so as to admit a non-integer upper bound, in a suitably "natural" way. The idea here is that it is easier to generalize summation than to generalize iterated functions, since summation is a neater, more well-behaved operation.

Now, this author fell in love with this formula immediately. He had experimented before with this kind of generalized sum, which he now calls the "continuum sum", and obtained promising results, so this looked like a very interesting method.

As a way to illustrate the possibilities, consider the well-known formula

\( \sum_{n=0}^{N-1} n = \frac{N(N-1)}{2} \).

Clearly, there is nothing stopping one from plugging a non-integer \( N \) into the right-hand side. This is a simple generalization. For example, if we take \( N = \frac{1}{2} \), we get \( \sum_{n=0}^{-1/2}\ n = \frac{\frac{1}{2}\left(\frac{1}{2} - 1\right)}{2} = -\frac{1}{8} \). One can even plug in complex numbers, e.g. \( N = i \) gives a "sum" of \( -\frac{1}{2} - \frac{1}{2}i \).
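This generalization is trivial to check numerically. A quick Python sketch (purely illustrative) evaluates the closed form at integer, fractional, and complex \( N \):

```python
# Continuum sum of n via the closed form S(N) = N(N-1)/2, which accepts any
# complex N, not just integers.
def csum_n(N):
    return N * (N - 1) / 2

print(csum_n(5))     # 10.0, matching 0 + 1 + 2 + 3 + 4
print(csum_n(0.5))   # -0.125
print(csum_n(1j))    # (-0.5-0.5j)
```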

One can do the same to generalize sums of powers \( n^2 \), \( n^3 \), and so forth -- one then obtains a classic result known as Faulhaber's formula. One might then be led to try and use this to find continuum sums of highly general functions by applying it to a power series expansion, like

\( f(z) = \sum_{n=0}^{\infty} a_n z^n \).

The trouble is, however, that if one does this, for many analytic functions the resulting continuum sums do not converge. In particular, if the analytic function has any singularities in the complex plane, or is entire but grows sufficiently fast (even growth far slower than tetration's), there is no convergence.

This led the author to play around with various "divergent summation" methods, also to not much avail, since many of them do not admit nice, combinatorial formulae for the series, at best allowing only tantalizing numerical results.

But recently, this method of Hermite polynomials was discovered. Instead of representing a function as a power series, it can be represented as a Hermite polynomial series:

\( f(z) = \sum_{n=0}^{\infty} a_n H_n(z) \)

(see: http://mathworld.wolfram.com/HermitePolynomial.html)


if it is suitably bounded on the real axis. Now tetration is unbounded and singular there, so that won't work. But if we take as hypothesis that the "right" tetrational should be the Kneser tetrational, then it stands to reason that it is bounded along the imaginary axis (the Kneser solution tends to the fixed points of \( \exp \) as \( \Im(z) \to \pm\infty \)), and the formula we would want to use would be

\( f(z) = \sum_{n=0}^{\infty} a_n H_n(-iz) \).
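As a quick concreteness check (a Python sketch using numpy's Hermite helpers; illustrative only), converting a power series to the Hermite basis and evaluating a Hermite series at \( -iz \) are both routine:

```python
import numpy as np
from numpy.polynomial.hermite import poly2herm, hermval

# x^2 in the (physicists') Hermite basis: since H_2(x) = 4x^2 - 2,
# x^2 = (1/2) H_0(x) + (1/4) H_2(x).
a = poly2herm([0, 0, 1])
print(a)                      # [0.5, 0, 0.25]

# Evaluating a Hermite series at -iz, as in the formula above, just needs a
# complex argument:
z = 1.0 + 0.5j
val = hermval(-1j * z, a)     # equals (-iz)^2 = -z^2
```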

Taking the results from the given link -- which I will not reproduce here for brevity -- we can define a continuum sum for this as

\( \sum_{n=0}^{z-1} f(n) = a_0 z + \sum_{k=1}^{\infty} \left(\sum_{n=1}^{\infty} a_n \binom{n}{k-1} \frac{B_{n-k+1}}{k} (-2i)^{n-k}\right) H_k(-iz) \)

where \( B_n \) are Bernoulli numbers.
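Since a continuum sum is only pinned down by its difference property \( S(z) - S(z-1) = f(z-1) \) (the formula above may normalize the constant term differently), a numerical sanity check should test that property rather than particular values. Here is a Python sketch of the formula for a finite coefficient vector, using the \( B_1 = -\tfrac{1}{2} \) Bernoulli convention (an assumption on my part):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Bernoulli numbers B_0..B_6, with the B_1 = -1/2 convention.
BERN = [1.0, -0.5, 1/6, 0.0, -1/30, 0.0, 1/42]

def csum(a, z):
    """Continuum sum of f(z) = sum_n a[n] H_n(-iz), per the formula above,
    for a finite coefficient list a."""
    N = len(a) - 1                       # highest Hermite index present
    c = np.zeros(N + 2, dtype=complex)   # output coefficients, k = 0..N+1
    for k in range(1, N + 2):
        for n in range(1, N + 1):
            if k - 1 <= n <= k - 1 + len(BERN) - 1:
                c[k] += (a[n] * math.comb(n, k - 1) * BERN[n - k + 1] / k
                         * (-2j) ** (n - k))
    return a[0] * z + hermval(-1j * z, c)

# Check the defining property S(z) - S(z-1) = f(z-1) at a complex point:
a = [0.0, 1.0, 0.5]
z = 0.7 + 0.3j
lhs = csum(a, z) - csum(a, z - 1)
rhs = hermval(-1j * (z - 1), np.array(a, dtype=complex))
```

For these coefficients the two sides agree to machine precision; the value of \( S(0) \), however, need not vanish, which is the usual constant ambiguity inherent in continuum sums.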

So now we return to the continuum sum tetration formula:

\( \frac{\mathrm{tet}'(z)}{\mathrm{tet}'(0)} = \exp \left(\sum_{n=0}^{z-1} \mathrm{tet}(n)\right) \)

The exponential is a big problem -- that's a nasty operation on a power series that makes the coefficients into Bell polynomials of the original coefficients, and I have no idea how to work it out for a Hermite series. So to avoid this difficulty we take logs:

\( \log\left(\frac{\mathrm{tet}'(z)}{\mathrm{tet}'(0)}\right) = \sum_{n=0}^{z-1} \mathrm{tet}(n) \).

\( \log(\mathrm{tet}'(z)) - \log(\mathrm{tet}'(0)) = \sum_{n=0}^{z-1} \mathrm{tet}(n) \).

Then differentiate once to clear the log:

\( \frac{\mathrm{tet}''(z)}{\mathrm{tet}'(z)} = \frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) \)

\( \mathrm{tet}''(z) = \mathrm{tet}'(z) \frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) \)

Now we have to work out what each side is in terms of unknown coefficients \( a_n \) for which

\( \mathrm{tet}(z) = \sum_{n=0}^{\infty} a_n H_n(-iz) \).

First, the continuum sum:

\( \sum_{n=0}^{z-1} \mathrm{tet}(n) = a_0 z + \sum_{k=1}^{\infty} \left(\sum_{n=1}^{\infty} a_n \binom{n}{k-1} \frac{B_{n-k+1}}{k} (-2i)^{n-k}\right) H_k(-iz) \).

Now we differentiate once. This is relatively simple -- the formula for the derivative of a Hermite polynomial is \( H'_n(z) = 2n H_{n-1}(z) \), so by the chain rule \( \frac{d}{dz} H_n(-iz) = -2in\, H_{n-1}(-iz) \), and we thus get

\(
\begin{align}
\frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) &= a_0 + \sum_{k=1}^{\infty} \left(\sum_{n=1}^{\infty} a_n \binom{n}{k-1} \frac{B_{n-k+1}}{k} (-2i)^{n-k}\right) (-2ik) H_{k-1}(-iz) \\
&= a_0 + \sum_{k=1}^{\infty} \left(\sum_{n=1}^{\infty} a_n \binom{n}{k-1} B_{n-k+1} (-2i)^{n-k+1}\right) H_{k-1}(-iz) \\
&= a_0 + \sum_{k=0}^{\infty} \left(\sum_{n=1}^{\infty} a_n \binom{n}{k} B_{n-k} (-2i)^{n-k}\right) H_k(-iz)
\end{align} \)

We now form the derivative of \( \mathrm{tet}(z) \) itself:

\( \mathrm{tet}'(z) = \sum_{n=1}^{\infty}\ a_n (-2in) H_{n-1}(-iz) = (-2i) \sum_{n=0}^{\infty}\ a_{n+1} (n+1) H_n(-iz) \).

While we're at it, we might as well go a step further and take the second derivative too:

\( \mathrm{tet}''(z) = (-4) \sum_{n=0}^{\infty} a_{n+2} (n+1)(n+2) H_n(-iz) \).
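These reindexed derivative series are easy to check numerically for a finite coefficient vector (a Python sketch with numpy; the coefficients below are arbitrary test values):

```python
import numpy as np
from numpy.polynomial.hermite import hermval

a = np.array([0.3, -0.2, 0.1, 0.05], dtype=complex)  # arbitrary sample a_n

def f(z):
    """f(z) = sum_n a_n H_n(-iz)."""
    return hermval(-1j * z, a)

# Reindexed first derivative: f'(z) = -2i sum_n (n+1) a_{n+1} H_n(-iz)
b = -2j * np.arange(1, len(a)) * a[1:]
# Reindexed second derivative: f''(z) = -4 sum_n (n+1)(n+2) a_{n+2} H_n(-iz)
c = -4 * np.arange(1, len(a) - 1) * np.arange(2, len(a)) * a[2:]

# Compare against central finite differences at a complex point:
z, h = 0.4 - 0.2j, 1e-4
fd1 = (f(z + h) - f(z - h)) / (2 * h)
fd2 = (f(z + h) - 2 * f(z) + f(z - h)) / h ** 2
```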

Now we need to do the Hermite series multiplication. This gets rather nasty, so to keep things manageable let us first define

\( C_k = (k+1) a_{k+1} \)

and

\( D_k = \sum_{n=1}^{\infty} a_n \binom{n}{k} B_{n-k} (-2i)^{n-k} \)

so that

\( \mathrm{tet}'(z) = (-2i) \sum_{k=0}^{\infty} C_k H_k(-iz) \)

and

\( \frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) = a_0 + \sum_{k=0}^{\infty} D_k H_k(-iz) \).

Now we multiply these two series together, to get

\(
\begin{align}
\mathrm{tet}'(z)\ \frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) &= \left[(-2i) \sum_{k=0}^{\infty} C_k H_k(-iz)\right] \left[a_0 + \sum_{k=0}^{\infty} D_k H_k(-iz)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty}\ C_k H_k(-iz)\right) + \left(\sum_{k=0}^{\infty} C_k H_k(-iz)\right)\left(\sum_{k=0}^{\infty} D_k H_k(-iz)\right)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty}\ C_k H_k(-iz)\right) + \left(\sum_{k=0}^{\infty} \sum_{n=0}^{\infty} C_k H_k(-iz) D_n H_n(-iz)\right)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty}\ C_k H_k(-iz)\right) + \left(\sum_{k=0}^{\infty} \sum_{n=0}^{\infty} C_k D_n H_k(-iz) H_n(-iz)\right)\right]
\end{align}
\)

To simplify this further, however, we find we need a formula for the product \( H_k(-iz) H_n(-iz) \) of two Hermite polynomials. This led to some searching, which led to finding this paper:

http://thesis.library.caltech.edu/1861/1...thesis.pdf

Down on page 17, the formula is given, which we translate here as

\( H_k(-iz) H_n(-iz) = \sum_{p=0}^{\min(k, n)} \left[\binom{k}{p} \binom{n}{p} \binom{k+n-2p}{k-p}\right]^{1/2} H_{k+n-2p}(-iz) \).
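One caution: the square-root form above presumably reflects the thesis's normalization of the Hermite functions. For the plain physicists' polynomials \( H_n \) (the MathWorld convention used elsewhere in this post), the standard linearization is \( H_k H_n = \sum_{p=0}^{\min(k,n)} 2^p p! \binom{k}{p}\binom{n}{p} H_{k+n-2p} \), which is easy to cross-check numerically (Python sketch, illustrative only):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermmul

def linearize(k, n):
    """Coefficients of H_k * H_n in the Hermite basis, via the standard
    linearization H_k H_n = sum_p 2^p p! C(k,p) C(n,p) H_{k+n-2p}."""
    out = np.zeros(k + n + 1)
    for p in range(min(k, n) + 1):
        out[k + n - 2 * p] = (2 ** p * math.factorial(p)
                              * math.comb(k, p) * math.comb(n, p))
    return out

# Cross-check against numpy's own Hermite-series multiplication:
k, n = 2, 3
ek = np.zeros(k + 1); ek[k] = 1.0     # coefficient vector of H_k
en = np.zeros(n + 1); en[n] = 1.0
print(linearize(k, n))                # H_2 H_3 = H_5 + 12 H_3 + 24 H_1
```

So the conventions in the thesis deserve a careful look before trusting numerics built on the translated formula.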

Thus, we plug that bad boy into the above equation to get

\(
\begin{align}
\mathrm{tet}'(z)\, \frac{d}{dz} \sum_{n=0}^{z-1} \mathrm{tet}(n) &= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty} C_k H_k(-iz)\right) + \left(\sum_{k=0}^{\infty} \sum_{n=0}^{\infty} C_k D_n \left[\sum_{p=0}^{\min(k, n)} \left[\binom{k}{p} \binom{n}{p} \binom{k+n-2p}{k-p}\right]^{1/2} H_{k+n-2p}(-iz)\right]\right)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty} C_k H_k(-iz)\right) + \left(\sum_{k=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\min(k, n)} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{k+n-2p}{k-p}\right]^{1/2} H_{k+n-2p}(-iz)\right)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty} C_k H_k(-iz)\right) + \left(\sum_{l=0}^{\infty} \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2} H_{l}(-iz)\right)\right] \\
&= (-2i) \left[a_0 \left(\sum_{k=0}^{\infty} C_k H_k(-iz)\right) + \left(\sum_{l=0}^{\infty} \left[\sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz)\right)\right] \\
&= (-2i) \left[\left(\sum_{k=0}^{\infty} a_0 C_k H_k(-iz)\right) + \left(\sum_{l=0}^{\infty} \left[\sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz)\right)\right] \\
&= (-2i) \sum_{l=0}^{\infty} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz)
\end{align}
\)

EEGZ! That was a lot of math. Good thing we've got TeX and cut/paste to write it with!

We now set the second-derivative expression equal to this:

\(
\begin{align}
(-4) \sum_{n=0}^{\infty} a_{n+2} (n+1)(n+2) H_n(-iz) &= (-2i) \sum_{l=0}^{\infty} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz)
\end{align}
\)

Then by manipulation, we get

\(
\begin{align}
\sum_{n=0}^{\infty} a_{n+2} (n+1)(n+2) H_n(-iz) &= \frac{i}{2} \sum_{l=0}^{\infty} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz) \\
&= \sum_{l=0}^{\infty} \frac{i}{2} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] H_{l}(-iz)
\end{align}
\)

We now equate coefficients to get

\(
\begin{align}
a_{l+2} (l+1)(l+2) &= \frac{i}{2} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] \\
a_{l+2} &= \frac{i}{2(l+1)(l+2)} \left[a_0 C_l + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} C_k D_n \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right]
\end{align}
\)

and plugging in the expressions for \( C_k \) and \( D_n \) gives

\(
\begin{align}
a_{l+2} &= \frac{i}{2(l+1)(l+2)} \left[a_0 (l+1) a_{l+1} + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} (k+1) a_{k+1} \left[\sum_{q=1}^{\infty} a_q \binom{q}{n} B_{q-n} (-2i)^{q-n}\right] \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right] \\
a_{l+2} &= \frac{i}{2(l+1)(l+2)} \left[a_0 (l+1) a_{l+1} + \sum_{\substack{k+n-2p = l \\ p \le \min(k,n)}} \left[\sum_{q=1}^{\infty} (k+1) a_{k+1} a_q \binom{q}{n} B_{q-n} (-2i)^{q-n}\right] \left[\binom{k}{p} \binom{n}{p} \binom{l}{k-p}\right]^{1/2}\right]
\end{align}
\)

... and there, our hope dies, pitifully.

This system is an infinite set of non-linear equations (note the quadratic terms \( a_{k+1} a_q \)) in the coefficients. A solution, if one exists, is likely to be highly non-unique, and there is no easy way to extract a closed form or even a recurrence relation for the \( a_l \). There may be a unique convergent solution, but I would have no idea how to determine whether that is the case (theoretically, there should be at least one, the Kneser tetrational, but is there more than one?). It might also be possible to impose some regularity condition on the \( a_k \), but again, I have no idea how to determine whether that would even work.
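Still, for the record, the truncated system can at least be evaluated mechanically. The following Python sketch (purely illustrative; the truncation size and starting values are arbitrary, and no claim of convergence is made) performs one update pass of the recurrence derived above:

```python
import math
import numpy as np

# Bernoulli numbers B_0..B_8 (B_1 = -1/2 convention); indices outside give 0.
BERN = [1.0, -0.5, 1/6, 0.0, -1/30, 0.0, 1/42, 0.0, -1/30]

def bern(m):
    return BERN[m] if 0 <= m < len(BERN) else 0.0

def step(a):
    """One pass of the truncated recurrence: recompute a[l+2] from a."""
    N = len(a)
    C = np.zeros(N, dtype=complex)
    C[:N - 1] = [(k + 1) * a[k + 1] for k in range(N - 1)]
    D = np.zeros(N, dtype=complex)
    for n in range(N):
        D[n] = sum(a[q] * math.comb(q, n) * bern(q - n) * (-2j) ** (q - n)
                   for q in range(max(n, 1), N))
    out = a.copy()
    for l in range(N - 2):
        s = a[0] * C[l]
        for k in range(N):
            for n in range(N):
                for p in range(min(k, n) + 1):
                    if k + n - 2 * p == l:
                        w = math.sqrt(math.comb(k, p) * math.comb(n, p)
                                      * math.comb(l, k - p))
                        s += C[k] * D[n] * w
        out[l + 2] = 1j * s / (2 * (l + 1) * (l + 2))
    return out
```

Whether iterating this map converges to anything, let alone to Kneser's coefficients, is exactly the open question raised above.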

One possible thought: I wonder whether taking the derivative somehow "loses information" about the solution. For an ordinary differential equation that lost information would just be a constant of integration, but with this continuum-sum thing in there...

Does anyone have any thoughts about all this mess that we just cooked up here?

(Although maybe I made an algebra mistake somewhere, which, given all the computation above, is certainly possible; but I doubt it would change the outcome so drastically as to make this solvable. The product \( C_k D_n \) is there for sure, and that is what wrecks everything.)
#2
I only have two ideas that might make all this a bit easier. One is a simple identity I never saw you use, which looked like it could help your equations out, especially when you take the logarithm and the second derivative.

\( \frac{d}{dz}\sum_{n=0}^{z-1} f(n) = \sum_{n=0}^{z-1}f'(n) \)

This follows because \( \Delta f(z) = f(z) - f(z-1) \) commutes with the derivative, and so its inverse must commute with the derivative as well. I use this idea frequently when solving indefinite sums. I work a lot with this operator, but mostly with functions of a nice exponential bound in the right half plane (something Kneser's tetration definitely is not).
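That the shift, and hence \( \Delta \), commutes with \( \frac{d}{dz} \) is easy to confirm numerically (Python sketch, illustrative only):

```python
import numpy as np

# Delta f(z) = f(z) - f(z-1) commutes with d/dz: for f = sin,
# (Delta f)'(z) = cos(z) - cos(z-1) = (Delta f')(z).
def delta(f):
    return lambda z: f(z) - f(z - 1)

def nderiv(f, z, h=1e-6):
    return (f(z + h) - f(z - h)) / (2 * h)   # central difference

z = 0.8
lhs = nderiv(delta(np.sin), z)   # derivative of the difference
rhs = delta(np.cos)(z)           # difference of the derivative
```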

The second point I have: a few years ago I worked out a way of representing continuum sums in a vertical strip of the complex plane, and normalized it so that it is better behaved. Essentially, if \( \phi(z) \) is holomorphic for \( 0 < \Re(z) < b \) and \( |\phi(z)| < Ce^{\tau|\Im(z)|} \) for some \( \tau < \pi/2 \), then

\( \sum_{j=1}^{z} \phi(j) =\frac{1}{\Gamma(z)}\int_0^\infty x^{z-1}e^{-x}\int_0^xe^{t} f(t)dtdx \)

where

\( f(x) = \frac{1}{2\pi i}\int_{\sigma - i \infty}^{\sigma+i\infty}\Gamma(z)\phi(z)x^{-z}\,dz \)

and

\( \phi(z) = \frac{1}{\Gamma(z)}\int_0^\infty f(x)x^{z-1}\,dx \)

Now \( \phi \) can be Kouznetsov's iteration method, because it tends to a constant as the imaginary argument grows. I'm not sure how Kneser's solution behaves in the complex plane, but maybe you can represent its continuum sum this way.
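This representation passes a simple sanity check with \( \phi(z) = 1 \): the Mellin pair gives \( f(x) = e^{-x} \), the inner integral \( \int_0^x e^t e^{-t}\,dt \) collapses to \( x \), and the outer integral should then return \( \sum_{j=1}^{z} 1 = z \). A Python sketch (naive trapezoidal quadrature, illustrative only):

```python
import math
import numpy as np

def csum_one(z, xmax=80.0, npts=400001):
    """Continuum sum of phi = 1 via the integral representation; the inner
    integral is exactly x for f(x) = e^{-x}, so one quadrature remains."""
    x = np.linspace(1e-12, xmax, npts)
    integrand = x ** (z - 1) * np.exp(-x) * x   # x^{z-1} e^{-x} * (inner = x)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return integral / math.gamma(z)

print(csum_one(2.5))   # close to 2.5, as expected
```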

This leads me to wonder whether, instead of continuum-summing the series term by term, we should try to solve a functional equation that \( f \) satisfies when \( \int_0^\infty f(x)x^{z-1}\,dx = \Gamma(z)\,\mathrm{tet}(z) \), using the fact that

\( \mathrm{tet}''(z) = \mathrm{tet}'(z) \sum_{j=0}^{z-1}\mathrm{tet}'(j) \)

I can't go into more detail at the moment, but I've always had a rough idea of how the equations might work out this way. I'll work more out on paper so that what I'm saying makes more sense; I'm preoccupied right now.

I'm floored that you got so far using Hermite polynomials, though. I would've shied away the moment the power series attempt failed.
#3
I'm a little dubious about that differentiation identity:

\( \frac{d}{dz} \sum_{n=0}^{z-1} f(n) = \sum_{n=0}^{z-1} f'(n) \).

Consider the very simple case \( f(z) = z^2 \). The derivative is \( f'(z) = 2z \). For this simple polynomial we can use Faulhaber's formula and that gives us

\( \sum_{n=0}^{z-1} f'(n) = \sum_{n=0}^{z-1} 2n = z(z-1) = z^2 - z \).

Integrating that, which should give \( \sum_{n=0}^{z-1} n^2 \), gives \( \frac{z^3}{3} - \frac{z^2}{2} \), yet \( \sum_{n=0}^{z-1} n^2 = \frac{z^3}{3} - \frac{z^2}{2} + \frac{z}{6} \), and these differ by a non-constant amount. Likewise, differentiating the latter expression for the sum gives \( \frac{d}{dz} \sum_{n=0}^{z-1}  n^2 = z^2 - z + \frac{1}{6} \ne z^2 - z \).
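The computation is quick to confirm with polynomial arithmetic (Python sketch with numpy, illustrative only):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Faulhaber: sum_{n=0}^{z-1} n^2 = z/6 - z^2/2 + z^3/3
S = Polynomial([0, 1/6, -1/2, 1/3])
# sum_{n=0}^{z-1} 2n = z^2 - z
T = Polynomial([0, -1, 1])

dS = S.deriv()        # coefficients of z^2 - z + 1/6
gap = dS - T          # the discrepancy: the constant 1/6
```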

But looking at this, the derivative of the sum differs only by a constant shift, so perhaps what we should really say is

\( \frac{d}{dz} \sum_z f(z) = \sum_z \frac{d}{dz} f(z) \)

up to a constant (which, once integrated, contributes a linear term), where \( \sum_z \) denotes the indefinite continuum sum rather than the definite one.
#4
(01/19/2017, 03:14 AM)mike3 Wrote: I'm a little dubious about that differentiation identity: [...] perhaps what we should really say is \( \frac{d}{dz} \sum_z f(z) = \sum_z \frac{d}{dz} f(z) \) up to a constant.
Yes, yes, I should've been more explicit; I was being too brief. I usually just write it as \( (\sum_z f(z))' = \sum_z f'(z) + C \) and drop the \( C \) because the solution still works. Using your notation made me forget about that little \( C \); nonetheless, as you can see, it still satisfies the difference equation, which was the point I was making. It does, at any rate, give a much simpler form of your equation

\( \mathrm{tet}''(z) = \mathrm{tet}'(z)\left(\sum_z \mathrm{tet}'(z) + C\right) \)

granted we know what \( C \) is.

Plus, when I work with this operator I tend to use the exponential indefinite sum: if

\( \sum_z = \sum_{j=-\infty}^z \)

then the constant is zero.

\( \frac{d}{dz}\sum_z = \sum_z\frac{d}{dz} \)

This is just like how if

\( \int = \int_{-\infty}^z \) then

\( \Delta \int = \int \Delta \)

where there is no constant error. Of course, this definite sum does not really work in this case; it's rather restrictive.
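For a function with suitable decay to the left, the exponential indefinite sum and its commuting with the derivative can be checked directly (Python sketch; the example \( f(z) = e^{z/2} \) and the truncation are my own choices):

```python
import numpy as np

def expsum(f, z, terms=200):
    """Truncation of sum_{j=-infty}^{z} f(j) = sum_{j=0}^{inf} f(z - j)."""
    return sum(f(z - j) for j in range(terms))

f = lambda z: np.exp(z / 2)       # closed form: expsum = e^{z/2}/(1 - e^{-1/2})
fp = lambda z: 0.5 * np.exp(z / 2)

z, h = 0.3, 1e-6
lhs = (expsum(f, z + h) - expsum(f, z - h)) / (2 * h)  # d/dz of the sum
rhs = expsum(fp, z)                                     # sum of the derivative
```

Here the two sides agree with no constant discrepancy, exactly as claimed.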

