Tetration Forum
regular slog - Printable Version

+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://tetrationforum.org/forumdisplay.php?fid=3)
+--- Thread: regular slog (/showthread.php?tid=70)

Pages: 1 2


RE: regular slog - bo198214 - 03/25/2009

Ansus Wrote:What is \( a \) in this formula?

\( a \) is the fixed point: \( b^a=a \).
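As a quick numerical illustration (my own sketch, not from the thread): for \( 1<b<e^{1/e} \) the lower fixed point is attracting, so it can be found simply by iterating \( x\mapsto b^x \); for \( b=\sqrt{2} \) this converges to \( a=2 \).

```python
import math

b = math.sqrt(2)

# For 1 < b < e^(1/e) the lower fixed point a (with b^a = a) is
# attracting, since |f'(a)| = a*ln(b) < 1, so plain iteration converges.
x = 1.0
for _ in range(200):
    x = b ** x

# x is now approximately 2, the fixed point of sqrt(2)^x
```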


RE: regular slog - bo198214 - 03/25/2009

Ansus Wrote:Is this correct:

\(
\text{alpha}[b,x]=\text{Log}\left[\text{Log}\left[-\frac{\text{ProductLog}[-\text{Log}[b]]}{\text{Log}[b]}\right],-\frac{\text{ProductLog}[-\text{Log}[b]]}{\text{Log}[b]}-\text{PowerTower}[b,n,x]\right]-n
\)

I am only familiar with Maple and Sage, so I cannot help you with this. However, in Maple the formula \( \frac{W(-\ln(b))}{-\ln(b)} \) works.


RE: regular slog - bo198214 - 03/26/2009

Ansus Wrote:\(
a=-\frac{W(-\ln b)}{\ln b}
\)

Maybe you have to specify the proper branch. (But as I said, I cannot test it, because I don't have Mathematica available.)


RE: regular slog - bo198214 - 03/26/2009

Ansus Wrote:Anyway, with any value of a I cannot get anything close to what is expected.

What shall I say? It worked for me.
For base \( \sqrt{2} \) the fixed point is \( 2 \); that's why this base is so preferred: you don't need to compute the fixed point separately.


RE: regular slog - bo198214 - 03/27/2009

Ansus Wrote:Great! Now it works, but only for a limited range of bases. In particular, it works for the base \( \sqrt{2} \). I used this formula:


\( \operatorname{slog}_b(x)=\lim_{n\to\infty} \frac{\ln \left(\frac{\frac{W(-\ln b )}{\ln b}+\exp_b^{[n]}(x)}{\frac{W(-\ln b)}{\ln b}+\exp_b^{[n]}(1)}\right)}{\ln \ln \left(\frac{W(-\ln b)}{-\ln b}\right)} \)

I've added this formula to our wiki page:
http://en.wikipedia.org/wiki/Talk:Tetration/Summary#Evaluation_methods

:-)
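The limit can be checked numerically. In the sketch below (my own check, not from the thread) I use the fixed point \( a=2 \) for base \( \sqrt{2} \) directly, so that \( \frac{W(-\ln b)}{\ln b}=-a \) and the denominator becomes \( \ln\ln a=\ln\ln 2 \):

```python
import math

b = math.sqrt(2)
a = 2.0                    # fixed point of b^x, so W(-ln b)/ln b = -a

def slog(x, n=50):
    """Approximate slog_b(x) by the limit formula with n iterations."""
    ex, e1 = x, 1.0
    for _ in range(n):     # exp_b applied n times to x and to 1
        ex = b ** ex
        e1 = b ** e1
    # ln((exp^n(x) - a) / (exp^n(1) - a)) / ln(ln(a)); both differences
    # are negative for x < a, so the ratio is positive
    return math.log((ex - a) / (e1 - a)) / math.log(math.log(a))
```

By construction \( \operatorname{slog}_b(1)=0 \); one also gets \( \operatorname{slog}_b(\sqrt{2})\approx 1 \) and \( \operatorname{slog}_b(0)\approx -1 \), as expected.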


RE: powerseries of regular slog - andydude - 04/02/2009

bo198214 Wrote:The Abel function has also a singularity at 0.
Just realized, this is only if the fixed point is 0.

bo198214 Wrote:\( FS=cS \)
This should be \( SF=cS \), which means you can't simplify the matrix like you did. The formula you give is a matrix representation of \( \sigma(f(x)) = \sigma(cx) \) if those are Bell matrices, or \( f(\sigma(x)) = c\sigma(x) \) if those are Carleman matrices.

Andrew Robbins


RE: powerseries of regular slog - bo198214 - 04/02/2009

andydude Wrote:
bo198214 Wrote:The Abel function has also a singularity at 0.
Just realized, this is only if the fixed point is 0.
Otherwise it is at the fixed point. The regular iteration theory always assumes the fixed point to be at 0. If it is not, one just considers the function \( f(x+a)-a \), where \( a \) is the fixed point.

Quote:
bo198214 Wrote:\( FS=cS \)
This should be \( SF=cS \),
Actually, that's also wrong. However, it is only an intermediate error in my derivation.
Let's show the correct equations:
\( \sigma(f(x))=c\sigma(x) \) or, with \( \mu_c(x)=cx \):
\( \sigma\circ f = \mu_c \circ \sigma \)
if we take the Bell matrices:
\( FS=SM \)
where \( M \) is the Bell matrix of \( \mu_c \). This is the diagonal matrix:
\( M=\begin{pmatrix}
c &0 & 0 &\dots &0\\
0 & c^2 & 0&\dots& 0\\
&&\vdots&\\
0 & &\dots& 0 & c^n\\
\end{pmatrix} \)
I think Gottfried calls this the Vandermonde matrix.

Right multiplication by this matrix multiplies the \( k \)-th column with \( c^k \). If we restrict \( S \) to its first column \( \vec{\sigma} \), we hence get:
\( F\vec{\sigma} = c\vec{\sigma} \)
and this can then be transformed to:
\( (F-cI)\vec{\sigma}=0 \)
which I used for my further derivations.
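As a concrete sketch of this eigenvector equation (my own toy example, not from the thread): for \( f(x)=x/2+x^2/4 \) with multiplier \( c=1/2 \), build the truncated matrix \( B \) with entries \( B[j][k]=[x^j]\,f(x)^k \) (indexed so that \( B\vec{\sigma} \) holds the coefficients of \( \sigma\circ f \)), and solve \( (B-cI)\vec{\sigma}=0 \) row by row using its triangular structure, normalizing \( \sigma_1=1 \).

```python
def pmul(p, q, N):
    """Multiply two truncated power series given as coefficient lists."""
    r = [0.0] * (N + 1)
    for i, pa in enumerate(p):
        if pa:
            for j, qb in enumerate(q[:N + 1 - i]):
                r[i + j] += pa * qb
    return r

N = 8
f = [0.0, 0.5, 0.25] + [0.0] * (N - 2)   # toy map f(x) = x/2 + x^2/4
s = f[1]                                  # multiplier c = f'(0) = 1/2

# B[j][k] = coefficient of x^j in f(x)^k, so that (B * sigma)_j is
# the j-th coefficient of sigma(f(x)); triangular since f(0) = 0.
B = [[0.0] * (N + 1) for _ in range(N + 1)]
p = [1.0] + [0.0] * N                     # f^0 = 1
for k in range(N + 1):
    for j in range(N + 1):
        B[j][k] = p[j]
    p = pmul(p, f, N)

# Solve (B - s I) sigma = 0 row by row: B[j][j] = s^j, and for j >= 2
# the diagonal entry s^j differs from s, so sigma_j is determined.
sigma = [0.0] * (N + 1)
sigma[1] = 1.0
for j in range(2, N + 1):
    acc = sum(B[j][k] * sigma[k] for k in range(1, j))
    sigma[j] = acc / (s - B[j][j])

def ev(c, x):                             # evaluate a truncated series
    return sum(ck * x ** k for k, ck in enumerate(c))

x = 0.05
# Schroeder equation sigma(f(x)) = s * sigma(x), up to truncation error
residual = ev(sigma, ev(f, x)) - s * ev(sigma, x)
```

For this map the first nontrivial coefficient comes out as \( \sigma_2 = 1 \), and the residual of the Schroeder equation at small x is of the order of the discarded terms.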


RE: regular slog - Gottfried - 07/29/2009

(10/07/2007, 10:30 PM)bo198214 Wrote: Now there is the so-called principal Schroeder function \( \sigma_f \) of a function \( f \) with fixed point 0 and slope \( s:=f'(0) \), \( 0<s<1 \), given by:

\( \sigma_f(x) = \lim_{n\to\infty} \frac{f^{\circ n}(x)}{s^n} \)

In particular, this function yields the regular iteration at 0, via \( f^{\circ t}(x)=\sigma^{-1}(s^t\sigma(x)) \).

Sometimes a thing needs a whole life to be recognized...

In the matrix method I dealt with the eigen-decomposition of the (triangular) Bell matrix U_t of dxp_t(), satisfying the relation

\( \hspace{24} U_t = W * D * W^{-1} \)

While the recursion to compute W and W^-1 efficiently is easy and works well, I did not have a deeper idea about the structure of the columns of W. Now I found that it just agrees with the above formula:

\( \hspace{24} W = \lim_{h\to\infty} {U_t}^h * diag({U_t}^h)^{-1} \)

which is exactly the above formula. We can even write this explicitly: if we refer to the second column of U_t^h as F°h and to the second column of W as S, and set s = F[1], so that F°h[1] = F[1]^h = s^h, then we have

\( \hspace{24} S = \lim_{h\to\infty} \frac {F^{\circ h}}{s^h} \)

Something *very* stupid ... <sigh>
But, well, now this detail is explained for me too.
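Both formulas from the quoted post, the Koenigs limit \( \sigma(x)=\lim_{n\to\infty} f^{\circ n}(x)/s^n \) and the regular iterate \( f^{\circ t}(x)=\sigma^{-1}(s^t\sigma(x)) \), can be checked numerically. A minimal sketch with a toy map \( f(x)=x/2+x^2/4 \) (my own example; fixed point 0, slope \( s=1/2 \)), inverting \( \sigma \) by bisection:

```python
def f(x):
    # toy map with fixed point 0 and multiplier s = f'(0) = 1/2
    return x / 2 + x * x / 4

S = 0.5

def sigma(x, n=40):
    # principal Schroeder function as the Koenigs limit f^n(x) / s^n
    for _ in range(n):
        x = f(x)
    return x / S ** n

def sigma_inv(y, lo=-0.5, hi=0.5):
    # sigma is strictly increasing on [-0.5, 0.5], so bisection works
    for _ in range(100):
        mid = (lo + hi) / 2
        if sigma(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def f_iter(x, t):
    # regular iterate: f^t(x) = sigma^{-1}(s^t * sigma(x))
    return sigma_inv(S ** t * sigma(x))
```

The half-iterate then composes back to f: f_iter(f_iter(x, 0.5), 0.5) agrees with f(x) up to numerical error, and f_iter(x, 1) recovers f itself.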

<Hmmm, I don't know why the forum software merges my two replies (to two previous posts of Henryk) into one. So here is the second post.>




(04/02/2009, 02:31 PM)bo198214 Wrote: This is the diagonal matrix:
\( M=\begin{pmatrix}
c &0 & 0 &\dots &0\\
0 & c^2 & 0&\dots& 0\\
&&\vdots&\\
0 & &\dots& 0 & c^n\\
\end{pmatrix} \)
I think Gottfried calls this the Vandermonde matrix.
Not exactly. I call the Vandermonde *matrix* VZ (and ZV = VZ~) the *collection* of consecutive Vandermonde vectors V(x):

    VZ = [V(0), V(1), V(2), V(3), ...] \\ Vandermonde matrix

Your M is just c*dV(c) in my notation: the Vandermonde vector V(c) used as a diagonal matrix (and since the first entry of M is not c^0, I noted the additional factor c).
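In code, the relation can be sketched like this (a minimal illustration of the notation, with \( V(c)=(1,c,c^2,\dots) \) as defined above): \( M=c\cdot dV(c) \) then has the diagonal \( (c, c^2, c^3, \dots) \).

```python
c, n = 0.5, 5

# Vandermonde vector V(c) = (c^0, c^1, ..., c^n)
V = [c ** k for k in range(n + 1)]

# dV(c): V(c) placed on the diagonal of an otherwise zero matrix
dV = [[V[i] if i == j else 0.0 for j in range(n + 1)] for i in range(n + 1)]

# M = c * dV(c): diagonal entries c^1, c^2, ..., c^(n+1)
M = [[c * dV[i][j] for j in range(n + 1)] for i in range(n + 1)]
```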

Gottfried


RE: regular slog - bo198214 - 07/31/2009

(07/29/2009, 11:07 AM)Gottfried Wrote: Hmmm I don't know why the forum software merges my two replies (to two previous posts of Henryk) into one

I have hopefully switched this behaviour off now.