regular slog bo198214 Administrator Posts: 1,624 Threads: 103 Joined: Aug 2007 10/07/2007, 10:30 PM (This post was last modified: 10/07/2007, 10:46 PM by bo198214.) Let us determine the regular super logarithm $\text{rslog}_b$ of $b^x$ at a fixed point $a$; this works for $b>1$ as long as we choose a branch of the involved logarithm such that $\log_b^{\circ n}\to a$. For computing the regular super logarithm, however, we face a major problem with repelling fixed points: we cannot directly compute $\alpha_{b,a}({^nb})$, as $\log_b^{\circ n+2}({^nb})=-\infty$. This presents a problem because for the rslog we have to compute $\alpha_{b,a}(1)$ to normalize the values. The good news, however, is that $\lim_{x\to {^nb}} \alpha_{b,a}(x)$ seems always to exist. So the regular super logarithm is then: $\text{rslog}_{b,a}(x)=\alpha_{b,a}(x)-\lim_{\xi\to 1}\alpha_{b,a}(\xi)$ for $x\neq {^nb},n\in\mathbb{N}_0$, and $\text{rslog}_{b,a}(x)=\lim_{\xi\to x}\alpha_{b,a}(\xi)-\lim_{\xi\to 1}\alpha_{b,a}(\xi)$ otherwise. Following the idea of Jay to add the regular iterations at conjugate fixed points (and my idea to divide by 2 to get an Abel function again), let us consider $\alpha_{b,a}^\ast(x)=\frac{\alpha_{b,a}(x)+\alpha_{b,\overline{a}}(x)}{2}$ where $a$ is a fixed point in the upper halfplane. Proposition: $\alpha_{b,\overline{a}}(x)=\overline{\alpha_{b,a}(x)}$ for $x\in\mathbb{R}$, $x\neq {^nb}$. Particularly this implies that $\alpha_{b,a}^\ast(x)=\Re(\alpha_{b,a}(x))=\Re(\alpha_{b,\overline{a}}(x))$. Note that we define $\alpha_{b,a}^\ast$ merely on the real axis, because this is the intersection of the domains of definition of $\alpha_{b,a}$ (upper halfplane) and $\alpha_{b,\overline{a}}$ (lower halfplane). Proof: The first question that arises is: which branch of the logarithm converges to $\overline{a}$? While the usual logarithm is defined to yield imaginary parts $-\pi<\Im(\log(z))\le\pi$, let $\log^\ast$ denote the branch with $-\pi\le\Im(\log^\ast(z))<\pi$ (the two differ only on the negative real axis; write $z=x+iy=re^{i\varphi}$). We first verify that $\log(\overline{z})=\overline{\log^\ast(z)}$ and hence $\log(\overline{z})+2\pi i k=\overline{\log^\ast(z)-2\pi i k}$.
\begin{align*} \log(\overline{z})&=\log(\overline{x+iy})=\log(x-iy)\\ &=\log(r(\cos(\varphi)-i\sin(\varphi))) & \text{let } -\pi< -\varphi\le\pi\\ &=\ln( r)+\log(\cos(-\varphi)+i\sin(-\varphi)) & -\pi\le \varphi<\pi\\ &=\ln( r)+\log(e^{-i\varphi})=\ln( r)-i\varphi=\overline{\ln( r)+i\varphi}=\overline{\log^\ast(x+iy)}=\overline{\log^\ast(z)}.\end{align*} A further consequence is that $\log_{\overline{c}}(\overline{z})=\overline{\log_c(z)}$. The rest is then easily established; let $a$ be the $(k+1)$-th fixed point in the upper half plane: $\alpha_{b,\overline{a}}(x)=\lim_{n\to\infty} n-\log_{1/\log(\overline{a})}\left(\overline{a}-\left(\log^\ast_b-\frac{2\pi i k}{\ln(b)}\right)^{\circ n}(x)\right)=\overline{\lim_{n\to\infty} n-\log_{1/\log(a)}\left(a-\left(\log_b+\frac{2\pi i k}{\ln(b)}\right)^{\circ n}(x)\right)}=\overline{\alpha_{b,a}(x)}$. andydude Long Time Fellow Posts: 510 Threads: 44 Joined: Aug 2007 10/20/2007, 06:02 PM Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? And if so, where have you shown this? Andrew Robbins bo198214 Administrator Posts: 1,624 Threads: 103 Joined: Aug 2007 11/02/2007, 07:12 PM andydude Wrote:Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? And if so, where have you shown this? In [1] Szekeres defines $f(x)$ being regular (in the case $0<c<1$) in terms of its Schroeder function $\sigma$, which satisfies $\sigma(f(x))=c\sigma(x)$ with $c=f'(0)$ ($c>0$, $c\neq 1$). Let us write this with the Bell matrix ($m$-th row contains the coefficients of the $m$-th power of the function), $S$ for $\sigma$ and $F$ for $f$.
As $f$ and $\sigma$ don't have a constant (0th) coefficient, the matrix is correspondingly stripped: $FS=cS$, i.e. $(F-cI)S=0$. We only need to consider the first row of $S$, which is $\vec{\sigma}$: $(F-cI)\vec{\sigma}=0$. E.g., truncation to 4: $\begin{pmatrix} c-c & 0 & 0 & 0\\ f_2 & c^2-c & 0 & 0\\ f_3 & {f^2}_3 & c^3-c & 0\\ f_4 & {f^2}_4 & {f^3}_4 & c^4-c \end{pmatrix} \begin{pmatrix} \sigma_1\\\sigma_2\\\sigma_3\\\sigma_4 \end{pmatrix} =\begin{pmatrix}0\\0\\0\\0\end{pmatrix}$ We see that the first row is 0 and needs to be chopped; this gives freedom up to a multiplicative constant for $\sigma$ (which is anyway known for Schroeder functions), and we decide to choose $\sigma_1=\pm 1$ depending on whether $c>1$ or $c<1$ and on which side we approach the fixed point from. This then leads to the equation with the matrix $F'$, which is $F$ with the first row and column removed: $F'(\sigma_2,\sigma_3,\dots)^T=\mp(f_2,f_3,\dots)^T$, e.g. $\begin{pmatrix} c^2-c & 0 & 0\\ {f^2}_3 & c^3-c & 0\\ {f^2}_4 & {f^3}_4 & c^4-c \end{pmatrix} \begin{pmatrix} \sigma_2\\\sigma_3\\\sigma_4 \end{pmatrix} =-\begin{pmatrix}f_2\\f_3\\f_4\end{pmatrix}$ However, we don't need an equation solver for this system, because we chopped off the first row and column, and not the last row and the first column as in Andrew's slog; we can solve it by hand: $\sigma_{k} = \left(\pm f_k + \sum_{i=2}^{k-1} {f^i}_k \sigma_i\right)/\left(c-c^k\right)$. Also, the solution of this equation system does not depend on the truncation size, as it does with the slog. But of course this advantage is put into perspective by the need for a fixed point. So we have a formula for the power series of the regular Schroeder function. The regular Abel function is then just $\alpha_f(x)=\log_c(\sigma_f(x))$. Let us apply this to $b^x$.
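Before specializing, the forward substitution above can be sanity-checked numerically. Here is a minimal Python sketch (the helper names `power_rows`, `regular_schroeder`, `ev` are ours, not from the thread): it builds the rows of the stripped Bell matrix by truncated polynomial multiplication, computes the $\sigma_k$ via the recursion with $\sigma_1=+1$, and verifies the Schroeder equation $\sigma(f(x))=c\,\sigma(x)$ for a sample series.

```python
import numpy as np

def power_rows(f, K):
    """Rows of the stripped Bell matrix: row i holds the coefficients
    of f(x)^i up to order K, for f = [f_1, ..., f_m] with no constant term."""
    fc = np.zeros(K + 1)
    fc[1:len(f) + 1] = f
    rows = [np.zeros(K + 1)]
    rows[0][0] = 1.0                      # f^0 = 1
    for _ in range(K):
        rows.append(np.convolve(rows[-1], fc)[:K + 1])
    return rows

def regular_schroeder(f, K, sigma1=1.0):
    """Forward substitution for
    sigma_k = (sigma1*f_k + sum_{i=2}^{k-1} (f^i)_k sigma_i) / (c - c^k)."""
    c = f[0]                              # c = f'(0); assumed c > 0, c != 1
    rows = power_rows(f, K)
    sigma = np.zeros(K + 1)
    sigma[1] = sigma1
    for k in range(2, K + 1):
        s = sigma1 * rows[1][k] + sum(rows[i][k] * sigma[i] for i in range(2, k))
        sigma[k] = s / (c - c ** k)
    return sigma

def ev(p, x):
    """Evaluate a truncated power series given by its coefficient list."""
    return sum(p[k] * x ** k for k in range(len(p)))

# sample: f(x) = x/2 + x^2/4, so c = 1/2; check sigma(f(x)) = c*sigma(x)
sig = regular_schroeder([0.5, 0.25], 20)
x = 0.1
assert abs(ev(sig, 0.5 * x + 0.25 * x ** 2) - 0.5 * ev(sig, x)) < 1e-10
```

Note that, exactly as in the text, no linear solver is involved: the triangular shape of $F'$ makes each $\sigma_k$ available directly from the previous ones.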
First we have to move the fixed point $a$ to 0 by conjugation: $f(x)=b^{x+a}-a=ab^x-a=ae^{x\ln(b)}-a$. $f$ has the coefficients $f_k = a\frac{\ln(b)^k}{k!}$ for $k\ge 1$, and $f_0=0$. Its powers are $f^n(x)=a^n \sum_{m=0}^n (-1)^{n-m}\binom{n}{m} e^{x\ln(b)m}$, with coefficients ${f^n}_k = a^n \sum_{m=0}^n (-1)^{n-m}\binom{n}{m}\frac{\ln(b)^k m^k}{k!}=a^n\frac{\ln(b)^k}{k!}\sum_{m=0}^n (-1)^{n-m}\binom{n}{m}m^k$. So $\sigma_k = \frac{\ln(b)^k}{k!\left(c-c^k\right)}\left(a + \sum_{n=2}^{k-1} a^n \sigma_n\sum_{m=0}^n (-1)^{n-m}\binom{n}{m}m^k\right)$ where $c=f'(0)=a\ln(b)b^0=\ln(b^a)=\ln(a)$. So the Abel function of $f=\tau_a^{-1}\circ\exp_b\circ\tau_a$ is $\alpha_f(x)=\log_{\ln(a)} (\sigma(x))$, and so the Abel function of $b^x$ is $\alpha(x)=\log_{\ln(a)}(\sigma(x-a))$. However, it seems as if the convergence radius of $\sigma$ is just $a$. So one cannot use this formula alone to plot, for example, the regular Abel function of $\sqrt{2}^x$ on the range $-1$ to $1.9$; it does not converge at $0$. A numeric comparison with the natural slog will hopefully follow later.
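As an illustration (not the promised comparison with the natural slog), the series can be checked numerically inside its region of convergence. For $b=\sqrt{2}$ the fixed point $a=2$ is real, so everything stays real for $0<x<4$. The Python sketch below computes the $\sigma_k$ from the formula above (with $\sigma_1=1$; the truncation order $K=28$ is our choice), forms $\alpha(x)=\log_{\ln a}(\sigma(x-a))$, and checks both the Abel equation and the agreement with the regular-iteration limit $\alpha(x)=\lim_{n\to\infty}\log_c(\exp_b^{\circ n}(x)-a)-n$ at the attracting fixed point.

```python
from math import comb, factorial, log

b = 2 ** 0.5
a = 2.0                        # real fixed point: b**a == a
c = a * log(b)                 # multiplier c = f'(0) = ln(a) = ln 2
K = 28                         # truncation order (our choice)

# sigma_k = ln(b)^k / (k! (c - c^k)) * sum_{n=1}^{k-1} a^n sigma_n S(n,k),
# where S(n,k) = sum_{m=0}^n (-1)^(n-m) C(n,m) m^k; S(1,k) = 1 yields the lone 'a' term
sigma = [0.0, 1.0]
for k in range(2, K + 1):
    s = sum(a ** n * sigma[n]
            * sum((-1) ** (n - m) * comb(n, m) * m ** k for m in range(n + 1))
            for n in range(1, k))
    sigma.append(log(b) ** k / (factorial(k) * (c - c ** k)) * s)

def alpha(x):
    """Regular Abel function of b**x via the series, for |x - a| < a."""
    z = x - a
    return log(sum(sigma[k] * z ** k for k in range(1, K + 1))) / log(c)

def alpha_lim(x, n=50):
    """Same Abel function via the regular-iteration limit at the
    attracting fixed point: log_c(exp_b^(n)(x) - a) - n."""
    z = x
    for _ in range(n):
        z = b ** z
    return log(z - a) / log(c) - n

x = 2.5
abel_defect = alpha(b ** x) - alpha(x)   # should be close to 1
```

For $x$ below $a$ the argument of the outer logarithm becomes negative and $\alpha$ turns complex, which is why the check is done at $x=2.5$; the divergence of the series at $x=0$ is exactly the convergence-radius limitation noted above.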

