Posts: 20
Threads: 3
Joined: Feb 2014
05/08/2014, 07:39 PM
(This post was last modified: 05/08/2014, 08:11 PM by hixidom.)
Ahhh. Thank you. That makes a lot of sense: I assume b is a constant and then solve for b as a function of x! I am pretty dense.
Anyway, here's something else I found that may be of use.
Let's say that \( f(x)=e^{g(x)} \).
Then by definition
\( f(f(x))=e^{g(e^{g(x)})}=e^x\\\Rightarrow g(e^{g(x)})=x\\\Rightarrow e^{g(x)}=g^{-1}(x) \)
Now substitute g(x) for x,
\( e^{g(g(x))}=g^{-1}(g(x))=x\\\Rightarrow g(g(x))=\ln(x) \)
So, in conclusion, the half-iterate of e^x is the exponential of the half-iterate of ln(x).
Not sure if that is useful but I think it's kinda neat. Hopefully I didn't make another mistake, but it's always possible.
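For completeness, here is a quick check of the converse (my own addition, not in the original post): if \( g(g(x))=\ln(x) \) and we set \( f(x)=e^{g(x)} \), then
\( g(g(x))=\ln(x)\Rightarrow g(x)=g^{-1}(\ln(x))\Rightarrow g(e^{g(x)})=g^{-1}\big(\ln(e^{g(x)})\big)=g^{-1}(g(x))=x\Rightarrow f(f(x))=e^{g(e^{g(x)})}=e^x, \)
so exponentiating a half-iterate of ln(x) really does give a half-iterate of e^x.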
Posts: 93
Threads: 30
Joined: Aug 2016
01/07/2017, 11:00 PM
(This post was last modified: 01/07/2017, 11:02 PM by Xorter. Edit Reason: wrong tex)
Hello!
I found an interesting site about the Carleman matrix: https://en.m.wikipedia.org/wiki/Carleman_matrix
\( M[f]_{j,k} = \frac{1}{k!}\left[\frac{d^k}{dx^k}\, f(x)^j \right]_{x=0} \)
\( f(x) = \sum_{k=0}^{\infty} M[f]_{1,k}\, x^k \)
And the most important property for this case:
\( M[f \circ g] = M[f]\, M[g] \)
So these matrices convert composition to matrix multiplication.
Thus
\( M[f^{\circ N}] = M[f]^N \)
Therefore
\( f^{\circ 0.5}(x) = \sum_{k=0}^{\infty} \left(\sqrt{M[f]}\right)_{1,k} x^k \)
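As an aside (my own sketch, not from the original post), the composition property above can be checked directly in Pari/GP on truncated Carleman matrices; the test maps f and g below are arbitrary choices with f(0)=g(0)=0, for which the truncated identity is exact:
Code: \\ check M[f o g] = M[f]*M[g] on truncated Carleman matrices
dim = 6
f = 2*x + x^2;        \\ arbitrary test map fixing 0
g = 3*x + x^3;        \\ arbitrary test map fixing 0
Carleman(h, n) = matrix(n, n, r, c, polcoeff(h^(r-1), c-1));  \\ row r = coefficients of h^(r-1)
fog = subst(f, x, g);  \\ the composition f(g(x))
Carleman(fog, dim) - Carleman(f, dim)*Carleman(g, dim)  \\ should be the zero matrix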
So we get M[exp(x)]; that was the easy part. We need to take the square root of this matrix, and I found a program for it: http://calculator.vhex.net/calculator/li...quare-root
And I got another matrix, which satisfies \( \left(\sqrt{M[\exp(x)]}\right)_{1,k} = [0.606, 0.606, 0.303, \ldots]^T \)
So the function is:
\( \exp^{\circ 0.5}(x) \approx 0.606 + 0.606x + 0.303x^2 + 0.101x^3 \)
But it is not the half-iterate of exp(x). Could you help me see why not, please? What was my mistake?
Xorter Unizo
Posts: 900
Threads: 130
Joined: Aug 2007
01/08/2017, 01:31 AM
(This post was last modified: 01/15/2017, 10:14 PM by Gottfried. Edit Reason: added information)
(01/07/2017, 11:00 PM)Xorter Wrote: Hello!
I found an interesting site about the Carleman matrix: https://en.m.wikipedia.org/wiki/Carleman_matrix
(...)
So these matrices convert composition to matrix multiplication.
Thus
\( M[f^{\circ N}] = M[f]^N \)
Therefore
\( f^{\circ 0.5}(x) = \sum_{k=0}^{\infty} \left(\sqrt{M[f]}\right)_{1,k} x^k \)
So we get M[exp(x)]; that was the easy part. We need to take the square root of this matrix, and I found a program for it: http://calculator.vhex.net/calculator/li...quare-root
And I got another matrix, which satisfies \( \left(\sqrt{M[\exp(x)]}\right)_{1,k} = [0.606, 0.606, 0.303, \ldots]^T \)
So the function is:
\( \exp^{\circ 0.5}(x) \approx 0.606 + 0.606x + 0.303x^2 + 0.101x^3 \)
But it is not the half-iterate of exp(x). Could you help me see why not, please? What was my mistake?
Well, using the truncated series of the exp(x) function up to 16 terms (the Carleman matrix size), and using my own Pari/GP routine for the matrix square root with arbitrary numerical precision (here 200 decimal digits for the internal computation), I get the following truncated series approximation:
\( \exp^{[0.5]}(x) \approx 0.498568472273 + 0.876337510066*x + 0.247418943917*x^2 + 0.0248068936680*x^3 - 0.00112303037149*x^4 \\
+ 0.000361451686885*x^5 + 0.0000337024986252*x^6 - 0.0000517784266699*x^7+ 0.0000259224188256*x^8 - 0.00000189354770473*x^9 \\
- 0.00000360748613972*x^{10} + 0.00000411482178000*x^{11} - 0.00000216221756598*x^{12} + 0.000000558540558241*x^{13} \\
- 0.0000000635173773180*x^{14} + 0.00000000192054352361*x^{15}+ O(x^{16})
\)
This gives, for \( 0 \le x \lt 1 \), eight correct digits when applying it twice (and should approximate Sheldon's Kneser implementation).
The reason your function is so badly misshapen might be that the matrix is too small (did you take only size 4x4?) and/or that the matrix square-root computation is not optimal.
To cross-check: one simple approach to the matrix square root is the Newton iteration.
Let M be the original Carleman matrix and let N denote its approximate square root.
initialize ...
\( N=Id \qquad \qquad \text{ /* Identity matrix of some finite dimension dim */} \)
iterate ...
\( N = (M * N^{-1} + N)/2 \).
until convergence.
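For concreteness, here is a minimal Pari/GP sketch of that iteration packaged as a function (the name matsqrtNewton and the fixed step count are my own choices, not from this post):
Code: matsqrtNewton(M, steps) = {
  my(N = matid(matsize(M)[1]));                 \\ start from the identity matrix
  for(i = 1, steps, N = (M * N^-1 + N) / 2);    \\ Newton step for the matrix square root
  N
}
\\ usage: N = matsqrtNewton(1.0*M, 8); then check that M - N*N is near zero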
Unfortunately, the matrix N will not be of Carleman type unless M were of infinite size; nitpicking, this means the function \( \exp^{0.5}(x) \) with coefficients taken from the second row (or, in my version, column) is not really well suited for iteration. (But this problem has not yet been discussed systematically here in the forum, to the best of my knowledge.)
Gottfried
Gottfried Helms, Kassel
Posts: 93
Threads: 30
Joined: Aug 2016
(01/08/2017, 01:31 AM)Gottfried Wrote: The reason your function is so badly misshapen might be that the matrix is too small (did you take only size 4x4?) and/or that the matrix square-root computation is not optimal.
(...)
Did I take size 4x4? No, of course not. It was 20x20, later 84x84.
I found and recognised my mistake: I had generated the wrong Carleman matrix instead of \( M[\exp(x)]_{i,j} = i^j/j! \).
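For what it is worth, here is a one-line Pari/GP sketch of that corrected matrix (my own illustration; note the 1-based GP indices versus the 0-based i, j in the formula, and that Gottfried's code below uses the transposed convention):
Code: dim = 20
Mexp = matrix(dim, dim, r, c, (r-1)^(c-1)/(c-1)!)  \\ entry with i = r-1, j = c-1 equals i^j/j!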
Now I have regenerated the matrix and I get approximately the same solution.
It works, yippee!
Thank you very much.
Could you tell me what you wrote in Pari to calculate it, please? I am not so good at Pari code.
Xorter Unizo
Posts: 900
Threads: 130
Joined: Aug 2007
01/09/2017, 02:41 AM
(This post was last modified: 01/09/2017, 03:10 AM by Gottfried.)
Here is the Pari/GP code:
Code: default(realprecision,200) \\ increase internal precision to 800 digits or higher when the matrix size is more than, say, 64
default(format,"g0.12") \\ only 12 digits for display of float numbers
dim=16 \\ increase later, when everything works, but stay less than, say, 128
M = matrix(dim,dim,r,c,(c-1)^(r-1)/(r-1)!) \\ Carleman matrix, transposed: series coefficients along a column!
M=1.0*M \\ it is better to have float values in M, otherwise the number of digits in N explodes over the iterations
N = matid(dim)
N = (M * N^-1 + N) / 2
N = (M * N^-1 + N) / 2
N = (M * N^-1 + N) / 2
/* ... do this a couple of times to get convergence; careful: not too often, to avoid numerical errors/overflow */
/* note: N's expected property of being Carleman-type will be heavily distorted. */
M - N*N \\ check for sanity, the difference should be near zero
/* define the function; */
exp05(x) = sum(k=1,dim, x^(k-1) * N[k,2]) \\ coefficient of x^(k-1) is N[k,2]; only for x in the interval of good convergence (0<=x<1)
/* try it; six to eight digits might be correct when dim is at least 32x32 */
x0 = 0
x05 = exp05(x0) \\ this should be the half-iterate about 0.498692160537
x1 = exp05(x05) \\ this should be the full iterate and equal exp(x0)=1 and is about 1.00012482606
x1 - exp(x0) \\ check error
Example: with dim=8, after 8 iterations I got for N:
Code: N=
1.00000000000 0.498692160537 0.248258284527 0.123313067961 0.0613783517169 0.0309705773951 0.0161518156415 0.00900178983873
0 0.876328584414 0.875668009082 0.651057846300 0.427494354197 0.262472285853 0.155983031832 0.0925625176728
0 0.246718723415 1.01708995680 1.34271949385 1.26030722116 0.991600974872 0.698202964055 0.456331269934
0 0.0248938874134 0.453724180460 1.35543926159 2.03890540259 2.20219490711 1.93455428276 1.45660923676
0 -0.000559114024252 0.101207623371 0.716292108758 1.94548221867 3.15424069454 3.69849457008 3.40094743897
0 0.000132927042876 0.0119615040464 0.219667721568 1.11333664588 2.97146364527 5.09318298067 6.20122030134
0 0.0000114543791108 0.00113904404837 0.0431682924811 0.396012087313 1.78179566161 5.01623097009 9.19538628629
0 -0.00000540376918712 0.000111767358395 0.00661697871499 0.0933915529553 0.665946396103 3.15521910471 11.3987254320
Of course, to make this more flexible for varying fractional powers of M you will need diagonalization - but then the required "realprecision" becomes exorbitant for dim=32 and more. For reference, I call this the "polynomial method": because the matrix is of finite size, this is a polynomial approximation, and no attempt is made to produce N in a way that essentially maintains the structure of a Carleman matrix when fractional powers are computed. If that is wanted, one needs the conjugacy with the complex fixpoint before the diagonalization, and the generation of a power series with complex coefficients, to obtain the famous Schröder function from the eigenvector matrices. After that, Sheldon has a method to go back to a real-to-real solution after H. Kneser (which seems to be, possibly, the limit of the above construction as the matrix size goes to infinity).
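A hedged sketch of that diagonalization route (my own illustration, not Gottfried's code; it assumes the truncated matrix is numerically diagonalizable and uses GP's mateigen):
Code: fracpow(M, t) = {
  my(E = mateigen(1.0*M, 1), L = E[1], V = E[2]);   \\ eigenvalues L and eigenvector matrix V
  V * matdiagonal(vector(#L, i, L[i]^t)) * V^-1      \\ M^t = V * D^t * V^(-1)
}
\\ usage: Nt = fracpow(M, 0.5); complex eigenvalues can leave a tiny imaginary rounding residue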
Gottfried
Gottfried Helms, Kassel