Funny pictures of sexps
#1
I recently compared the complex values of the matrix power super-exponential (msexp) and the intuitive super-exponential (isexp), i.e. the inverse of the intuitive slog.
This graphical method is well suited to depicting power series, because power series do not converge beyond their radius of convergence.
I call these conformal polar plots: they show how a mesh of circles and rays is mapped under the power series/function. Images of circles are green, images of rays are red.
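For illustration, here is a minimal sketch of such a conformal polar plot computation (the exponential series stands in for the actual msexp/isexp series; the function name and mesh sizes are made up):

```python
import math
import numpy as np

def polar_mesh_images(coeffs, rmax=1.5, n_circles=10, n_rays=16, n_pts=200):
    """Map a polar mesh of circles and rays through a truncated power series
    developed at 0; the plot then shows the images of these curves."""
    poly = np.polynomial.Polynomial(coeffs)
    theta = np.linspace(0.0, 2.0 * np.pi, n_pts)   # parameter along a circle
    r = np.linspace(0.0, rmax, n_pts)              # parameter along a ray
    circles = np.array([poly(rad * np.exp(1j * theta))
                        for rad in np.linspace(rmax / n_circles, rmax, n_circles)])
    rays = np.array([poly(r * np.exp(1j * ang))
                     for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)])
    return circles, rays

# illustration with the exponential series (not one of the sexp series):
coeffs = [1.0 / math.factorial(n) for n in range(25)]
circles, rays = polar_mesh_images(coeffs)
# one would then plot circles in green and rays in red, e.g. with matplotlib:
# for c in circles: plt.plot(c.real, c.imag, color="green")
```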

The following two pictures show the images of circles and rays inside a radius of 1.5 around the development point 0 under the corresponding sexp to base e. Both are made from 50x50 Carleman matrices.

msexp
[image]
isexp
[image]

It's clear that they "feel unpleasant" near the fixed point.
The isexp seems to start crimping near the fixed point; does that mean it is not injective there?
Unfortunately, everything can also be explained by the limited precision.
Indeed, both methods seem to converge very slowly, which may make it impossible to decide the question numerically.

We can continue the sexp to the left and right by exponentiating and taking logarithms, but we cannot do so in the direction of the imaginary axis; the power series development is thereby limited. In the above case of development point 0, the radius of convergence should be 2 (because there is a singularity at -2 and hopefully nowhere else). However, only the msexp can be pushed to nearly 2, while the isexp starts to oscillate wildly well before that.
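The left/right continuation can be sketched as a short recursion via sexp(x+1) = exp(sexp(x)) and sexp(x-1) = log(sexp(x)); the linear `base` on [-1,0] below is only a placeholder, not one of the actual sexp solutions:

```python
import math

def sexp(x, base=lambda t: t + 1.0):
    """Continue sexp left/right of a base interval via the functional
    equations sexp(x+1) = exp(sexp(x)) and sexp(x-1) = log(sexp(x)).
    `base` is a placeholder approximation of sexp on [-1, 0]."""
    if x > 0:
        return math.exp(sexp(x - 1, base))   # step right by exponentiating
    if x < -1:
        return math.log(sexp(x + 1, base))   # step left by taking logarithms
    return base(x)
```

Regardless of the choice of `base`, this gives sexp(0) = 1 and sexp(1) = e, but no such trick extends the series in the imaginary direction.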

Here is how the msexp extends to radius 1.9:
[image]

PS: If you add the complex conjugate half, the pictures look like strange pears; that's why I called them funny!
#2
So that made me curious about isexp_e at 0 (from islog_e developed at 1), and I computed it with a 100x100 instead of a 50x50 Carleman matrix. However, the result did not get better but rather worse:
isexp_e 50x50:
[image]
isexp_e 100x100:
[image]

These are ordinary conformal plots that show the image of the rectangle [-1,1]x[0,1.40] under isexp_e.
Blue lines: constant imaginary part; orange lines: constant real part.
The black dot is the fixed point of exp.

So is this an indication of some more singularities of the islog/isexp?
#3
(08/25/2009, 03:59 PM)bo198214 Wrote: So that made me curious about isexp_e at 0 (from islog_e developed at 1) and I computed it with 100 x100 instead of 50x50 Carleman matrix.
...
So is this an indication of some more singularities of the islog/isexp?
Bo, just to refresh my memory, how are you calculating the isexp_e?
~ Jay Daniel Fox
#4
(08/25/2009, 04:24 PM)jaydfox Wrote: Bo, just to refresh my memory, how are you calculating the isexp_e?

I compute the power series of Andrew's slog at x=1. Then I invert this series.
To be more exact:
I consider the Carleman matrix C of the power series development of e^(x+1)-1.
Then I compute what Andrew calls the Abel matrix,
i.e. the transpose of the suitably truncated C - I, and solve the equation A p = (1,0,0,...).
Then -1+p(x) is the islog of e^(x+1)-1 at 0,
p(x-1) is the islog of e^x at 1,
and p^{-1}(x)+1 is the isexp of e^x at 1.

Developing the slog at 1 instead of the usual 0, which implies the sexp developed at 0 instead of the usual -1, should allow a radius of convergence of 2 instead of 1 for the sexp. But it seems the radius of convergence is only about 1.4, i.e. roughly the imaginary part of the fixed point.
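The relevant distances can be checked with a few lines (Newton's method on e^z - z = 0; the starting guess is arbitrary but near the known primary fixed point):

```python
import cmath

# Newton iteration for the primary fixed point L of exp, i.e. e^L = L
z = 0.3 + 1.3j
for _ in range(50):
    z -= (cmath.exp(z) - z) / (cmath.exp(z) - 1.0)

print(z)             # approximately 0.3181 + 1.3372j
print(abs(z))        # distance from development point 0: about 1.374
print(abs(z - 1.0))  # distance from development point 1: about 1.501
```

So the distance from 0 to the fixed point is about 1.37, close to its imaginary part 1.34, which matches the apparent radius of roughly 1.4.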

edit: On the other hand, the slog developed at 1 is probably different from the slog developed at 0.
#5
(08/25/2009, 05:09 PM)bo198214 Wrote:
(08/25/2009, 04:24 PM)jaydfox Wrote: Bo, just to refresh my memory, how are you calculating the isexp_e?

I compute the power series of Andrew's slog at x=1. Then I invert this series.
To be more exact:
I consider the Carleman matrix C of the power series development of e^(x+1)-1.
Then I compute what Andrew calls the Abel matrix,
i.e. the transpose of the suitably truncated C - I, and solve the equation A p = (1,0,0,...).
Then -1+p(x) is the islog of e^(x+1)-1 at 0,
p(x-1) is the islog of e^x at 1,
and p^{-1}(x)+1 is the isexp of e^x at 1.

Developing the slog at 1 instead of the usual 0, which implies the sexp developed at 0 instead of the usual -1, should allow a radius of convergence of 2 instead of 1 for the sexp. But it seems the radius of convergence is only about 1.4, i.e. roughly the imaginary part of the fixed point.

edit: On the other hand, the slog developed at 1 is probably different from the slog developed at 0.
Actually, I wanted to confirm something, because it's been a while since I last tried my hand at this issue (recentering).

We have the Abel function, A(f(x)) = A(x)+1, correct? Here, f(x) is e^x. To solve, we write A(f(x))-A(x) = 1, and this in turn implies A*(C - I) = 1, where C is the Carleman matrix of f(x), I is the identity matrix, A is the (row) vector of power series coefficients of the Abel function, and 1 is the vector [1, 0, ...].

So to recenter, we have to substitute u(x) = x+1, for example. This implies that we solve A(f(u(x))) - A(u(x)) = 1, so A(Cu - Iu) = 1.

So Cu is the Carleman matrix of e^(x+1), and Iu will be a suitable triangular Pascal matrix, the Carleman matrix of u(x).
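Indeed, the Carleman matrix of u(x) = x+1 is a triangular Pascal matrix, since row m holds the binomial coefficients of (x+1)^m; a tiny sketch (the truncation size is arbitrary):

```python
from math import comb

N = 6  # small truncation, just to show the structure

# Carleman matrix of u(x) = x + 1: entry (m, j) is the coefficient
# of x^j in (x + 1)^m, i.e. the binomial coefficient C(m, j)
Iu = [[comb(m, j) for j in range(N)] for m in range(N)]

for row in Iu:
    print(row)  # each row is a row of Pascal's triangle, zero-padded
```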

This does not seem to match your description of the steps you took, so I wanted to be sure that I wasn't missing something, or that I hadn't misunderstood your steps. I fully acknowledge that I could have worked this out incorrectly.
~ Jay Daniel Fox
#6
Oh damn, I accidentally overwrote your post with my administrator powers.
My answer now appears in your post *sigh*, sorry about that.
Can you restore your old post? I'll write my answer again here:

(08/25/2009, 05:25 PM)jaydfox Wrote: And just to be sure, how are you calculating the C matrix for e^(x+1)-1?

Ehm, I just compute the power series, and then the Carleman matrix has as its row m the m-th power of the power series.
Or do you want to know the formula?

Quote:I don't mean to be pedantic, but if I recall correctly, when I had investigated this line before, I found that the radius of convergence could be adversely affected when the power series was developed at a point other than 0, and truncation of the computed power series became necessary.

The power series should have as its radius of convergence the distance from the development point to the fixed point (assuming the islog has no singularities other than the primary fixed point(s)).
#7
(08/25/2009, 05:43 PM)jaydfox Wrote: Actually, I wanted to confirm something, because it's been a while since I last tried my hand at this issue (recentering).

We have the Abel function, A(f(x)) = A(x)+1, correct? Here, f(x) is e^x. To solve, we write A(f(x))-A(x) = 1, and this in turn implies A*(C - I) = 1, where C is the Carleman matrix of f(x), I is the identity matrix, A is the (row) vector of power series coefficients of the Abel function, and 1 is the vector [1, 0, ...].
Agreed.

Quote:So to recenter, we have to substitute u(x) = x+1, for example. This implies that we solve A(f(u(x))) - A(u(x)) = 1, so A(Cu - Iu) = 1.

So Cu is the the Carleman matrix of e^(x+1), and Iu will be a suitable triangular Pascal matrix, the Carleman matrix of u(x).

My recentering takes place at the function level, not at the (truncated) matrix level.
We have a method to compute the Abel function of f that uses the power series expansion of f at 0.

Now we want to apply this method at a different development point x0.
Then I first move the development point to 0:
g(x)=f(x+x0)-x0
or, with \( \tau(x)=x+x_0 \), we can also write it as
\( g = \tau^{-1} \circ f \circ \tau \)
Now I can apply the method and find an Abel function \( \alpha \) of g.
It satisfies:
\( \alpha\circ g = s\circ \alpha \) with \( s(x)=x+1 \).
Therefore
\( \alpha\circ \tau^{-1} \circ \underbrace{\tau \circ g \circ \tau^{-1}}_{=f} = s\circ \alpha \circ\tau^{-1} \)

And so \( \alpha\circ \tau^{-1} \), i.e. \( x\mapsto \alpha(x-x_0) \), is an Abel function of \( f \).

This is completely independent of matrices, and one can in principle also do it with functions \( \tau \) other than translations.
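The shift \( g = \tau^{-1} \circ f \circ \tau \) can be checked symbolically; a small sketch (SymPy, with x0 = 1 as in this thread):

```python
import sympy as sp

x = sp.symbols('x')
x0 = 1  # development point to be moved to 0

f = sp.exp(x)                 # f(x) = e^x
g = f.subs(x, x + x0) - x0    # g = tau^{-1} o f o tau with tau(x) = x + x0

# power series of g = e^(x+1) - 1 about 0: (e-1) + e*x + (e/2)*x^2 + ...
print(sp.series(g, x, 0, 4))
```

The series of g at 0 then feeds the matrix method, entirely without recentering any truncated series.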
#8
(08/25/2009, 05:25 PM)jaydfox Wrote:
Quote:I don't mean to be pedantic, but if I recall correctly, when I had investigated this line before, I found that the radius of convergence could be adversely affected when the power series was developed at a point other than 0, and truncation of the computed power series became necessary.

The power series should have as its radius of convergence the distance from the development point to the fixed point (assuming the islog has no singularities other than the primary fixed point(s)).
Well, yes, the infinite power series will have such a radius. I'm speaking of what happens when we look at finite truncations.

Perhaps an easier example is to look at the power series for log(x+1). Truncate it to 100 terms and recenter the power series to x=0.25 (which would be equivalent to log(x+1.25)). The infinite power series would have a radius of 1.25.

However, out of curiosity, what does the root test indicate the radius would appear to be, if you look at terms 70-90?

Try again with 200 terms, recenter, and look at the radius apparently indicated by the first 80 terms. Now look at the radius indicated by terms 140 to 170. See a pattern?

Recentering a finite truncation of the infinite power series does not readjust the radius of convergence. However, if we truncate the (already truncated) series at a suitable spot, then we can get a correct view. Thus, if we solve a 100 term system at one point, and then recenter, we must truncate to a shorter series, perhaps 25-50 terms, depending on how far we moved the system. If we move it outside the original radius of convergence, we must do so in multiple steps, each remaining within the radius of convergence, and each time we must truncate the series appropriately.
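This experiment is easy to reproduce; a hedged sketch (the inspected term indices are illustrative):

```python
from math import comb
import numpy as np

n = 100
# truncated power series of log(1+x): a_0 = 0, a_k = (-1)^(k+1)/k
a = np.array([0.0] + [(-1.0) ** (k + 1) / k for k in range(1, n)])

# Taylor-shift the truncated polynomial to the new center x0 = 0.25:
# b_j = sum_{k>=j} a_k * C(k, j) * x0^(k-j)
x0 = 0.25
b = np.array([sum(a[k] * comb(k, j) * x0 ** (k - j) for k in range(j, n))
              for j in range(n)])

# root test |b_j|^(-1/j): the apparent radius of convergence term by term
radii = np.abs(b[1:]) ** (-1.0 / np.arange(1, n))
print(radii[19])   # early terms: slowly heading towards the true radius 1.25
print(radii[84])   # late terms: polluted by the truncation, apparent radius < 1
```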

This is an effect of the underlying polynomial truncations, so although your method is different, I would be very surprised if the same effect were not tainting your results. Try using a truncation of only the first half of the terms you calculated, and see if these issues persist.
~ Jay Daniel Fox
#9
Well, I am not completely sure what you mean by recentering. But is it still applicable given my previous explanations?
Anyway, here are the root tests \( |a_n|^{-1/n} \) (though only up to 100 terms, because creating the Carleman matrix at a non-zero point is unexpectedly time-consuming):

islog_e developed at 0 and 1, root test:
[images]
isexp_e developed at 0 and 1, root test:
[images]

The coefficients of the development at 1 are of course not the coefficients of the Abel function itself, but the coefficients of \( \alpha \), which is the Abel function of \( g(x)=e^{x+1}-1 \) from my previous post.

Edit: Ah, now I see what you mean by recentering. If you already have a truncated power series and want the power series development at a different point, then of course this new point must lie inside the radius of convergence of the series. But, Jay, here it is different: I do not recenter a (truncated) power series; I recenter a function and *then* compute its power series. I can do that at any point of the function without regard to convergence or radii of convergence.
#10
(08/25/2009, 06:47 PM)bo198214 Wrote: Well I am not completely sure what you mean with recentering. But is that still applicable with my previous explanations?
It doesn't seem so, yet this behavior is so strikingly similar to problems I experienced that it must somehow be related. See my next comments.

Quote:Anyway, here are the root tests (though only up to 100, because creating the Carleman matrix at a non-zero point is really time-consuming)
If you look at the first two graphs for islog_e at 0 and 1, you should be able to see the effect I was describing. For the series developed at 0, there would seem to be a radius of convergence indicated asymptotically, until at about the 85th term the terms would seem to become inaccurate. For the series developed at 1, we again see a radius indicated asymptotically, but then somewhere around the 60th term, the effect is ruined, much earlier. Out of curiosity, if you recentered it to x=1.5 (outside the original radius of convergence), does the system solve properly?

If you try reverting (compositionally inverting) the 60-term truncation of the series developed at x=1, you will likely see much better behavior of the isexp_e power series. Try it and let me know.

Update: Ach, I was looking at your graphs upside down! I always neglect the negation when using the root test, so I look at the reciprocal of the apparent radius of convergence. So where your graphs shoot up right at the end, mine would have shot down. So perhaps I'm misinterpreting your results.
~ Jay Daniel Fox

