Dissecting Andrew's slog solution
#11
By the way, if you're wondering about the periodicity in the graph (which resembles the periodicity of the root test graphs), the explanation is simple. There are errors in both the magnitude and phase (complex rotation) of the terms, and these errors diminish with increasing matrix size.

The errors in magnitude should be fairly stable, but the errors in phase are magnified when the real part approaches 0. The explanation is easier to understand if we look strictly at the cosine function (which gives us the real part).

\( \cos(1.5 + \epsilon) \) has a greater absolute error than \( \cos(\epsilon) \). If you then divide by the expected result, the errors are magnified further, because \( 1/\cos(1.5 + \epsilon) \) is about 14, so the relative error is roughly 14 times the absolute error.
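To make the magnification concrete, here is a small sketch (my illustration, not from the original post) of how a fixed phase error \( \epsilon \) propagates through the cosine near 0 versus near 1.5, where the cosine is close to zero:

```python
import math

eps = 1e-6  # stand-in for a small phase error in a term

# absolute error induced by the perturbation at each point
err_near_zero = abs(math.cos(0.0 + eps) - math.cos(0.0))   # ~ eps**2 / 2, tiny
err_near_1_5  = abs(math.cos(1.5 + eps) - math.cos(1.5))   # ~ eps * sin(1.5)

# dividing by the expected value magnifies it again, since 1/cos(1.5) ~ 14
rel_near_1_5 = err_near_1_5 / abs(math.cos(1.5))

print(err_near_zero, err_near_1_5, rel_near_1_5)
```

Running this shows the relative error near 1.5 is about \( 14\epsilon \), while near 0 the same perturbation barely registers.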
~ Jay Daniel Fox
#12
jaydfox Wrote:If we call the fixed point x, then here's a look at the coefficients a_k of Andrew's slog, divided by the real part of x^(k+1), multiplied by k (to effect the derivative), and multiplied by abs(x^2).
What fixed point?

Quote:With a few exceptions in the first handful of terms, the values seem to be converging on 1.0579. As it turns out, \( x^{-1.057939991157 i} \) is equal to \( x^{1.057939991157\,\left|\frac{x^{-i}}{x}\right|} \). In other words, if you start at a point very near the fixed point, then 4.44695 real iterations and -1.05794 imaginary iterations will get you to the same point.
What?
#13
bo198214 Wrote:
jaydfox Wrote:If we call the fixed point x, then here's a look at the coefficients a_k of Andrew's slog, divided by the real part of x^(k+1), multiplied by k (to effect the derivative), and multiplied by abs(x^2).
What fixed point?
The fixed point of the principal branch of ln(x), approximately 0.318131505+1.337235701i
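For anyone who wants to reproduce this value: the fixed point is attracting under ln (its multiplier \( 1/x \) has modulus about 0.73), so simply iterating the principal-branch logarithm converges to it. A quick sketch (my code, not part of the original post):

```python
import cmath

z = 1j  # a seed in the upper half plane; the iteration converges from there
for _ in range(200):
    z = cmath.log(z)  # principal branch

# z is now the fixed point of ln: log(z) == z
print(z)  # approximately 0.318131505 + 1.337235701j
```

Each step shrinks the distance to the fixed point by a factor of roughly 0.73, so 200 iterations is far more than enough for machine precision.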

Quote:
Quote:With a few exceptions in the first handful of terms, the values seem to be converging on 1.0579. As it turns out, \( x^{-1.057939991157 i} \) is equal to \( x^{1.057939991157\,\left|\frac{x^{-i}}{x}\right|} \). In other words, if you start at a point very near the fixed point, then 4.44695 real iterations and -1.05794 imaginary iterations will get you to the same point.
What?
\( {\large \left(0.318131505+1.337235701 i\right)}^{-1.05793999115694 i}\ \approx\ {\large \left(0.318131505+1.337235701 i\right)}^{4.44695072006701} \)

4.44695072006701 = 1.05793999115694*(1.337235701/0.318131505)

Call the fixed point \( \chi \approx 0.318131505+1.337235701 i \). Then \( \chi^x \) and \( \chi^{-iy} \) are two complex spirals, which intersect when y is a multiple of 1.05793999115694.
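These numbers are easy to check directly; here is a minimal sketch (the constants are just the values quoted above):

```python
import cmath

chi = 0.318131505 + 1.337235701j   # fixed point of ln: log(chi) == chi
y = 1.05793999115694
r = 4.44695072006701               # = y * Im(chi) / Re(chi)

z_imag = chi ** (-1j * y)          # -y imaginary iterations
z_real = chi ** r                  # r real iterations

print(z_imag)
print(z_real)                      # the two agree: the spirals intersect here
```

Because \( \log\chi = \chi \), both powers reduce to exponentials of multiples of \( \chi \), and the stated \( y \) is exactly where the phases line up modulo \( 2\pi \).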
~ Jay Daniel Fox
#14
I'm still working on some graphs, but to give a sneak preview, here's a graph of \( \exp^{\circ z}(0) \), focusing mainly on complex iterations with real part between -4.5 and 1.5, imaginary part between 0i and 1.25i. The blue region is the rectangle with upper left corner -0.5+1.25i, lower right corner 0.5+0i. Each color is an integer real iterate of this "critical" interval. (A true critical interval would only go up about 1.0579i, but I don't have enough terms in Andrew's slog to try to get that fancy yet, and besides, I wanted to show some overlap.)

[attached image: graph of complex iterates of \( \exp^{\circ z}(0) \), as described above]

The dark lines represent integer real and integer imaginary iterations. The medium lines are quarter-real and quarter-imaginary iterates. The faint lines are 1/20th iterations (real and imaginary). The main reason for iterating 0 rather than 1 is because the radius of convergence of Andrew's slog limits me to this range. (The radius of convergence is limited by the singularity at the fixed point.)



As you can hopefully get a feel for, using iterated logarithms alone is a hopeless approach, as I've already demonstrated in the fractal nature of iterated ln(x) discussion. You cannot continuously iterate between two successive iterates, despite the fact that the integer iterates of any particular real number will behave as if they were part of an exponential complex spiral.

However, from this graph, we can see why the key to Andrew's solution is complex iterations. Start from any real number, preferably a number between 0 and 1. The slope of Andrew's slog determines how "fast" we move towards larger real numbers. If you go out at a right angle at that same "speed", you start heading towards the fixed point.

By the time you get to the ith iteration, you have a very nice, smooth curve. Notice the spiky nature of the region around \( \exp^{\circ -4}(0) \), versus the very smooth nature of the region around \( \exp^{\circ -4+i}(0) \). On the graph, the pointy tip of the red region is \( \exp^{\circ -4}(0) \), while the dark red "cross" is where \( \exp^{\circ -4+i}(0) \) is located.

I therefore hypothesize that if you take a large number of imaginary iterates, you'll get very close to the fixed point, and more importantly, using exponentiation will become increasingly valid, i.e., after using negative imaginary iterates to climb back out from the fixed point, you'll be closer to the real line.
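Near the fixed point, this behavior can be sketched with the standard hyperbolic (linearized) iteration \( \exp^{\circ t}(z) \approx \chi + \chi^t (z - \chi) \), since the multiplier of exp at \( \chi \) is \( e^{\chi} = \chi \). This is my illustration of the hypothesis, not Jay's code; it shows that each +i iterate contracts toward the fixed point:

```python
import cmath

chi = 0.318131505 + 1.337235701j   # fixed point of exp/ln
lam = chi                          # multiplier: d/dz exp(z) at chi is exp(chi) = chi

def iterate_linearized(t, z):
    """Hyperbolic iteration of exp near the fixed point; only valid locally."""
    return chi + lam**t * (z - chi)

z = chi + 0.1                       # start near the fixed point
for _ in range(5):
    z = iterate_linearized(1j, z)   # one imaginary iterate at a time
    print(abs(z - chi))             # shrinks by |chi**1j| ~ 0.26 each step
```

Since \( |\chi^{i}| = e^{-\operatorname{Im}\chi} \approx 0.26 < 1 \), positive imaginary iterates spiral in toward the fixed point, and negative imaginary iterates climb back out, consistent with the picture above.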

The only problem is, how do we define imaginary iterates? Well, with Andrew's slog-based solution, the imaginary iterates seem to be well-behaved. The "correct" solution will in fact be very well-behaved. A solution that is off by some cyclic amount will have complex iterates that are not quite "parallel" (not the right word, but...), so that they will tend to bunch up as we approach the fixed point. I plan to test this with my solution, and show the difference, so that you can get a feel for what I mean by this.
~ Jay Daniel Fox
#15
jaydfox Wrote:The main reason for iterating 0 rather than 1 is because the radius of convergence of Andrew's slog limits me to this range. (The radius of convergence is limited by the singularity at the fixed point.)

Oh this is interesting. It is known that a regular iteration at a hyperbolic fixed point has no singularity at this fixed point. But the regular iterations are complex valued (not real for real argument, forget what I wrote about it elsewhere) and hence not equal to any real solution.
So any real solution must have singularities at the non-real fixed points (which are always hyperbolic).
#16
bo198214 Wrote:
jaydfox Wrote:Random speculation, but what if the coefficients look like the real part of a logarithmic spiral because one of the complex solutions based on the continuous iteration from the fixed point has a complex spiral for its coefficients? By dropping the imaginary parts, we recover a solution that yields real results...
So you think that Andrew's solution is the real part of the regular fractional iteration at a complex fixed point?!
What are then the real parts of the other solutions at a fixed point?
And is this then Kneser's solution at all (he started by regularly iterating at a certain fixed point and then by some transformation I did not fully understand yet he came up with a real solution)?
Questions over questions ...

See my post here:
http://math.eretrandre.org/tetrationforu...php?tid=59

Essentially, I think that Andrew's solution may be a sum of complex-valued functions which come in conjugate pairs, thus cancelling all the imaginary parts if we create a power series at a real point.

I've already demonstrated that there is a singularity in Andrew's slog at the primary fixed point (and, logically, at its conjugate). I've already numerically confirmed that the logarithms at the fixed points give an excellent approximation of all the terms in Andrew's slog beyond about the 10th or 20th, so the singularities appear in fact to be exactly due to these logarithms. This both confirms the approach of hyperbolic iteration from a complex fixed point, and confirms that Andrew's slog is quite probably "the" solution to tetration, at least for base e.
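One easy consistency check on the conjugate-pair picture (my sketch, not Jay's computation): a pair of logarithmic singularities placed at the fixed point and at its conjugate produces a power series with purely real coefficients, exactly the kind of real-valued tail described above.

```python
import cmath

chi = 0.318131505 + 1.337235701j  # primary fixed point

# k-th Taylor coefficient (about 0) of log(1 - x/chi) is -1/(k * chi**k);
# adding the conjugate singularity cancels every imaginary part
for k in range(1, 21):
    a_k = -1 / (k * chi**k) - 1 / (k * chi.conjugate()**k)
    print(k, a_k.real)            # imaginary part is zero up to roundoff
```

So dropping imaginary parts is automatic once the singularities come in conjugate pairs, which is why a real power series at a real point is compatible with complex fixed-point singularities.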

There is a residue to Andrew's slog, if you subtract out the logarithms at the fixed points. I'm currently studying the nature of this residue, hoping to find the non-fixed-point singularities.
~ Jay Daniel Fox

