Attempting to compute the kslog numerically (i.e., Kneser's construction)
#1
Well, I was fairly certain that Kneser's solution would be equivalent to the intuitive slog, as there is a uniqueness condition which I've seen explained a couple of different ways, though it always boils down to the same thing. So I decided to put this to the test, at least within the limits of computational accuracy.

Due to the nature of the singularities in the regular slog at 0 and 1, the kslog becomes rather difficult to compute accurately.

However, after tweaking and experimenting and tweaking some more, I've found a setup that yields decent accuracy, to at least 20-25 bits over most of the unit interval.

In order to prevent my results from being biased by my preference for the intuitive slog, I initially parameterized the unit interval with a third-order polynomial:
(6 - 2*sqrt(6))*z + (6*sqrt(6) - 15)*z^2 + (10 - 4*sqrt(6))*z^3

This was done to ensure a continuous second derivative on the boundary of the region that I was trying to map to the unit disk (for the Riemann mapping phase). The singularity probably makes this step unnecessary, but I felt it better to be safe than sorry. I did not try to make the initial parameterization any more "accurate" than this, as doing so might introduce bias into the results (e.g., if I merely used the isexp itself as the parameterization).
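For the curious, here's a minimal sketch of that parameterization in plain Python (not my actual SAGE code; the function names are mine), checking the endpoint values and how the derivatives line up at the two ends:

```python
from math import sqrt

# The cubic boundary parameterization:
#   p(z) = (6 - 2*sqrt(6))*z + (6*sqrt(6) - 15)*z^2 + (10 - 4*sqrt(6))*z^3
def p(z):
    s6 = sqrt(6)
    return (6 - 2*s6)*z + (6*s6 - 15)*z**2 + (10 - 4*s6)*z**3

def dp(z):   # first derivative
    s6 = sqrt(6)
    return (6 - 2*s6) + 2*(6*s6 - 15)*z + 3*(10 - 4*s6)*z**2

def d2p(z):  # second derivative
    s6 = sqrt(6)
    return 2*(6*s6 - 15) + 6*(10 - 4*s6)*z

print(p(0.0), p(1.0))      # 0.0 and 1.0 (up to rounding): endpoints are fixed
print(dp(0.0), dp(1.0))    # equal first derivatives at both ends
print(d2p(0.0), d2p(1.0))  # second derivatives equal in magnitude, opposite in sign
```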

I calculated 2835 equally spaced points along the parameterized boundary (2835 = 27*105). This allowed me to solve systems with 105, 315, 945, and 2835 points and monitor the convergence. The factor of 105 also allows me to break the point set into groups of 6, 10, or 14 intervals, in order to compute the integrals more accurately (more on that later).

Incidentally, convergence is slow, yielding just under 3 bits of precision for each tripling of system size. This more or less corresponds to 2 bits per doubling, compared with about 6 bits per system doubling for my accelerated islog solution.

On the other hand, the kslog is extremely numerically stable. A 1200x1200 system for the accelerated islog requires about 1600 bits of precision just to "break even", while the kslog requires about 11-15 bits for the same size system.

Anyway, before I start getting into the gritty details (some of which I'll probably not get to until tomorrow), I wanted to post my comparison of the kslog and the islog.

The SAGE plotting library refuses to label the axes on this graph, so I drew light grey lines at intervals of 10^-7, with a darker line at 10^-6. As you can see, the error over most of the unit interval is about 10^-7, about 23 bits' worth, which matches how accurate the computed kslog should be (based on convergence analysis). So, roughly to within the accuracy of the calculations, the kslog and the islog are probably the same. The error is at most an order of magnitude larger than I expected, again based on convergence analysis, so I'm not ready to declare this issue settled just yet. I'm still trying to tune my matrix system, which is temperamental due to the singularities.

This graph is the difference between the 2835 kslog points I computed and the islog at the same points, setting kslog(0.5)=islog(0.5).

[image: difference between the computed kslog and the islog over the unit interval]

Note that due to the way the kslog is calculated, I cannot find a value for kslog(0) or kslog(1), so there's no obvious way to compare the kslog and islog directly, other than to try to minimize the RMS error over all 2835 points. The derivatives can be compared directly, however, and I'll be trying that next.
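Incidentally, matching the values at 0.5 versus minimizing the RMS error differ only in the choice of the free additive constant. A quick sketch of the RMS version (NumPy; the array names are mine, standing in for the 2835 sampled values):

```python
import numpy as np

def best_offset_rms(kslog_pts, islog_pts):
    """The additive constant c minimizing the RMS of (kslog - islog - c)
    is just the mean difference; returns (c, the resulting RMS error)."""
    diff = np.asarray(kslog_pts) - np.asarray(islog_pts)
    c = diff.mean()
    rms = np.sqrt(((diff - c) ** 2).mean())
    return c, rms
```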
~ Jay Daniel Fox
#2
Okay, here is the convergence analysis. The blue lines represent (from top to bottom) the error (in bits) of the 105-, 315-, 945-, and 2835-point systems, relative to the 2835-point, 10-interval interpolated system.

[image: convergence of the 105-, 315-, 945-, and 2835-point systems, in bits]

A note here: the solutions are found by solving an integral equation using a system of equations, which basically means solving a matrix equation. Just as extra accuracy for an integral can be achieved by using weights to simulate polynomial interpolation, I've achieved extra accuracy in the matrix system by weighting each point appropriately. Note that this must be tuned by hand. For example, Fejér quadrature would seem an obvious choice here, but the singularities at each end of the interval cause it to give horrible results, which I did not expect.
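To illustrate what I mean by weighting points to simulate polynomial interpolation: interpolatory quadrature weights on arbitrary nodes can be found by solving a small linear system for the monomial moments. This is a generic sketch of the technique, not my hand-tuned weights:

```python
import numpy as np

def interp_weights(nodes, a, b):
    """Quadrature weights w such that sum(w[i]*f(nodes[i])) is exact for
    all polynomials of degree < len(nodes) on [a, b]."""
    n = len(nodes)
    # Moment conditions: sum_i w_i * x_i^k = (b^(k+1) - a^(k+1)) / (k+1)
    V = np.vander(np.asarray(nodes, dtype=float), n, increasing=True).T
    m = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, m)

# Example: 11 equally spaced nodes spanning a 10-interval sub-region of [0, 1]
w = interp_weights(np.linspace(0.0, 1.0, 11), 0.0, 1.0)
print(w.sum())  # ~1.0, the length of the interval
```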

Anyway, to demonstrate the increased accuracy empirically, note the red lines. These correspond to the 105-, 315-, and 945-point systems, using 10-interval interpolation polynomials (11 points, with shared endpoints). Note that in all three cases, they are about 2-3 bits more accurate than the "rectangle" quadratures. Indeed, they are almost as accurate as a system three times as large!

Convergence is fairly linear, about 2.5 to 3 bits per system tripling. And the convergence near the singularities is even better, which is a good thing, as the accuracy is quite bad near the singularities (0 and 1, the endpoints of the interval).

My next test will be to use 14-interval interpolating polynomials. For a large enough system, this should improve accuracy, but I anticipate the 105-point system might actually get worse.
~ Jay Daniel Fox
#3
(09/24/2009, 12:17 AM)jaydfox Wrote: Anyway, to demonstrate the increased accuracy empirically, note the red lines. These correspond to the 105, 315, and 945 point systems, using 10-interval interpolation polynomials (11-points, with common endpoints).
Also, if you're wondering how I divide a 105-interval region into 10-interval sub-regions, note that the end regions are 7.5 intervals wide. So I would have an interpolating polynomial from 0 to 15/210, using the points 1/210, 3/210, 5/210, 7/210, 9/210, 11/210, 13/210, and 15/210, and likewise on the other end of the interval (mirrored, more or less).
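In code, the partitioning looks something like this (my own reconstruction of the bookkeeping, in plain Python):

```python
# 105 sample points at odd multiples of 1/210 (midpoints of the 105 intervals)
nodes = [k / 210 for k in range(1, 210, 2)]

# Sub-region boundaries: 7.5-interval regions at each end, nine 10-interval
# regions in between (all in units of 1/210)
bounds = [0] + list(range(15, 196, 20)) + [210]

for lo, hi in zip(bounds, bounds[1:]):
    pts = [x for x in nodes if lo / 210 <= x <= hi / 210]
    print(f"[{lo}/210, {hi}/210]: {len(pts)} interpolation points")
    # end regions get 8 points; interior regions get 11, sharing endpoints
```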

As I start posting details and code, hopefully this will begin to make more sense.
~ Jay Daniel Fox
#4
Okay, now for some details.

I've basically followed the steps outlined by Henryk in his summary of Kneser's paper:
Kneser's Super Logarithm

Before I start, I'm going to put up a legend of sorts. Kneser's paper makes use of a strange font for labelling the various images of curves and regions, so I thought I'd use his convention, to give a frame of reference to his paper. In the legend I show the scripted letter and the same letter in a plain font, for reference, as well as the relationship between the various images.

[image: legend relating Kneser's script letters to plain letters]

So, I start with the same regions Henryk showed in his post. I don't actually use the blue lines in my calculations; they are there merely to show graphically the relationship between the various graphs (important when we get to the region that I map to the unit disk).

[image: the initial regions and curves, as in Henryk's post]

I iteratively perform the natural logarithm and rescale to get the chi function. Note that my graph is rotated 180 degrees relative to Henryk's, but mine matches Kneser's, and I've triple-checked my results:

[image: the chi function]
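For reference, here is a minimal sketch of how such a chi function can be computed: iterate ln toward the upper fixed point and rescale by the multiplier at each step, in the Schröder/Koenigs spirit. This is illustrative only, not my production code; mpmath supplies the extra precision that iterated logs demand:

```python
from mpmath import mp, mpc, log, exp, findroot

mp.prec = 256  # iterated logs eat precision quickly

# The upper complex fixed point L of exp (L = e^L), near 0.318 + 1.337i
L = findroot(lambda z: exp(z) - z, mpc(0.3, 1.3))

def chi(z, n=50):
    """Iterate ln n times toward the fixed point, subtract L, and rescale
    by the accumulated multiplier (exp'(L) = e^L = L)."""
    w = mpc(z)
    for _ in range(n):
        w = log(w)
    return (w - L) * L**n
```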

First note that I've added C_-1, D''_-1, D'_2, and C_2. These allow me to close off all three regions: K_-1, K_0, and K_1. Note that I've continued the D curves further than Kneser (who was merely sketching, and didn't have access to SAGE or a similar product). I did this for two reasons. One, the curves have a fractal nature that is fascinating in its own right, quite apart from its relevance here. See my thread:
The fractal nature of iterated ln(x) [Bandwidth warning: lots of images!]

The second reason is that it helps us see that the D curves are not simple, and thus we cannot hope for a simple description of them. This extends likewise to the F curves (the images of the logarithm of the D curves). Kneser makes a detailed argument that each region L_0, L_1, etc., connects simply to its immediate neighbors and is disjoint from all other regions. This seems obvious at first, but is quite a bit less obvious when you see the fractal curves of F_0, F_1, etc., and the way that they seem to overlap (they don't).
~ Jay Daniel Fox
#5
Before moving on, I wanted to zoom out on the graph of the chi function, to really show the fractal behavior.

[images: successive zoom-outs of the chi star]

Note that while my fractal "star" is finite in size and complexity, the true chi "star" grows more elaborate as we zoom further out, extending to infinite size and complexity. Someday I might try to do better justice to this graph by taking it a few iterations further before performing several hundred iterations. As it is, the graphs here represent the detail from about 8 iterated logarithms, give or take, and they took over a day to assemble. I've already far exceeded the limits of machine precision and had to be a little clever to continue the graphs accurately.
~ Jay Daniel Fox
#6
The next step in Kneser's construction is to take the logarithm of the chi function, which gives us a new set of images of the various regions and curves:

[image: the logarithm of the chi function]

At this point, Kneser continues to describe the process, but does not provide further graphs. He is also somewhat vague on details, so I've decided to simply do it the way that seems obvious to me. Note that the L region is periodic, with period equal to c, where c is the (logarithm of the) fixed point.

The obvious choice of how to proceed is to make the periodicity equal to 2*pi*i, so that exponentiating gives a region that overlaps itself perfectly. Here is what the new logarithm looks like:

[image: the rescaled logarithm, with period 2*pi*i]

In order to make it easier to see the detail, I'm going to show a single region, corresponding to one full period:

[image: a single period of the rescaled region]

The final step, as should be obvious, is to exponentiate. This yields the following region:

[image: the exponentiated region]
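Schematically, these last two steps amount to a complex-linear rescaling followed by exp. A minimal sketch (logchi is a hypothetical evaluator for the logarithm of the chi function, and c its period, both assumed already computed):

```python
import cmath

def wrap(z, c, logchi):
    """Rescale the period c of log(chi) to 2*pi*i, then exponentiate,
    so the resulting region closes up on itself perfectly."""
    return cmath.exp((2j * cmath.pi / c) * logchi(z))
```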
~ Jay Daniel Fox
#7
Now to zoom out, to provide detail:

[images: successive zoom-outs of the exponentiated region]

It just keeps going. Sorry about the axis labels; I don't have a lot of control over them.
~ Jay Daniel Fox
#8
And now, of course, we map this region back to the unit disk. Details tomorrow!
~ Jay Daniel Fox
#9
(09/25/2009, 12:45 AM)jaydfox Wrote: And now, of course, we map this region back to the unit disk. Details tomorrow!

It's a pleasure to look at this quality work, particularly the extension of the chi star. But I am even more curious how you found the Riemann mapping!

PS: Yes, I wondered why my chi computation was rotated compared to Kneser's, but at the moment I don't have the time to check the issue.
#10
(09/25/2009, 07:56 AM)bo198214 Wrote:
(09/25/2009, 12:45 AM)jaydfox Wrote: And now, of course, we map this region back to the unit disk. Details tomorrow!

Its a pleasure to look at this quality work, partiularly the extension of the chi star. But I am even more curious how you find the Riemann-mapping!

Okay, now for the Riemann mapping itself. Here, I will admit that I was stumped for a while, but after doing a fair amount of research, I saw that there were a couple of existing computational approaches, relying on so-called "kernels". Various kernels of interest were the Cauchy kernel, the Bergman kernel, and the Szegő kernel. But implementation details were hard to find, until I stumbled across the following paper [1] after much googling:
http://eprints.utm.my/1753/

Note that this only gets us the mapping to the unit disk. After that, it's a simple matter of taking the logarithm and dividing by 2*pi*i to get back a unit interval. An appropriate constant is added as needed to place 0 and 1 at -1 and 0, respectively, and we now have the half kslog! Mirroring across the real line completes the puzzle.
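Schematically, with rmap a hypothetical stand-in for the computed disk mapping and k the constant just described:

```python
import cmath

def half_kslog(z, rmap, k):
    """Map to the unit disk via rmap, take the log, divide by 2*pi*i, and
    shift by the constant k that puts slog(0) = -1 and slog(1) = 0."""
    return cmath.log(rmap(z)) / (2j * cmath.pi) + k
```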

Armed with the referenced paper, I began experimenting with the first phase, which involves creating a parameterized closed curve around a point of interest and then solving an integration problem numerically, via matrix methods. I initially performed the test with functions I could compute exactly, in order to verify the results against a theoretically exact answer. The method is quite stable: indeed, in my tests, I can solve a 400x400 matrix with merely 53-bit (double) precision and retain 46 bits (measured against a solution using 1024 bits). That means I lose only 7 bits during the matrix solution!

This in turn means that a very large system could be solved with rather low precision math, which is good because convergence isn't terribly fast in the case of the kslog (because of the singularities). However, it should be good enough to confirm whether this construction gives (probably) the same results as the intuitive method or the Cauchy integral method. So far, the results are promising.

If you read the referenced paper (which I've also attached, as the website seems to be down at the moment), you will see that the matrix solution is the solution to a special type of integral equation. Because it's merely an integral, we can use polynomial interpolations to improve accuracy, and I've confirmed this numerically (empirical, not theoretical, evidence). Note that the singularities at the ends of the boundary make the "trapezoidal rule" used by the authors unnecessary. Abandoning the trapezoidal rule and treating the boundary as if it were not cyclic actually improved convergence (and, presumably, the accuracy). As I mentioned earlier, Fejér quadrature actually failed to converge, and produced horribly inaccurate results.
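For those who don't want to wade through the paper, the general shape of the computation is a Nyström discretization: an integral equation of the second kind becomes a dense linear system once you pick nodes and quadrature weights. Here is a generic sketch with a placeholder kernel; the actual kernel and right-hand side are the ones given in the referenced paper:

```python
import numpy as np

def nystrom_solve(K, g, nodes, weights):
    """Discretize u(t) - integral K(t,s) u(s) ds = g(t) at the given nodes,
    using the supplied quadrature weights, and solve the dense system.
    K and g must accept NumPy arrays (K broadcasting over a 2-D grid)."""
    t = np.asarray(nodes)
    w = np.asarray(weights)
    A = np.eye(len(t)) - K(t[:, None], t[None, :]) * w[None, :]
    return np.linalg.solve(A, g(t))
```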

[1] Murid, Ali H. M. and Mohamed, Nurul Akmal. "Computation of the Riemann Map using Integral Equation Method and Cauchy's Integral Formula." Simposium Sains Matematik, Pulai Springs, Skudai, 2004. (Unpublished)


Attached: AliHassanMohamedMurid2004_.pdf (95.99 KB)
~ Jay Daniel Fox