the extent of generalization
#7
Matt D Wrote:
Gottfried Wrote:Matt,

do you mean (a^b)^((a^b)^(a^b)...) (c-times repetition)
or ((a^b)^b)^b... (c-times repetition)
or (a^b)^c ?

Gottfried

Gottfried,
I mean (a^(b^(b^...c "times"...))) with a,b,c complex, but specifically (i^(i^... i "times"...)).
Thanks for your interest,

Matt
Hi Matt -

It seems I can be of only partial help here (if at all).
The matrix method I employ implements the opposite view of things. Let me put it this way:
your definition asks for an operator which
* assumes a start value v0 = a
* applies an operation which makes it v1 = v0^b
* and then repeatedly applies v_{k+1} = (v_k)^b, and this (for the integer case of c) c times. Of course, the question of fractional or complex iteration then arises as well.
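For integer c, the process just described can be sketched like this (a minimal sketch; the function name repeat_exponent is my own):

```python
def repeat_exponent(a, b, c):
    # v0 = a, v_{k+1} = v_k ** b, applied c times (integer c).
    # Note that ((a**b)**b)**... collapses to the closed form a ** (b**c).
    v = a
    for _ in range(c):
        v = v ** b
    return v
```

For example, repeat_exponent(2, 2, 3) walks through 2 -> 4 -> 16 -> 256, which agrees with the closed form 2^(2^3) = 256.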

However, my method works differently. It
* assumes a start value w0 = b (in your example)
* applies an operation which makes it w1 = a^w0
* and then repeatedly applies w_{k+1} = a^w_k
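By contrast, for integer c my process builds a power tower with a fixed base (again a minimal sketch; the function name tower_iterate is my own):

```python
def tower_iterate(a, b, c):
    # w0 = b, w_{k+1} = a ** w_k, applied c times (integer c):
    # builds the tower a^(a^(...a^b...)) with c copies of the base a.
    w = b
    for _ in range(c):
        w = a ** w
    return w
```

For example, tower_iterate(2, 1, 3) walks through 1 -> 2 -> 4 -> 16, i.e. 2^(2^(2^1)) = 16, which already differs from the repeated-exponent process above.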

So we may say the difference is that
with your idea the same exponent is appended on top of a newly computed base,
while
with my idea the same base is appended below a newly computed exponent.

I don't know which matrix operator (in my favorite view of things) would implement your idea, but maybe we can find one.

So in this respect I cannot be of help with your general question.



But at least for the detail aspect of "what is a fractional iteration", or even "what is a complex iteration" (indicated by the parameter c), I can add some remarks.

Since I think in terms of a matrix operator Bs or Ba (where the index s or a indicates the parameter used), which implements the c'th iteration by its matrix power Bs^c, there is a "canonical" way to implement fractional or complex powers of Bs: just use the matrix logarithm, or compute the c'th power by eigensystem decomposition ("diagonalization").
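As a tiny illustration of that canonical route (a generic sketch for an arbitrary diagonalizable matrix M, not the tetration operator itself; the function name is my own):

```python
import numpy as np

def matrix_power_frac(M, c):
    # c'th (possibly fractional or complex) power of M by diagonalization:
    # M = W * diag(d) * W^-1  =>  M^c = W * diag(d^c) * W^-1
    d, W = np.linalg.eig(M)
    return W @ np.diag(d.astype(complex) ** c) @ np.linalg.inv(W)

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # an arbitrary example matrix
H = matrix_power_frac(M, 0.5)            # a matrix "half-power"
```

Composing the half-power with itself recovers M, which is the defining property such a fractional matrix power must satisfy.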

Your actual question, with all three parameters identical, seems to be one-to-one reproducible with my matrix operator, so I could compute my solution

V({i,i}^^i)~ = V(i)~ * (dV(log(i))*B)^i = V(i)~ * B_i^i    // see Andrew Robbins' definition pages for interpreting the {a,x}^^h notation

by the following eigensystem decomposition:

Let W and D be the components (where D is diagonal), such that
B_i = (dV(log(i))*B) = W^-1 * D * W
then
V({i,i}^^i)~ = V(i)~ * W^-1 * D^i * W
and
D^i = diag ( d0^i, d1^i, d2^i, .... )
where these entries can be computed by simple scalar complex exponentiation.
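For instance, a single entry d_k^i reduces to one scalar complex exponentiation via the principal-branch identity d^i = exp(i * log d); a quick check in Python (the value d below is just an arbitrary sample eigenvalue):

```python
import cmath

d = 0.5 + 0.25j                            # an arbitrary sample eigenvalue
via_log = cmath.exp(1j * cmath.log(d))     # d^i = exp(i * log d), principal branch
direct  = d ** 1j                          # Python's built-in complex power
```

Both routes agree, since Python's complex power also uses the principal branch of the logarithm.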

But I'm not sure whether the expected values of our different interpretations of tetration should be equal as well, since they simply define different "processes".

Well - to compute numerical approximations (for my method), one can simply apply any eigensystem solver to the matrix B_i, which is parametrized with the base variable a = i (see my overview articles on this in this forum).

In fact, I have computed some examples for a complex iteration parameter c; I have some crude/sketchy graphs and may upload them if this is of interest.

The problem is that the approximation quality of the naive numerical approach via an eigensystem solver (or matrix logarithm) is unknown outside some "safe" ranges of the parameters. One has, for instance, at least to show not only that the computed values converge with increasing matrix size, but also that the entries of the matrix itself stabilize as the dimension is increased.

The parameters of a problem {a,x}^^h (compare Andrew Robbins' notation definition here) then occur in the matrix equation
V(x)~ * B_a ^h = V(y)~

a -> in the construction of the matrix B_a
h -> in the exponents of the diagonal matrix D^h of eigenvalues
x -> in the entries of the vector V(x)~
the result {a,x}^^h -> in the second entry of V(y)~
and it can be computed by a power series in x, using the coefficients of the second column of B_a^h:
y = {a,x}^^h = sum_{k=0..inf} x^k * b_k
where the b_k are the entries of the 2nd column of B_a^h
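The whole recipe can be sketched in a few lines of float-precision Python (a minimal model under my own assumptions, not the actual high-precision Pari/GP setup: a real base a > 1, a modest truncation size n, and function names of my own choosing):

```python
import numpy as np
from math import factorial

def carleman(a, n):
    # Truncated n x n matrix B_a for x -> a^x: column k holds the
    # power-series coefficients of (a^x)^k = exp(k*log(a)*x),
    # i.e. B[r, k] = (k*log(a))^r / r!
    la = np.log(a)
    r = np.arange(n, dtype=float).reshape(-1, 1)
    k = np.arange(n, dtype=float).reshape(1, -1)
    fact = np.array([factorial(i) for i in range(n)]).reshape(-1, 1)
    return (k * la) ** r / fact

def frac_matrix_power(B, h):
    # B^h via eigendecomposition: B = W diag(d) W^-1  =>  B^h = W diag(d^h) W^-1
    d, W = np.linalg.eig(B)
    return W @ np.diag(d.astype(complex) ** h) @ np.linalg.inv(W)

def iterate_exp(a, x, h, n=16):
    # y = {a,x}^^h, read off as the power series sum_k x^k * b_k,
    # with b_k the entries of the second column of B_a^h
    b = frac_matrix_power(carleman(a, n), h)[:, 1]
    return sum(b[k] * x ** k for k in range(n))
```

With the well-behaved base a = sqrt(2), iterating the half-iterate twice from x = 1 reproduces a^1 = sqrt(2) to a few digits already at n = 16, which is the kind of stabilization check mentioned above.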

Again: this is computable, but to answer your question definitively (or at least with a reasonable approximation), one should first find a matrix operator which implements your idea as a model for general parameters.

Gottfried
Gottfried Helms, Kassel
