Ivars Wrote:So iteration of a function is function repeatedly applied. I-times applied function is as strange as I times 2*2*2*...., but different.

Exactly. The point is, of course, that we have an immediate definition of applying something n times for a natural number n; everything else needs some rules to extrapolate the meaning of fractional, real, or complex iteration.

For example

2*I is 2 repeated I times in addition, whatever that should mean. But we have the commutativity of multiplication, which we surely want to keep for complex numbers too, and hence 2*I=I*2=I+I. That's an easy way to derive it.

2^I, what should that mean? For that we have the function \( e^x \), which can be given as a power series and therefore can also be applied to complex arguments \( z \). There it turns out that \( e^{I\phi}=\cos(\phi)+I\sin(\phi) \), and so we can derive

\( 2^I=e^{\ln(2)*I}=\cos(\ln(2))+I*\sin(\ln(2)) \).
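As a quick numerical sketch (Python, standard library only; the variable names are my own), we can check that the direct complex power agrees with the Euler-formula expression:

```python
import cmath
import math

# 2^I defined via the exponential: 2^I = e^(I*ln 2)
z = cmath.exp(1j * math.log(2))

# Euler's formula gives cos(ln 2) + I*sin(ln 2)
w = complex(math.cos(math.log(2)), math.sin(math.log(2)))

assert abs(z - w) < 1e-12

# Python's built-in complex power agrees as well
assert abs(2 ** 1j - z) < 1e-12
```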

Finally, what about \( f^{\circ I} \)? Perhaps we first ask about the already notationally occupied \( f^{\circ -1} \). And perhaps before that we start with the basics:

On functions one can define the composition operation \( \circ \). The function \( f\circ g \) is obtained by first applying \( g \) and then applying \( f \): \( (f\circ g)(x)=f(g(x)) \).

Correspondingly one defines \( f^{\circ n} \) to be \( f \) composed n times, e.g. \( f^{\circ 3}=f\circ f\circ f \), \( f^{\circ 3}(x)=f(f(f(x))) \).
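Natural-number iteration is immediate to write down in code; here is a small sketch (the helper `iterate` is my own name for it):

```python
def iterate(f, n):
    """Return the n-fold composition f o f o ... o f, for natural n."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x

# iterate(double, 3)(5) = double(double(double(5))) = 40
assert iterate(double, 3)(5) == 40
# the 0-th iterate is the identity
assert iterate(double, 0)(7) == 7
```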

Now we know what n-times iteration means for n a natural number. The next question is how to extend this to the integers, i.e. including negative numbers. To see this, we first note that

\( f^{\circ(n+m)}=f^{\circ n}\circ f^{\circ m} \), and surely we want to keep this law for other number domains too, hence:

\( \text{id}=f^{\circ 0}=f^{\circ(1 + -1)}=f^{\circ 1}\circ f^{\circ -1}=f\circ f^{\circ -1} \)

\( x=f(f^{\circ -1}(x)) \) or

\( x=f(y) \), where \( y=f^{\circ -1}(x) \).

And one immediately sees that \( f^{\circ -1} \) is the inverse function of \( f \) (provided, of course, that \( f \) is bijective).
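A tiny sketch of this identity with a bijective toy function (my own choice of example):

```python
def f(x):
    return 2 * x + 1

def f_inv(x):
    """The inverse function of f, i.e. the (-1)-st iterate f^{o -1}."""
    return (x - 1) / 2

x = 10.0
assert f(f_inv(x)) == x   # f o f^{o -1} = id
assert f_inv(f(x)) == x   # f^{o -1} o f = id
```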

And this is already deeply ingrained in mathematics when one writes \( f^{-1} \) (however, this can be mistaken for \( 1/f \), so we use the more unambiguous notation \( f^{\circ -1} \)).

Now the next question is what \( f^{\circ 1/n} \) is, and by keeping our previous law, \( f=f^{\circ 1}=f^{\circ(1/n+\dots+1/n)}=f^{\circ 1/n}\circ\dots\circ f^{\circ 1/n} \), we see that the n-times iteration of \( f^{\circ 1/n} \) must again be the function \( f \).

For example, \( f^{\circ 1/2}=g \) is a function such that \( g^{\circ 2}=f \). It turns out, though, that \( f^{\circ 1/n} \) is generally not uniquely determined by this demand.
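A concrete instance (the functions are my own illustrative choice): g(x) = 2x is a half-iterate of f(x) = 4x, and h(x) = -2x is a second, different solution, which shows the non-uniqueness:

```python
def f(x):
    return 4 * x

def g(x):
    """A candidate half-iterate of f: g o g = f."""
    return 2 * x

def h(x):
    """Another solution of h o h = f, demonstrating non-uniqueness."""
    return -2 * x

x = 3.0
assert g(g(x)) == f(x)   # g^{o 2} = f, so g qualifies as f^{o 1/2}
assert h(h(x)) == f(x)   # but so does h
```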

But under certain conditions at a fixed point of \( f \) there is a unique solution, called the regular (fractional) iteration. Such a condition is for example \( (f^{\circ 1/2})'(a)=(f'(a))^{1/2} \), or more generally for real iterations \( (f^{\circ t})'(a)=(f'(a))^{t} \). In words: the derivative of \( f^{\circ t} \) at the fixed point \( a \) is the \( t \)-th power of the derivative of \( f \) at \( a \).

This surely makes sense: if one looks at the iteration of the function \( f(x)=cx \), one gets \( f^{\circ n}(x)=c^n x \), where \( c \) is the derivative at the fixed point \( x=0 \), and one wants to keep this law for non-natural \( n \).
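This law can be sketched numerically (again with my own choice of c and x): the t-th regular iterate of f(x) = cx at the fixed point 0 is simply c**t * x, and the half-iterate applied twice recovers f.

```python
c = 3.0

def f_t(t, x):
    """Regular t-th iterate of f(x) = c*x at the fixed point 0."""
    return c ** t * x

x = 2.0
# integer iterates agree with plain repeated application
assert f_t(2, x) == c * (c * x)
# the half-iterate applied twice recovers one application of f
assert abs(f_t(0.5, f_t(0.5, x)) - c * x) < 1e-12
```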

Via this method one can determine what is meant by a fractional iteration of \( f \), i.e. \( f^{\circ m/n}=(f^{\circ 1/n})^{\circ m} \), where \( f^{\circ 1/n} \) is determined by the previous explanation and the fixed-point condition. And by continuity we can extend this to real iterations as well.

So what is meant by complex iterations? For this one uses another method, the so-called Abel function. An Abel function \( \phi \) for the function \( f \) is defined by \( \phi(f(x))=\phi(x)+1 \). The Abel function counts the iterations of \( f \):

\( \phi(f(f(x)))=\phi(f(x))+1=\phi(x)+2 \)

\( \phi(f(f(f(x))))=\phi(f(f(x)))+1=\phi(x)+2+1=\phi(x)+3 \)

\( \phi(f^{\circ n}(x))=\phi(x)+n \).

So if \( \phi \) is bijective we have

\( f^{\circ n}(x)=\phi^{\circ-1}(\phi(x)+n) \).

This Abel function is closely related to the logarithm or hyperlogarithms.

If we take the function \( f(x)=cx \), the logarithm is an Abel function for \( f \): \( \log_c(f(x))=\log_c(cx)=\log_c(x)+\log_c(c)=\log_c(x)+1 \).
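A numerical sketch of this (with c = 2; the helper names are my own): the base-2 logarithm satisfies the Abel equation for f(x) = 2x, and plugging it into \( \phi^{\circ -1}(\phi(x)+n) \) reproduces the iterates of f, including fractional ones.

```python
import math

c = 2.0

def f(x):
    return c * x

def phi(x):
    """Abel function of f: phi(f(x)) = phi(x) + 1."""
    return math.log(x, c)

def phi_inv(y):
    return c ** y

x = 5.0
# the Abel equation holds
assert abs(phi(f(x)) - (phi(x) + 1)) < 1e-12

def f_iter(n, x):
    """f^{o n}(x) = phi^{o -1}(phi(x) + n); n may be fractional."""
    return phi_inv(phi(x) + n)

assert abs(f_iter(3, x) - f(f(f(x)))) < 1e-9
# the half-iterate applied twice gives one application of f
assert abs(f_iter(0.5, f_iter(0.5, x)) - f(x)) < 1e-9
```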

Or if \( f(x)=b^x \), then a superlogarithm (as used by Andrew) is defined by \( \text{slog}_b(b^x)=\text{slog}_b(x)+1 \), \( \text{slog}_b(1)=0 \).

The equation \( f^{\circ n}(x)=\phi^{\circ-1}(\phi(x)+n) \) is of course easily applicable to complex \( n \): nothing needs to be changed, just keep the law for complex \( n \) too. E.g.

\( f^{\circ I}(z)=\phi^{\circ -1}(\phi(z)+I) \).

And in our case \( f(z)=b^z \), where an Abel function is \( \text{slog}_b \), we have

\( \exp_b^{\circ I}(z)=\text{slog}_b^{\circ -1}(\text{slog}_b(z)+I) \). The inverse of the Abel function occurring here is of course our \( \text{slog}_b^{\circ -1}(z)=b[4]z \), just as the inverse of \( \log_b \) is \( b[3]x \).

In the case of regular iteration, i.e. if \( f^{\circ t}(z)=\phi^{\circ -1}(\phi(z)+t) \) is the regular iteration of the function \( f \) at some fixed point, the Abel function is called a regular Abel function. Until now it is not clear whether the slog defined by Andrew is a regular Abel function at the lower fixed point of \( f(x)=b^x \) (if it exists).

Quote:a[4]i = ?

\( a[4]I=\exp_a^{\circ I}(1)=\text{slog}_a^{\circ -1}(\text{slog}_a(1)+I)=\text{slog}_a^{\circ -1}(I) \) (using \( \text{slog}_a(1)=0 \)), where slog and its inverse can be expanded into power series.
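For the toy function f(x) = 2x (not for tetration itself, whose slog is harder to compute), the complex iterate via the Abel function can be sketched end to end; regular iteration predicts \( f^{\circ I}(z)=2^I z=e^{I\ln 2}\,z \), and the Abel-function formula reproduces exactly that:

```python
import cmath
import math

c = 2.0

def phi(z):
    """Abel function of f(z) = c*z: the base-c logarithm."""
    return cmath.log(z) / math.log(c)

def phi_inv(w):
    return cmath.exp(w * math.log(c))

def f_iter(t, z):
    """f^{o t}(z) = phi^{o -1}(phi(z) + t); t may be complex."""
    return phi_inv(phi(z) + t)

z = 3.0
w = f_iter(1j, z)  # the I-th iterate of f at z
# regular iteration predicts f^{o I}(z) = c**I * z = e^(I ln c) * z
assert abs(w - cmath.exp(1j * math.log(c)) * z) < 1e-12
```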