Iteration basics
#1
What is iteration of a function, in plain simple words? This word gives me a headache. Or perhaps give me a link, or a place in this thread. This name has been a barrier for me even though I have read it 1000 times, also in articles; I do not get it, something basic is missing... What is iteration of a function, please?

Thank you in advance!

Ivars
#2
Common difficulty. Sometimes the brain blocks...

Assume
Code:
f(x) = x^2+2

Code:
a) f(x)^2 = (x^2+2)^2           // taking f to the 2nd power
          =  x^4 + 4x^2 +4

b) f°2(x) = f(f(x))             // iteration of f
          = (f(x))^2  + 2
          =  (x^4 + 4x^2 +4) + 2
          = x^4+4x^2+6

c)  e^f(x) = ...                // exponentiation of f
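A quick numeric sanity check of the difference between a) and b) (a minimal Python sketch, illustration only; the function and the test value are just examples):

Code:
# Minimal sketch distinguishing the power f(x)^2 from the iterate f(f(x))
# for f(x) = x^2 + 2; the test value x = 1.5 is arbitrary.
import math

def f(x):
    return x**2 + 2

x = 1.5
print(f(x)**2, x**4 + 4*x**2 + 4)    # a) power: both 18.0625
print(f(f(x)), x**4 + 4*x**2 + 6)    # b) iterate f°2: both 20.0625
print(math.exp(f(x)))                # c) exponentiation e^f(x): ~70.1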

Assume
Code:
       f(x) = e^x
            = 1 + x/1! + x^2/2! + ...

Code:
a)       f(x)^2 = (1 + x/1! + x^2/2! +...)^2   // taking f to the second power
                =  1 + 2*x/1! + 4*x^2/2! + 8*x^3/3! + 16*x^4/4! +...
                = f(2x)

b)       f°2(x) = f(f(x))                      // iteration of f
                = f(e^x)
                = 1 + e^x/1! + e^(2x)/2! + ...

c)       e^f(x) = ...                          // exponentiation of f
                     // is in this case equal to second iteration
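And a similar check for the e^x example (again only a Python sketch; the series is truncated after 25 terms, which is plenty here):

Code:
# Numeric check of the e^x example: f(x)^2 = f(2x) and f°2(x) = e^(e^x).
import math

def f(x):
    return math.exp(x)

x = 0.7
print(f(x)**2, f(2*x))                                   # a) both ~4.055
series = sum(math.exp(k*x) / math.factorial(k) for k in range(25))
print(f(f(x)), series)                                   # b) both ~7.491
print(math.exp(f(x)))                                    # c) same as b)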

(You had this right the previous times!)

Gottfried
Gottfried Helms, Kassel
#3
So easy!

So iteration of a function is a function repeatedly applied. An I-times applied function is as strange as I times 2*2*2*..., but different. An i-times applied multiplication is the multiplication operation applied i times = 2^i. 2+2+2+... i times is either 2 times i or i times 2. Hmm.

There must be a wealth of possible outcomes.

i times applied tetration gives... what? An imaginary height of a power tower is also strange.

a[4]i = ?

(i^(1/i))[4]infinity = i, so

a[4]i = a[4]((i^(1/i))[4]infinity)

I have no idea what the rules are inside those brackets. We can nest i's like this forever.

Thanks, Gottfried.

Ivars
#4
Ivars Wrote:So iteration of a function is a function repeatedly applied. An I-times applied function is as strange as I times 2*2*2*..., but different.

Exactly. The thing is of course that we have an immediate definition of applying something n times, for n a natural number; however, everything else needs some rules to extrapolate the meaning of fractional iteration, real iteration or complex iteration.

For example,
2*I is 2 repeated I times in addition, whatever that should mean; but we have the commutativity of multiplication, which we surely want to keep also for complex numbers, and hence 2*I=I*2=I+I. That's an easy way to derive it.

2^I, what should that mean? For that we have the function \( e^x \), which can be given as a power series and can therefore also be applied to complex arguments \( z \). And there it turns out that \( e^{I*\phi}=\cos(\phi)+I*\sin(\phi) \), and so we can derive
\( 2^I=e^{\ln(2)*I}=\cos(\ln(2))+I*\sin(\ln(2)) \).
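A quick numerical check of that identity (Python sketch, illustration only):

Code:
# Verify 2^I = cos(ln 2) + I*sin(ln 2) numerically.
import cmath, math

print(2 ** 1j)                                           # ~(0.7692+0.6390j)
print(complex(math.cos(math.log(2)), math.sin(math.log(2))))
print(cmath.exp(1j * math.log(2)))                       # same via e^(I*ln 2)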

Finally, what about \( f^{\circ I} \)? Perhaps we first ask about the already notationally occupied \( f^{\circ -1} \). And perhaps before that we start with the basics:

On functions one can define the composition operation \( \circ \). The function \( f\circ g \) is the function gained by first applying the function \( g \) and then applying the function \( f \): \( (f\circ g)(x)=f(g(x)) \).
Correspondingly one defines \( f^{\circ n} \) to be \( f \) composed n times, e.g. \( f^{\circ 3}=f\circ f\circ f \), \( f^{\circ 3}(x)=f(f(f(x))) \).
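In code, n-fold composition for a natural number n is just a loop (a minimal Python sketch of this definition; the example function is arbitrary):

Code:
# f°n for natural n: apply f repeatedly.
def iterate(f, n):
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

f = lambda x: x**2 + 2
print(iterate(f, 3)(1.0), f(f(f(1.0))))   # both 123.0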

Now we have what n-times iteration means, n being a natural number. The next question would be how to extend this to the integers, i.e. including negative numbers. To see this we first notice that
\( f^{\circ(n+m)}=f^{\circ n}\circ f^{\circ m} \) and surely we want to keep this law also for other number domains, hence:

\( \text{id}=f^{\circ 0}=f^{\circ(1 + -1)}=f^{\circ 1}\circ f^{\circ -1}=f\circ f^{\circ -1} \)
\( x=f(f^{\circ -1}(x)) \) or
\( x=f(y) \), where \( y=f^{\circ -1}(x) \).
And one immediately sees that \( f^{\circ -1} \) is the inverse function of \( f \) (only if \( f \) is bijective, of course).
And this is already deeply ingrained in mathematics when one writes \( f^{-1} \) (however, this is mistakable for \( 1/f \), so we use the more unambiguous notation \( f^{\circ -1} \)).

Now the next question is what \( f^{\circ 1/n} \) is, and there we can see, by keeping our previous law \( f=f^{\circ 1}=f^{\circ(1/n+\dots+1/n)}=f^{\circ 1/n}\circ \dots\circ f^{\circ 1/n} \), that the n-times iteration of \( f^{\circ 1/n} \) must again be the function \( f \).
For example, \( f^{\circ 1/2}=g \) is a function such that \( g^{\circ 2}=f \). Though it turns out that \( f^{\circ 1/n} \) is generally not uniquely determined by this demand.

But under certain conditions at a fixed point of \( f \) there is a unique solution, called regular (fractional) iteration. Such a condition is for example that \( (f^{\circ 1/2})'(a)=(f'(a))^{1/2} \), or more generally for real iterations \( (f^{\circ t})'(a)=(f'(a))^{t} \). Or in words: the derivative of \( f^{\circ t} \) at the fixed point \( a \) is the \( t \)-th power of the derivative of \( f \) at \( a \).

This surely makes sense: if one looks at the iteration of the function \( f(x)=cx \), one gets \( f^{\circ n}(x)=c^n x \), where \( c \) is the derivative at the fixed point \( x=0 \). And one wants to keep this law for non-natural \( n \).

Via this method you can determine what is meant by a fractional iteration of \( f \), i.e. \( f^{\circ m/n}=(f^{\circ 1/n})^{\circ m} \), where \( f^{\circ 1/n} \) is determined by the previous explanation and the fixed point condition. And by continuity we can extend this also to real iterations.
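To get a rough feel for regular iteration in code, here is a sketch that keeps only the first-order term at the fixed point, i.e. f°t(x) ≈ a + f'(a)^t (x - a); this is only an approximation (the full regular iteration needs the complete power series), and the example function is arbitrary:

Code:
# First-order sketch of regular iteration at a fixed point a:
#   f°t(x) ≈ a + f'(a)^t * (x - a)
def f(x):
    return 0.5 * x + 0.1 * x**2      # fixed point a = 0, f'(0) = 0.5

def f_t(x, t, a=0.0, deriv=0.5):
    return a + deriv**t * (x - a)

x = 0.05                             # stay close to the fixed point
half = f_t(x, 0.5)                   # approximate half-iterate f°(1/2)
print(f_t(half, 0.5), f(x))          # ~0.02500 vs 0.02525: close near a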

So what is now meant by complex iterations? For this one uses another method, the so-called Abel function. An Abel function \( \phi \) for the function \( f \) is defined by \( \phi(f(x))=\phi(x)+1 \). The Abel function counts the iterations of \( f \):
\( \phi(f(f(x)))=\phi(f(x))+1=\phi(x)+2 \)
\( \phi(f(f(f(x))))=\phi(f(f(x)))+1=\phi(x)+2+1=\phi(x)+3 \)
\( \phi(f^{\circ n}(x))=\phi(x)+n \).

So if \( \phi \) is bijective we have
\( f^{\circ n}(x)=\phi^{\circ-1}(\phi(x)+n) \).

This Abel function is closely related to the logarithm or hyperlogarithms.
If we take the function \( f(x)=cx \), the logarithm is an Abel function for \( f \): \( \log_c(f(x))=\log_c(cx)=\log_c(x)+\log_c(c)=\log_c(x)+1 \).

Or if \( f(x)=b^x \), then a superlogarithm (as used by Andrew) is defined by \( \text{slog}_b(b^x)=\text{slog}_b(x)+1 \), \( \text{slog}_b(1)=0 \).

The equation \( f^{\circ n}(x)=\phi^{\circ-1}(\phi(x)+n) \) is of course easily applicable to complex \( n \); nothing needs to be changed, just keep the law for complex \( n \) too. E.g.
\( f^{\circ I}(z)=\phi^{\circ -1}(\phi(z)+I) \).
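For the simple case f(x) = cx with the Abel function log_c from above, this formula can be checked directly, including a complex iteration count (Python sketch, illustration only; this is of course not the tetration slog itself):

Code:
# Abel function phi = log_c for f(x) = c*x:  phi(f(x)) = phi(x) + 1,
# so f°t(x) = phi^{-1}(phi(x) + t) = c^t * x, also for complex t.
import cmath

c = 2.0
f       = lambda x: c * x
phi     = lambda x: cmath.log(x) / cmath.log(c)      # log_c(x)
phi_inv = lambda y: c ** y

x = 3.0
print(phi(f(x)), phi(x) + 1)            # Abel equation: both ~2.585
print(phi_inv(phi(x) + 2), f(f(x)))     # f°2(x): both ~12
print(phi_inv(phi(x) + 1j), x * c**1j)  # complex iteration f°I(x)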

And in our case \( f(z)=b^z \), where an Abel function is \( \text{slog}_b \), we have

\( \exp_b^{\circ I}(z)=\text{slog}_b^{\circ -1}(\text{slog}_b(z)+I) \). The occurring inverse of the Abel function is of course our \( \text{slog}_b^{\circ -1}(z)=b[4]z \), just as the inverse of \( \log_b \) is \( b[3]x \).

In the case of regular iteration, i.e. if \( f^{\circ t}(z)=\phi^{\circ -1}(\phi(z)+t) \) is the regular iteration of the function \( f \) at some fixed point, the Abel function would be called a regular Abel function. Until now it is not clear whether the slog defined by Andrew is a regular Abel function at the lower fixed point of \( f(x)=b^x \) (if it exists).

Quote:a[4]i = ?

\( a[4]I=\exp_a^{\circ I}(1)=\text{slog}_a^{\circ -1}(I) \), where slog and its inverse can be expanded into power series.
#5
Thanks. This is great. I will return to this page a few times while rereading many posts. And maybe even have a question!

Actually, a very interesting thing, this Abel function.

Ivars
#6
Ivars Wrote:Thanks. This is great. I will return to this page a few times while rereading many posts. And maybe even have a question!

Ivars

In this thread it may be appropriate to recall the introduction to iteration in continuous iteration and possibly also in operators. I've made some updates to the first one, but it surely needs some extensions, especially with respect to the problems of different fixpoints et al. and non-(easily) invertible functions.

Gottfried
Gottfried Helms, Kassel
#7
So I have a question now:

Is it possible to integrate/differentiate iterated functions over the iteration parameter (e.g. t in f°t(x)) if that is a continuous real or complex number in some interval?

Ivars
#8
Ivars Wrote:So I have a question now:

Is it possible to integrate/differentiate iterated functions over the iteration parameter (e.g. t in f°t(x)) if that is a continuous real or complex number in some interval?

Ivars
According to my derivations in Continuous Iteration, if the base is b=e^(1/e) we have a power series in h (where h is the height parameter), which may then be differentiated/integrated by the usual termwise differentiation/integration. If base b differs from this, we have, due to the eigensystem analysis, a series where h is in the exponent, thus a modified Dirichlet series (I'd say). This can also be differentiated/integrated termwise, but it then contains terms like a*u^h, and the derivative of such a term is a*log(u)*u^h.
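A small numeric check of that last statement (Python sketch, illustration only; the values of a, u and h are arbitrary):

Code:
# d/dh (a * u^h) = a * log(u) * u^h, checked with a central difference.
import math

a, u, h, eps = 2.0, 3.0, 1.5, 1e-6
term = lambda hh: a * u**hh

numeric  = (term(h + eps) - term(h - eps)) / (2 * eps)
analytic = a * math.log(u) * u**h
print(numeric, analytic)                 # both ~11.42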
Gottfried
Gottfried Helms, Kassel
#9
So first we have to prove that the function f°t(x) can be expanded into a power series in h (or t) in the vicinity of the point x (e.g. x=e^(1/e)), and that would define it as differentiable in t, but only for that base (x) or some small region around it?

Ivars
#10
Well, these are really good questions. If you are familiar with Taylor series, then you will know that you can represent any analytic function as a power series:

\(
f(x) = \sum_{k=0}^{\infty} \frac{x^k D_x^k f (0)}{k!}
= f(0)
+ x f'(0)
+ \frac{x^2}{2} f''(0)
+ \frac{x^3}{6} f'''(0)
+ \cdots
\)

This is the first iterate of the function. When a function is iterated, its output becomes its next input, so \( f^3(x) = f(f(f(x))) \) and in general we write \( f^n(x) \) when n is an integer, or \( f^t(x) \) when t is non-integer. Finding the derivative of \( f^t(x) \) is one of the goals of natural iteration.

Continuous iteration can be classified into two major methods, because there are two primary ways to turn \( f^t(x) \) into a 1-variable power series, since there are two variables: x and t. One could also construct a 2-variable power series, but that's complicated, so I won't do that now.

Here is the power series that corresponds to regular iteration:
\(
f^t(x)
\ =\ \sum_{k=0}^{\infty} x^k G_k(t)
\ =\ f^t(0)
\ +\ x \left[D_x f^t (x)\right]_{x=0}
\ +\ \frac{x^2}{2} \left[D_x^2 f^t (x)\right]_{x=0}
\ +\ \cdots
\)

And here is the power series that corresponds to natural iteration:
\(
f^t(x)
\ =\ \sum_{k=0}^{\infty} t^k H_k(x)
\ =\ f^0(x)
\ +\ t \left[D_t f^t (x)\right]_{t=0}
\ +\ \frac{t^2}{2} \left[D_t^2 f^t (x)\right]_{t=0}
\ +\ \cdots
\)

What's weird about these two methods is that what they are doing is not really finding derivatives but finding the coefficients in the power series; but because the coefficients in the power series are related to the derivatives, you can find the derivatives with these methods. For example, if you wanted to find the second derivative of \( f^t(x) \) with respect to t, then you could apply natural iteration to find \( H_2(x) \), then solve for the derivative to obtain \( \left[D_t^2 f^t (x)\right]_{t=0} = 2 H_2(x) \).
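For the simple linear case f(x) = cx, where f^t(x) = c^t x = x e^{t ln c}, the natural-iteration coefficients are H_k(x) = x (ln c)^k / k!, and the relation above can be checked numerically (Python sketch, illustration only):

Code:
# Natural iteration of f(x) = c*x: f^t(x) = c^t * x, so H_2(x) = x*(ln c)^2 / 2.
# Check [D_t^2 f^t(x)]_{t=0} = 2*H_2(x) with a finite difference.
import math

c, x, eps = 2.0, 3.0, 1e-4
f_t = lambda t: c**t * x

second_deriv = (f_t(eps) - 2 * f_t(0.0) + f_t(-eps)) / eps**2
H2 = x * math.log(c)**2 / 2
print(second_deriv, 2 * H2)              # both ~1.441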

So if you are interested in derivatives with respect to x, then you should search this forum for "regular", and if you are interested in derivatives with respect to t, then you should search this forum for "natural", and see what you find in your search.

Andrew Robbins

