Posts: 53
Threads: 13
Joined: Oct 2022
Let's take a function \(f\) with a fixed point \(\tau\). Let's also consider a real number \(x_0\) that belongs to the biggest monotonic interval of \(f\) containing \(\tau\), such that the infinite iteration of \(f\) starting at \(x_0\) converges to \(\tau\) (\(\lim_{n\to\infty} f^{n}(x_0)=\tau\)), and such that for all \(x\) in that interval, \(f(x)\) also belongs to that interval.
In this case, I just wonder whether the distance between the \(n\)th iterate of \(x_0\) under \(f\) and \(\tau\) keeps decreasing as \(n\) increases: \(\forall n\in\mathbb{N},\ |f^{n+1}(x_0)-\tau|<|f^{n}(x_0)-\tau|\).
Is this true for all functions and starting numbers with these restrictions, or do we need further restrictions to make it always true?
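A quick numerical sanity check of the conjectured inequality, using a sample function of my own choosing (\(f(x)=\sqrt{x}\), which is increasing on \((0,\infty)\) and has fixed point \(\tau=1\)):

```python
import math

def f(x):
    # Sample function (my choice): increasing, fixed point tau = 1, f'(1) = 1/2
    return math.sqrt(x)

tau, x = 1.0, 2.0   # x0 = 2 lies in a monotonic, f-invariant interval around tau
dists = []
for n in range(20):
    dists.append(abs(x - tau))
    x = f(x)

# The conjectured inequality: every iterate is strictly closer to tau
print(all(b < a for a, b in zip(dists, dists[1:])))
```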
Posts: 1,924
Threads: 415
Joined: Feb 2009
Yes, that is true if the function is continuous and monotone on that interval.
Notice that a violation would imply there is another fixed point in the interval, but that violates the conditions.
So it must be true if the function is continuous and monotone on that interval.
So basically: if \(f(x)\) is continuous and \(f'(x)\) is never 0 or infinite on that closed interval.
Notice that monotone implies the function has a unique inverse on that interval.
The inverse of a continuous and monotone function is continuous and monotone too, after all.
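The no-second-fixed-point argument can be illustrated numerically: on a grid between \(x_0\) and \(\tau\), \(f(x)-x\) never changes sign, so by the intermediate value theorem there is no other fixed point in between (the sample function \(f(x)=(x+1)/2\) with \(\tau=1\) is my own choice):

```python
def f(x):
    # Sample continuous, increasing function (my choice): fixed point tau = 1
    return (x + 1) / 2

tau, x0 = 1.0, 0.0
grid = [x0 + k * (tau - x0) / 1000 for k in range(1000)]  # points in [x0, tau)
signs = {f(x) - x > 0 for x in grid}
print(signs)  # a single sign means f has no other fixed point between x0 and tau
```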
regards
tommy1729
Posts: 53
Threads: 13
Joined: Oct 2022
(04/17/2023, 11:21 PM)tommy1729 Wrote: Yes, that is true if the function is continuous and monotone on that interval.
Thank you very much! I was almost sure of that, but I needed confirmation. I've almost finished putting all the restrictions on my fractional iterated function formula; then I'll prove it. I just have one last problem to solve.
Posts: 53
Threads: 13
Joined: Oct 2022
(04/18/2023, 07:07 AM)Shanghai46 Wrote: (04/17/2023, 11:21 PM)tommy1729 Wrote: Yes, that is true if the function is continuous and monotone on that interval.
Are there any theorems about this property, or any ways to prove it?
Posts: 1,214
Threads: 126
Joined: Dec 2010
Hey!
So what you are referring to is known in the complex plane as "the immediate basin". But you have stuck to real analysis, and in such cases the immediate basin is an interval.
Note that the only fixed points that have immediate basins are fixed points such that \(0 < f'(\tau) < 1\). As Tommy pointed out, if \(f'(\tau) = 0\), then this goes out the window. If \(f'(\tau) > 1\), then we know that if \(g(f(z)) = f(g(z)) = z\), then \(0 < g'(\tau) < 1\), which allows us to perform the same iteration tricks, but with the inverse.
These cases are pretty damn well studied, but there's always new stuff popping up. The final case is the neutral case, which itself can be split into two cases. If \(f'(\tau)^n = 1\) for some \(n \in \mathbb{N}\), then this is known as the parabolic case. If it doesn't satisfy this, it is an irrational rotation of the unit disk, and you'll have to study very advanced things like Siegel disks.
All of these ideas have their counterparts in real analysis, and in my opinion it is even more complicated there! There's something inherently natural about using complex numbers for iteration theory (e.g. the Schröder function/Abel function/Julia function, etc.). These things are, to put it lightly, fairly awkward in real analysis in comparison to complex analysis.
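The classification above can be sketched numerically (a toy sketch using a central-difference derivative; the sample functions and tolerances are my own choices, not anything from the thread):

```python
def classify_fixed_point(f, tau, h=1e-6, tol=1e-9):
    """Classify a real fixed point by the multiplier f'(tau),
    approximated with a central difference."""
    m = (f(tau + h) - f(tau - h)) / (2 * h)
    if abs(abs(m) - 1) < tol:
        return "neutral (parabolic if some power of the multiplier is 1)"
    if abs(m) < tol:
        return "superattracting (f'(tau) = 0)"
    if abs(m) < 1:
        return "attracting: iterate f directly"
    return "repelling: iterate the inverse g instead"

print(classify_fixed_point(lambda x: x / 2 + x**2, 0.0))  # attracting
print(classify_fixed_point(lambda x: 2 * x, 0.0))         # repelling
print(classify_fixed_point(lambda x: -x, 0.0))            # neutral
```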
Is there anything specific you are interested in knowing? I'd be happy, as I'm sure Tommy would be, to answer questions. Excited to see what your method is! There are so many iteration techniques that it's getting ridiculous in recent years!
Regards, James
Posts: 53
Threads: 13
Joined: Oct 2022
04/19/2023, 09:18 AM
(This post was last modified: 04/19/2023, 09:47 AM by Shanghai46.)
(04/19/2023, 12:45 AM)JmsNxn Wrote: Is there anything specific you are interested in knowing? I'd be happy, as I'm sure Tommy would be, to answer questions. Excited to see what your method is!
Regards, James
Thanks for your reply. I'm currently working on my iteration technique, especially one thing: the interval restrictions on the starting number \(x_0\). I actually already presented it in my other posts, if you're interested. I need to study my formula in order to pin these restrictions down, and this post was about one of the hypotheses on iterated functions that I took into account. I know that in math we need to work with proofs, even while exploring, but I mainly work with my intuition; most of the time I'm right, but sometimes I'm wrong. When I'm done I'll prove everything, which is partly done already, since I have pieces of proofs in my papers.
If you want an example of the hypotheses I make: for iterated functions that converge and have \(f'(\tau)<0\), the real iterations will give complex numbers. My hypothesis is that the argument of these complex numbers minus \(\tau\) (i.e. \(\arg(f^{s}(x_0)-\tau)\)) changes linearly as the iteration count increases. It equals \(0\) or \(\pi\) when the iteration count is an integer, and \(\pm\pi/2\) when the iteration count has a fractional part of one half. I haven't proved it, but from my equation, the examples I've tested, other iteration methods, and my intuition, I think it's true.
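For the model case \(f(x)=\lambda x\) with \(-1<\lambda<0\) and \(\tau=0\), the principal fractional iterate is \(f^{s}(x)=e^{s\log\lambda}x\), and the hypothesis can be checked directly: the argument grows linearly in \(s\) (equal to \(\pi s\) modulo \(2\pi\)), hitting \(0\) or \(\pi\) at integers and \(\pm\pi/2\) at half-integers. A sketch, where the linear model and the values of \(\lambda\) and \(x_0\) are my own illustrative choices:

```python
import cmath, math

lam, x0 = -0.5, 1.0           # -1 < lam < 0, so log(lam) = ln|lam| + i*pi
log_lam = cmath.log(lam)

def frac_iter(s, x):
    # Principal-branch fractional iterate of f(x) = lam * x
    return cmath.exp(s * log_lam) * x

for s in [0.0, 0.5, 1.0, 1.5, 2.0]:
    arg = cmath.phase(frac_iter(s, x0))
    print(s, arg)             # arg equals pi*s modulo 2*pi (principal value)
```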
But I'm kind of slow at studying my equation because I can't do it full time. Right now I'm 16; I'm on holiday, but I need to study for my exams. And it's going to be even worse next year (really freaking worse).
Posts: 53
Threads: 13
Joined: Oct 2022
(04/19/2023, 12:45 AM)JmsNxn Wrote: Is there anything specific you are interested in knowing? I'd be happy, as I'm sure Tommy would be, to answer questions.
As a question, I'd ask you whether my hypothesis about the arguments is true or false.
Posts: 1,214
Threads: 126
Joined: Dec 2010
(04/19/2023, 09:18 AM)Shanghai46 Wrote: If you want an example of the hypotheses I make: for iterated functions that converge and have \(f'(\tau)<0\), the real iterations will give complex numbers. My hypothesis is that the argument of these complex numbers minus \(\tau\) changes linearly as the iteration count increases.
GREAT OBSERVATION!
This is actually a very, very deep problem, and I'll start with something easy and try to explain what I think you are observing! It's one of the early tenets of Abel function theory.
Let's assume we have a function:
\[
f(z) : [-1,1] \to [-1,1]\\
\]
And let's assume that \(\tau = 0\); or that:
\[
f(0) = 0\\
\]
Let's additionally assume that:
\[
f'(0) < 0
\]
But I'm going to add an additional hypothesis (which actually isn't a hypothesis; you can show it):
\[
-1 < f'(0)\\
\]
Any fractional iteration of \(f\) on the interval \([-1,1]\) will be complex! That is definitely true! So you're right there. But there's something more advanced we can say:
\[
f_0^{\circ s}(x) = f'(0)^s x + O(x^2)\\
\]
This defines EVERY SINGLE POSSIBLE ANALYTIC ITERATION OF \(f\) near zero. BUT! There are many \(f'(0)^s\)'s to choose from. Much like \((-1)^s\): we can choose to write it as \(e^{\pi i s}\) or \(e^{-\pi i s}\) or \(e^{3 \pi i s}\), where the general formula is \((-1)^s = e^{j\pi i s}\) with \(j = 2k+1\) for \(k \in \mathbb{Z}\).
Which is basically just saying that, for \(|x| < \delta\), this just looks like a trip around the circle. So if I take the principal branch of \(f'(0)^s\), then this is just \(e^{s\log f'(0)}\), and we can safely assume that the imaginary part of \(\log f'(0)\) is \(\pi\) and its real part is negative (but we don't need to worry ourselves with this).
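The branch choices can be made concrete: each odd \(j\) gives a different analytic interpolation of \((-1)^s\), and all of them agree at integer \(s\) (a small sketch, with my own choice of which branches to display):

```python
import cmath, math

def branch(s, j):
    # Branch j (odd integer) of (-1)^s, namely exp(j * pi * i * s)
    return cmath.exp(1j * j * math.pi * s)

# Every odd j gives -1 at s = 1: all branches interpolate the same
# values at the integers ...
print([branch(1, j) for j in (1, 3, 5)])
# ... but the branches disagree at fractional s
print([branch(1 / 3, j) for j in (1, 3, 5)])
```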
But what happens when we compose these things? Wellllllll, LOOKY LOOO:
\[
f_0^{\circ s}(f_0^{\circ y}(x)) = f'(0)^{s+y} x + O(x^2)\\
\]
But, wait, isn't this just adding \(\pi i + \pi i\)?
And if I do it again:
\[
f_0^{\circ s}(f_0^{\circ y}(f_0^{\circ z}(x))) = f'(0)^{s+y + z} x + O(x^2)\\
\]
But wait..... isn't this just adding \(\pi i + \pi i + \pi i\)??
Which means that the imaginary part of the exponent has a LINEAR relationship with the amount of iteration!
This is very very deep though; and is typically found when discussing Abel functions. Think of it as "how fast the function goes around the unit circle."
This isn't exactly what you are noticing, but damn is it close. I'm sure there's some fancy algebra to equate these things. So the linear relationship you are noticing is SUPER warranted. I fully expect there is an iteration that behaves the way I believe you are describing.
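The composition rule in the first-order model can be checked directly: with the principal \(f'(0)^s = e^{s\log f'(0)}\), composing iterates adds exponents, so the argument accumulates \(\pi\) per unit of iteration (a sketch; the sample multiplier \(f'(0) = -0.5\) is my own choice):

```python
import cmath, math

lam = -0.5                               # plays the role of f'(0), with -1 < lam < 0
log_lam = cmath.log(lam)                 # equals ln|lam| + i*pi

def model_iter(s, x):
    # First-order model of the analytic iteration: f^s(x) ~ lam^s * x
    return cmath.exp(s * log_lam) * x

x = 0.01                                 # a point near the fixed point 0
s, y = 0.3, 1.2
lhs = model_iter(s, model_iter(y, x))    # compose two fractional iterates
rhs = model_iter(s + y, x)               # one iterate with the exponents added
print(abs(lhs - rhs))                    # agreement up to floating point
```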
Plus, super cool that you're sixteen. I think I joined this forum when I was sixteen! Super excited to see more work. Don't feel you need to rush things yet though. Remember to take your time and read a bunch.
Sincere regards, James
Posts: 53
Threads: 13
Joined: Oct 2022
04/21/2023, 07:09 AM
(This post was last modified: 04/21/2023, 08:44 AM by Shanghai46.)
(04/21/2023, 01:19 AM)JmsNxn Wrote: GREAT OBSERVATION!
This is actually a very, very deep problem, and I'll start with something easy and try to explain what I think you are observing! It's one of the early tenets of Abel function theory.
\[
f_0^{\circ s}(x) = f'(0)^s x + O(x^2)\\
\]
Thank you very much! I'll investigate that to try to fit \(\tau\) into the demonstration. If I understood correctly, the property you added is from Abel function theory, correct? I'll check it afterwards. Usually in math I'm not really good at proofs; I'm better at finding things with my intuition. And yeah, I know I usually go too fast and rush things. Sorry!
Regards, Pierre.
Posts: 53
Threads: 13
Joined: Oct 2022
04/21/2023, 09:07 PM
(This post was last modified: 04/21/2023, 09:07 PM by Shanghai46.)
(04/17/2023, 06:19 PM)Shanghai46 Wrote: Let's take a function \(f\) with a fixed point \(\tau\). Let's also consider a real number \(x_0\) that belongs to the biggest monotonic interval of \(f\) containing \(\tau\), such that the infinite iteration of \(f\) starting at \(x_0\) converges to \(\tau\), and such that for all \(x\) in that interval, \(f(x)\) also belongs to that interval.
In this case, I just wonder whether the distance between the \(n\)th iterate of \(x_0\) under \(f\) and \(\tau\) keeps decreasing as \(n\) increases: \(\forall n\in\mathbb{N},\ |f^{n+1}(x_0)-\tau|<|f^{n}(x_0)-\tau|\).
Is this true for all functions and starting numbers with these restrictions, or do we need further restrictions to make it always true?
UPDATE: if \(f'(\tau)<0\), it is false, since I managed to find a counterexample.
If \(f'(\tau)>0\), it is true; I managed to prove it!
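One way such a counterexample can arise (a hypothetical construction of my own, not necessarily the one found here): take a decreasing \(f\) with \(f'(\tau) = -1/2\) near \(\tau = 0\), but with a steeper stretch away from \(\tau\), so a single step can overshoot before the orbit settles into the contracting regime:

```python
def f(x):
    # Piecewise-linear, strictly decreasing f on [-1.2, 1.2] (my construction):
    # slope -1/2 near tau = 0 (so f'(0) = -1/2), but slope -2.5 on [0.5, 0.8].
    knots = [(-1.2, 0.46), (-1.0, 0.40), (-0.5, 0.25),
             (0.5, -0.25), (0.8, -1.0), (1.2, -1.1)]
    for (xa, ya), (xb, yb) in zip(knots, knots[1:]):
        if xa <= x <= xb:
            return ya + (yb - ya) * (x - xa) / (xb - xa)
    raise ValueError("x outside [-1.2, 1.2]")

x, dists = 0.8, []
for _ in range(25):
    dists.append(abs(x))
    x = f(x)

print(dists[:4])      # the distance to tau INCREASES at the first step ...
print(abs(x) < 1e-3)  # ... yet the orbit still converges to tau = 0
```

Here the orbit 0.8, -1.0, 0.4, -0.2, ... moves farther from \(\tau\) on its first step, even though \(f\) is continuous and monotone on the interval, the interval is \(f\)-invariant, and every orbit converges to \(\tau\).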
Regards
Shanghai46
