I've come up with a new idea attached to rational operators. It starts by defining a new set of operators that behave like addition. Tommy_r already noted them and talked about the distributive law; he even touched on what I'm getting at, though he didn't take it all the way:
0 <= q <= 1; q:ln(x) = exp^[-q](x), i.e. the q-th fractional iterate of ln
x {-q} y = q:ln(-q:ln(x) + -q:ln(y))
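As a sanity check on these definitions, here is a minimal numeric sketch in Python, assuming integer q (where exp^[-q] reduces to q repeated natural logs) and taking x {1-q} y = q:ln(-q:ln(x) * -q:ln(y)), a reading the lemmas below imply rather than state; the function names are mine:

```python
import math

def q_ln(q, x):
    """exp^[-q](x): for integer q >= 0, just q repeated natural logs."""
    for _ in range(q):
        x = math.log(x)
    return x

def q_exp(q, x):
    """Inverse of q_ln: q repeated exponentials."""
    for _ in range(q):
        x = math.exp(x)
    return x

def op_add(q, x, y):
    """x {-q} y = q:ln(-q:ln(x) + -q:ln(y))."""
    return q_ln(q, q_exp(q, x) + q_exp(q, y))

def op_mul(q, x, y):
    """x {1-q} y, assumed analogously as q:ln(-q:ln(x) * -q:ln(y))."""
    return q_ln(q, q_exp(q, x) * q_exp(q, y))

# q = 0 recovers ordinary + and *; q = 1 gives the lowered pair:
# x {-1} y = ln(e^x + e^y) (log-sum-exp) and x {0} y = x + y.
```

At q = 1 the "multiplication" operator {0} is literally ordinary addition, which is why op_mul(1, 2, 3) comes out as 5.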
I should prove some lemmas so we have something to work with:
q:ln(x + x) = q:ln(x) {-q} q:ln(x)
q:ln(2 * x) = q:ln(x) {1-q} q:ln(2)
q:ln(x) {-q} q:ln(x) = q:ln(x) {1-q} q:ln(2)
which by induction becomes
x {1-q} q:ln(n) = x {-q} x {-q} x .... n times
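These lemmas can be spot-checked numerically at q = 1, where {-1} is (a, b) -> ln(e^a + e^b) and {0} is plain addition playing the role of the lowered multiplication (a check, not a proof):

```python
import math

def lse(a, b):  # a {-1} b = 1:ln(e^a + e^b)
    return math.log(math.exp(a) + math.exp(b))

x = 3.0
# Lemma: 1:ln(x + x) = 1:ln(x) {-1} 1:ln(x)
assert abs(math.log(x + x) - lse(math.log(x), math.log(x))) < 1e-12
# Lemma: 1:ln(2 * x) = 1:ln(x) {0} 1:ln(2)   ({0} is + here)
assert abs(math.log(2 * x) - (math.log(x) + math.log(2))) < 1e-12
# Induction: z {0} 1:ln(n) = z {-1} z {-1} ... {-1} z   (n times)
z, n = 0.7, 5
acc = z
for _ in range(n - 1):
    acc = lse(acc, z)
assert abs((z + math.log(n)) - acc) < 1e-12
```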
This is why I chose to index these operations with negative values: they do not obey the fundamental law of recursion (repeating {-q} n times gives x {1-q} q:ln(n), not x {1-q} n). They do, however, behave like addition:
q:ln(x * (y + a)) = q:ln(xy + xa)
q:ln(x) {1-q} q:ln(y+a) = q:ln(xy) {-q} q:ln(xa)
q:ln(x) {1-q} (q:ln(y) {1-q} q:ln(a)) = (q:ln(x) {1-q} q:ln(y)) {-q} (q:ln(x) {1-q} q:ln(a))
which simplified is:
x {1-q} (y {-q} a) = (x {1-q} y) {-q} (x {1-q} a)
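A quick numeric check of this simplified distribution law at q = 1, under the same reading as before ({0} is +, {-1} is ln(e^a + e^b)):

```python
import math

def lse(a, b):  # a {-1} b
    return math.log(math.exp(a) + math.exp(b))

x, y, a = 0.4, 1.3, -0.8
lhs = x + lse(y, a)            # x {0} (y {-1} a)
rhs = lse(x + y, x + a)        # (x {0} y) {-1} (x {0} a)
assert abs(lhs - rhs) < 1e-12
```

The identity holds because x + ln(e^y + e^a) = ln(e^(x+y) + e^(x+a)), which is the ordinary distributive law conjugated through exp.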
Now we have a full ring of operators: exactly as there are addition (+), multiplication (*) and exponentiation (^), there are now {-q}, which behaves like addition, {1-q}, which behaves like multiplication, and {2-q}, which behaves like exponentiation.
Given this final lemma we can create a new calculus. If S(s) denotes the identity element of the operator {s}:
q:ln(x + 0) = q:ln(x) {-q} q:ln(0) = q:ln(x)
therefore S(-q) = q:ln(0); hence there is a pole at q = 1, since S(-1) = ln(0) = -infinity, and operator {-1} has no finite identity.
Now we must prove that S(-q) is to {1-q} as 0 is to multiplication (this holds even for q = 1: S(-1) is -infinity, and x + (-inf) = -inf, which parallels multiplication by zero equaling zero).
-q:ln(x {1-q} q:ln(0)) = -q:ln(x) * 0 = 0
therefore: x {1-q} q:ln(0) = q:ln(0)
and q:ln(0) = S(-q)
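At q = 1 this can be seen directly with floats, since S(-1) = 1:ln(0) = -infinity and Python's math handles exp(-inf) = 0 exactly (a small sketch under that reading):

```python
import math

def lse(a, b):  # a {-1} b = ln(e^a + e^b)
    return math.log(math.exp(a) + math.exp(b))

NEG_INF = float("-inf")  # S(-1) = 1:ln(0)
x = 2.5
# identity for {-1}: x {-1} S(-1) = x, since exp(-inf) = 0.0
assert abs(lse(x, NEG_INF) - x) < 1e-12
# absorbing for {0}, the lowered multiplication: x {0} S(-1) = S(-1)
assert x + NEG_INF == NEG_INF
```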
With this, we can now state the fundamental definition of logarithmic semi-operator calculus (writing }s{ for the inverse of {s}, the corresponding subtraction/division):
q:d/dx f(x) = lim h->S(-q) [f(x {-q} h) }-q{ f(x)] }1-q{ h
Note that 0:d/dx f(x) = d/dx f(x), and that for 1:d/dx the limit point S(-1) is a pole at negative infinity.
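The limit can be evaluated numerically at q = 1, where x {-1} h = ln(e^x + e^h), }-1{ is (a, b) -> ln(e^a - e^b), }1-q{ = }0{ is ordinary subtraction, and h tends to S(-1) = -infinity. The sketch below takes a large negative h in place of the limit; the closed form f(x) - x + ln(f'(x)) used as the reference is my own expansion of the limit, not from the post:

```python
import math

def qd1(f, x, h=-20.0):
    """Approximate 1:d/dx f at x: [f(x {-1} h) }-1{ f(x)] }0{ h."""
    xh = math.log(math.exp(x) + math.exp(h))            # x {-1} h
    diff = math.log(math.exp(f(xh)) - math.exp(f(x)))   # }-1{
    return diff - h                                     # }0{

# For smooth f, expanding the limit gives f(x) - x + ln(f'(x)).
# Check against f(x) = x^2:
x = 2.0
approx = qd1(lambda t: t * t, x)
exact = x * x - x + math.log(2 * x)
assert abs(approx - exact) < 1e-3
```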
Here are some laws; they turn out exactly parallel:
q:d/dx [f(x) {-q} g(x)] = q:d/dx f(x) {-q} q:d/dx g(x)
which is the law that differentiation distributes over addition.
q:d/dx [f(x) {1-q} g(x)] = [q:d/dx f(x) {1-q} g(x)] {-q} [f(x) {1-q} q:d/dx g(x)]
which is the product rule.
The chain rule also applies; it becomes:
q:d/dx f(g(x)) = q:d/dg(x) f(g(x)) {1-q} q:d/dx g(x)
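Both rules can be spot-checked at q = 1 against the limit definition, again taking a large negative h for S(-1) (function names are mine; this is a numeric check, not a proof):

```python
import math

def qd1(f, x, h=-20.0):
    # numeric 1:d/dx via the limit definition (large negative h for S(-1))
    xh = math.log(math.exp(x) + math.exp(h))
    return math.log(math.exp(f(xh)) - math.exp(f(x))) - h

def lse(a, b):  # a {-1} b
    return math.log(math.exp(a) + math.exp(b))

def f(t): return t * t
def g(t): return 3.0 * t

x = 1.5
# product rule: 1:d/dx [f {0} g] = [1:d f {0} g] {-1} [f {0} 1:d g]
assert abs(qd1(lambda t: f(t) + g(t), x)
           - lse(qd1(f, x) + g(x), f(x) + qd1(g, x))) < 1e-3
# chain rule: 1:d/dx f(g(x)) = [1:d f](g(x)) {0} 1:d g(x)
assert abs(qd1(lambda t: f(g(t)), x)
           - (qd1(f, g(x)) + qd1(g, x))) < 1e-3
```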
With this you can see that every operation you do with derivatives has a parallel operation with the operator power simply "lowered" by q.
Here comes the sweet part: we can now invent lower-operator polynomials, transform that knowledge into lower-operator Taylor series, and arrive at a definition of semi-operator analytic functions.
q:d/dx [x {2-q} n] = q:ln(n) {1-q} (x {2-q} (n-1))
This derivative follows from the normal power-rule proof, except at the end, when you collect n copies of x {2-q} (n-1), you must convert that count to q:ln(n) because {-q} is not properly recursive.
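This rule can also be checked at q = 1, where the lowered power is x {1} n = 1:ln((1:exp x)^n) = n*x (an assumed reading consistent with the earlier definitions):

```python
import math

def qd1(f, x, h=-20.0):
    # numeric 1:d/dx via the limit definition (large negative h for S(-1))
    xh = math.log(math.exp(x) + math.exp(h))
    return math.log(math.exp(f(xh)) - math.exp(f(x))) - h

n, x = 4, 1.2
lhs = qd1(lambda t: n * t, x)       # 1:d/dx [x {1} n], since x {1} n = n*x
rhs = math.log(n) + (n - 1) * x     # 1:ln(n) {0} (x {1} (n-1))
assert abs(lhs - rhs) < 1e-3
```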
Now we can develop a Taylor series. If
X(n=0, y, -q) An = A0 {-q} A1 {-q} ... {-q} Ay
and q:n! = q:ln(1) {1-q} q:ln(2) {1-q} ... {1-q} q:ln(n) = q:ln(n!)
q:0! = S(1-q)
q:1! = S(1-q)
f(x) = X(n=0, inf., -q) [S(1-q) }1-q{ q:n!] {1-q} q:d^n/dx^n f(a) {1-q} [(x }-q{ a) {2-q} n]
We can now create the lower operator sine function, the lower operator exponential function which will have the same relationship:
exp_q(x) = X(n=0, inf, -q) (S(1-q) }1-q{ q:n!) {1-q} (x {2-q} n)
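As a sanity check at q = 1, where the {-1}-indexed sum X collapses to ln of a sum of exponentials, 1:n! = ln(n!), and x {1} n = n*x, this series evaluates (by my own reduction) to ln(sum_n e^(nx)/n!) = ln(e^(e^x)) = e^x. A truncated numeric check:

```python
import math

def exp_1(x, terms=40):
    # X over {-1}: ln( sum_n exp(n*x - ln(n!)) ), using lgamma(n+1) = ln(n!)
    total = sum(math.exp(n * x - math.lgamma(n + 1)) for n in range(terms))
    return math.log(total)

x = 0.5
assert abs(exp_1(x) - math.exp(x)) < 1e-9
```

So at the q = 1 endpoint, the lowered exponential is just e^x itself, which also squares with q:d/dx exp_q(x) = exp_q(x) below.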
sin_q(x) = X(n=0, inf., -q) ((-S(1-q) {2-q} n) }1-q{ q:(2n+1)!) {1-q} (x {2-q} 2n+1)
cos_q(x) = X(n=0, inf, -q) ((-S(1-q) {2-q} n) }1-q{ q:(2n)!) {1-q} (x {2-q} 2n)
This may look messy at first, but it is just the normal sin/cos/exp Taylor series with every operator lowered by q and the q:ln taken of the factorial to account for the false recursion in operator {-q}.
if we create J(q) such that
J(q) {q} J(q) = -S(q)
then J(1) = i and J(0) = 0. In a perfect world J(0.5) would be 0.5 * i, but this is false.
J(q) = -S(q) {1+q} 1/2 = -q:ln(q:ln(-S(q))*1/2)
We can now generalize Euler's formula to
exp_q(x {1-q} J(1-q)) = cos_q(x) {-q} (J(1-q) {1-q} sin_q(x))
q:d/dx exp_q(x) = exp_q(x) holds as well.
I'm trying to extend all of analytic calculus into logarithmic semi-operator calculus. So far it's very difficult to take the "q-derivative" of any ordinary function, and because of a certain law the lowered exponential function, b {2-q} x, is not differentiable.
There, that's all of it. I hope some of you have opinions.
PS:
I also wanted to ask: does anyone think that choosing a different base for the operators might make the transition of 2 {x} 3 smoother than it is now?
It's a shame really, because every other transformation is very nice, like f(x) = 1 }1-q{ x as you vary q bit by bit.