That a degree-n polynomial CAN always be factored into n factors involving complex roots has been shown true, but there are things the phrase does not explain.
Teachers will not usually explain alternatives.
Say (Z-x)(Z-x)=0 CAN have the solution Z=x, but it can also have Z=x+ey, where e with ee=0 is a
nilpotent element.
Just as i can be made real in 2 by 2 matrices, so can e. Just as we use i or -i interchangeably,
without knowing which, we can use e or e(T), the transpose of e. One can define i=e-e(T),
using ee=0, e(T)e(T)=0 and e e(T)+e(T)e=I to show ii=-1. Now substitute this everywhere you see i, and the factorization carries through the same.
Defining ww=1, one can also take w=e+e(T).
Reading the fundamental theorem of algebra attentively, we find the funny phrase that it is not derivable from algebra... well, no wonder.
I don't really have a good name for these elements: i plus whatever else there is. The supra-real? "Hyperreal" is already taken by the nonstandard elements. Nor do they seem to have been studied much beyond the quadratic case, though they do exist...
Does this mean a gap in knowledge? Or stubborn historical usage?
1. If K has a field algebraic structure, then K does not contain nonzero nilpotent elements.
Proof. Suppose that we have a nilpotent element e (as you denoted it) in K, so that ee=0. Denote by 1 the unit element of K. K is a field, therefore we have e^(-1) in K such that e e^(-1) = e^(-1) e = 1. From ee=0 we obtain successively
e^(-1)ee=0, 1e=0, e=0.
2. Fundamental theorem of algebra: A degree-n polynomial WITH COMPLEX COEFFICIENTS can always be factored into n factors, each of them corresponding to a complex root of the polynomial.
Actually, the above statement is not the fundamental theorem of algebra; it is just a consequence of it!
The correct statement is:
A degree-n polynomial WITH COMPLEX COEFFICIENTS admits exactly n complex roots, not necessarily distinct.
Therefore we consider only polynomials with complex coefficients in this statement, and since the set C of complex numbers is a field, it does not contain nilpotent elements.
The set M2,2 of all 2 by 2 real matrices is not a field; M2,2 has only a ring algebraic structure, because we can easily find non-invertible matrices in M2,2. Also, M2,2 contains nilpotent elements.
Post Scriptum: When you talk about mathematics, it is good to study the notions and results intensively and completely before issuing opinions!
Dinu
You are correct. An expression x+ey would form a ring, not a field, since it contains the nonzero non-invertible element ey, whereas x+iy gives a field.
However, such a conception draws on the idea that rings, compared to fields, are somehow useless, an idea I disagree with. It is an implicit assumption in many calculus books.
For example, x+ey is invertible whenever x is not zero, i.e. on almost all of the (x,y)
plane. In many cases most ring elements are invertible.
You are also correct about complex coefficients.
Yes a matrix is singular as soon as the determinant is zero.
That does not invalidate cases of matrix calculus.
So you criticize aspects I have not mentioned.
Dear Juan,
It is untrue that mathematics seems to consider rings useless.
Yes, in functional analysis we consider vector spaces endowed with some topological structure, and such spaces are vector spaces over a field (R or C; rarely other fields), so we are not dealing with rings. Also, calculus doesn't need rings.
But the notion of a ring is very important in algebra. In the last 50 years, different branches have been studied, such as non-commutative rings, normed rings, ...
We also have the theory of modules (a module over a ring extends the notion of a vector space)!
@Juan_Weisz asks an interesting question, which deserves a paper to fully answer, since it suggests an expansion of the Complexes, C. It is another way of looking at the possibilities offered by the infinitesimal. It also reminds me of when I introduced to C, an epsilon operator, ep so that (ep)×(ep) = ep, and realized it defied the Fundamental Theorem in its flip about y = x. I later learned that Charles Muses considered such, not as operators, but as hypernumbers. Such numbers expand math possibilities but at the expense of violating some property we might hold dear, as moving from C to H, the Quaternions, cost/lost us commutativity. We are now dealing in an expanded algebraic system.
In exploring Muses, I appreciated that each new hypernumber expands our consciousness in some sense, just as "i" says: "i'm possible, not impossible!". In so doing, "little i" made the complexes algebraically complete/closed, beyond the reals.
Your findings to date intrigue those of us who look beyond the "accepted" into other possibilities. As you said, @Juan_Weisz, this "means a gap in knowledge". I look forward to a paper on this topic to fill this gap.
For those interested, see "Functions over commutative ring systems"
on my RG page (a write-up summarizing a lot of this material).
The nilpotent element actually generates an ideal in the ring algebra.
One notion that turns up is the existence of special sets of matrices, all elements of which commute among themselves; the numbers may be treated as eigenvalues of such matrices, e.g. circulant matrices.
Identities emerging from the form of such matrices guide generalized
Cauchy-Riemann relations, via the form of a Jacobian matrix.
I would add my 2 cents to this discussion.
As far as I understand, the spirit of the initial question is: when extending the real numbers, why do we reach the complex numbers by adding the number i with the property i*i = -1? Why can we not do the same with a number e with the property e*e = 0?
The (partial) answer is already given: because only the number i extends the reals to the field of complex numbers, while adding e to the reals does not give a field. Well, the next question is: why do we need a field as an extension of the reals? And the only answer I have found so far is: because the real numbers form a field, and when extending this field it is a good idea to reach at least a field.
This is true. But depending on the motivation and the objectives, adding a nilpotent number e can create a very competitive algebraic structure. We can ask ourselves: if 0 has no inverse element, why should it be the only one with this property? We can then construct dual numbers (https://en.wikipedia.org/wiki/Dual_number) by adding a nilpotent element, or split-complex numbers (https://en.wikipedia.org/wiki/Split-complex_number) by adding a number j with the property j*j = 1 but j =/= 1.
I know at least one very interesting application of such numbers in geometry: they are used to construct models of non-Euclidean geometry. You may find interesting the book "A Simple Non-Euclidean Geometry and Its Physical Basis" (1979) by Isaak Yaglom.
Alexandru
Thanks for your contribution.
The only difference I can see between a ring and a field is the absence of
non-invertible nonzero elements in a field.
However, in rings you can find large zones which also lack these non-invertibles (in the dual or hyperbolic cases). There are only straight lines in the plane of non-invertibles, e.g. x=0 in the dual system.
So as far as these zones go, there is no difference.
There are cases in relativity which can use hyperbolic numbers (tt=1). Probably we are talking about the same thing.
John
What you describe seems to be either a projection operator or an idempotent element: TT=T.
Try T to be the matrix
0 0
1 1
and a ring of the form M=x I +y T=
x 0
y x+y
Verify form preservation under addition and multiplication, and commutation
under multiplication of two different samples of the same form.
Cannot vouch for the uniqueness of this representation.
The eigenvalues, from xx=x or from the matrix, are 0 and 1.
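The properties claimed above are easy to check numerically. A small sketch (the helper name M is my own, not from the thread), assuming the matrix T given above:

```python
# Check of the proposed idempotent ring M = x*I + y*T with T = [[0,0],[1,1]].
import numpy as np

I = np.eye(2)
T = np.array([[0.0, 0.0],
              [1.0, 1.0]])

# T is idempotent: TT = T
assert np.allclose(T @ T, T)

def M(x, y):
    """Element x*I + y*T = [[x, 0], [y, x+y]] of the proposed ring."""
    return x * I + y * T

A, B = M(2.0, 3.0), M(-1.0, 0.5)

# Closure: the product is again of the form x'*I + y'*T ...
P = A @ B
assert np.allclose(P, M(P[0, 0], P[1, 0]))

# ... and two elements of this form commute.
assert np.allclose(A @ B, B @ A)

# Eigenvalues of T itself are 0 and 1, as stated.
assert np.allclose(sorted(np.linalg.eigvals(T)), [0.0, 1.0])
```

So the set {x I + y T} is closed and commutative under multiplication, as claimed.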
Juan, re: TT = T. Yes, my ep operator, ep(x, y) = (y, x), was idempotent under composition: ep o ep = ep. But when treated as a number, ep x ep = 1, as in the split-complexes. My point was not about ep, but about the resulting system when you add Z to C, where ZxZ = 0. The new algebraic system < C u {Z} > is no longer a field, and we don't have an alternative Fundamental Theorem in C, but a new theorem in the new system. That's all I am trying to say.
John
I now agree that just adding things like e to a complex system is awkward at best.
That's why I propose a pure e-like algebra (the dual ring algebra), in at least limited cases.
Note the interesting case of f(x+he) = f(x) + he Df,
which pretty much defines Df as obtained in a truncated Taylor expansion
(h -> he), or can be verified by algebra for all the most common functions.
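This truncated-Taylor rule is exactly what makes dual numbers compute derivatives by pure algebra. A minimal Python sketch (the Dual class and deriv helper are my own naming, not from the thread):

```python
# Dual numbers a + b*e with e*e = 0: multiplying and truncating after the
# e term reproduces f(x + h e) = f(x) + h e Df(x) with no limits involved.
class Dual:
    """Number a + b*e with e*e = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1 e)(a2 + b2 e) = a1 a2 + (a1 b2 + b1 a2) e, since ee = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def deriv(f, x):
    """Df(x) read off from f(x + 1*e) = f(x) + Df(x) e."""
    return f(Dual(x, 1.0)).b

# xx has derivative 2x, xxx has derivative 3xx, purely by algebra:
assert deriv(lambda z: z * z, 3.0) == 6.0
assert deriv(lambda z: z * z * z, 2.0) == 12.0
```

This is the same idea behind forward-mode automatic differentiation.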
I think you can still build C+eC similarly to R+eR, but there would be non-commutativity between e and i (since e e(T)+e(T) e = I). However, I see no reason complex analysis could not be carried out this way.
I'm more concerned about things like commutativity than about a possible difference between fields and rings with invertible elements.
My vision of the origin of such numbers in relation to polynomial equations is still
a clouded affair (except for a number of clear examples).
Juan,
Sorry, but if complex analysis worked the way you say, replacing the number i with something else that has another property, then it would no longer be called complex analysis!
The Cauchy-Riemann relations can be obtained only by using the fact that i^2=-1, meaning that if we consider something of the form R+eR where e^2 is not -1, the Cauchy-Riemann relations are no longer valid.
In complex analysis we use C=R+iR as an independent set of numbers with special properties, and the field structure of C is not of great importance.
But algebraically, the field structure of C is very important, and also important is the fact that R is a subfield of C (C is an extension of R).
Can C also be extended (considering rings of the form C+eC, with e outside C!!)? Mathematically yes, but why?
Dinu
The replacement I suggested is the correspondence i=e-e(T).
The matrix for i is
0 -1
1 0
So a complex number goes as
x -y
y x
Now e can be represented by
0 0
1 0
Then the transpose is
0 1
0 0
So e-e(T)=i
in matrix terms. You also find that e and e(T) do not commute:
e e(T)+e(T) e =I
ee=0
e(T) e(T)=0
That is the equivalence I suggest.
Understand now?
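For anyone who wants to check this equivalence numerically, here is a small numpy sketch (my own, using exactly the matrices given above):

```python
# With e = [[0,0],[1,0]] and its transpose eT, verify ee = 0, e(T)e(T) = 0,
# e e(T) + e(T) e = I, i = e - e(T) with ii = -1, and w = e + e(T) with ww = 1.
import numpy as np

e = np.array([[0.0, 0.0],
              [1.0, 0.0]])
eT = e.T
I = np.eye(2)

assert np.allclose(e @ e, 0)              # ee = 0
assert np.allclose(eT @ eT, 0)            # e(T)e(T) = 0
assert np.allclose(e @ eT + eT @ e, I)    # e e(T) + e(T) e = I
assert not np.allclose(e @ eT, eT @ e)    # e and e(T) do not commute

i = e - eT                                 # i = [[0,-1],[1,0]]
assert np.allclose(i @ i, -I)              # ii = -1

w = e + eT                                 # w = [[0,1],[1,0]]
assert np.allclose(w @ w, I)               # ww = 1
```

The last line also confirms the earlier remark that w = e + e(T) satisfies ww = 1.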
Yes, I understand! But why do we need such a construction? To model what?
The construction using i is natural, because we consider i as one of the roots of the equation x^2+1=0. And this extension offers the possibility of a complete theory of polynomial equations.
Why would I want C+eC?
For the same reason you get the derivative Df(x)
using R+eR:
f(x+he) = f(x) + he Df(x);
instead of x, use a complex argument f(x+iy), as in complex analysis.
Try it out with algebra, on a diversity of functions.
A Jacobian matrix of partial derivatives of (f1,f2), the components of f(Z), with respect to (x,y), Z=x+iy, patterned in the same form as
x -y
y x
gives the Cauchy-Riemann relations by reason of structural conditions alone.
I have found this to be a cross-system property. Indeed it depends on ii=-1;
you get different CR relations for different systems.
In such rings as you propose to work in, we have zero divisors: nonzero a,b such that ab=0.
You talk about the notion of the differential of a function. The notion is based on a limit phenomenon (convergence, in fact).
Working as you propose, we can get into the following situation: having a sequence xn converging to z, with z a zero divisor and all xn not zero divisors. If z is a zero divisor, we can find w such that zw=0. But normally the sequence (w xn) converges to wz=0, and so we obtain a contradiction with the fundamental properties of convergent sequences. It follows that the differential can no longer be well defined!
For physicists, I think it is better to study 3 or 4 years of math seriously before trying to study the structure of the Universe! We cannot change a math theory so easily! There is the risk of inconsistency, the risk of working with contradictory statements!
Dinu
For x+ey a natural equation in Z=x+ey is
(Z-x)(Z-x)=0
For Z= x+ty, tt=1
(Z-x)(Z-x)=yy
For Z=x+iy
(Z-x)(Z-x)+yy=0
The three cases, x+ty, tt=-1,0,1
have natural equations.
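The three cases can be checked symbolically in one stroke (a sympy sketch of my own): with Z = x + t y, the expression (Z-x)(Z-x) equals t^2 y^2, and substituting tt = -1, 0, 1 gives the three natural equations above.

```python
# (Z - x)^2 = t^2 y^2 for Z = x + t*y; substitute the three values of t*t.
import sympy as sp

x, y, t = sp.symbols('x y t')
Z = x + t * y
lhs = sp.expand((Z - x) ** 2)            # t**2 * y**2

assert lhs.subs(t**2, 0) == 0            # dual:       (Z-x)(Z-x) = 0
assert lhs.subs(t**2, 1) == y**2         # hyperbolic: (Z-x)(Z-x) = yy
assert lhs.subs(t**2, -1) + y**2 == 0    # complex:    (Z-x)(Z-x) + yy = 0
```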
As far as applied math goes, it took a long time to find applications
of complex variables; the same would be the case for other systems.
Since I don't like infinitesimals, I find e plays a good role there.
Want a commutative ring system supporting 3 variables (impossible for fields)?
Try Z = x + ty + ttz with ttt = 1 or -1.
Again the elements are invertible over most of (x,y,z) space. It has its own CR relations.
Again, in these rings most elements are invertible; there are only straight lines you stay away from. In the dual system you avoid the vertical line x=0. A log is only defined for x>0 (not all values).
ab=0 with both a and b nonzero only happens with matrices (or with a two-e term). However, since this language is, or can be, expressed in scalars, no confusion is possible: ee=0 is the rule, same as ii=-1.
log(x+ey) = log[x(1+ey/x)] = log x + ey/x;
thus 1/x is the derivative of log(x).
1/(x+ey) = (x-ey)/xx;
thus -1/xx is the derivative of 1/x.
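Both rules amount to truncating the Taylor series after the first-order term. A sympy check of my own, for f = log and f = 1/x:

```python
# Verify f(x + eps*y) = f(x) + eps*y*Df(x) to first order, for log and 1/x.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)

for f in (sp.log, lambda u: 1 / u):
    expansion = sp.series(f(x + eps * y), eps, 0, 2).removeO()
    expected = f(x) + eps * y * sp.diff(f(x), x)
    assert sp.simplify(expansion - expected) == 0

# So log(x + e y) = log x + e y/x  and  1/(x + e y) = 1/x - e y/x^2.
```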
Yes, we can construct such a structure. Let t be a root, not equal to 1, of the equation x^3-1=0. We define the set A = {x + yt + zt^2 : x,y,z real numbers}.
But x^3-1 = (x-1)(x^2+x+1), so t^2+t+1=0, and it follows that t^2 = -t-1. Therefore, in fact, every element of A is of the form a+bt, with a,b real.
Moreover, t is (-1+i sqrt3)/2 or (-1-i sqrt3)/2, hence every element of A is a complex number.
In math, A is denoted R(t), i.e. the field obtained by adjoining t to R, and this field is exactly the field of complex numbers. A and C are the same thing! Easy to prove!
PS: you don't like infinitesimals, but without infinitesimals, farewell to differential equations, PDEs, and all the equations of physics!
Dinu
Well, you are obviously biased toward the limit definition (most of us were taught that way), as you are toward the field concept. However, I think a few conceptual barriers arise because of this.
Even historically there is the Lagrange definition, basically what is contained
in the second term of a Taylor expansion (which is in agreement with the usual one).
I already showed you how this works for ln or 1/x.
Take the case xx:
(x+ey)^2 = xx + 2exy.
Hence 2x is the derivative of xx.
It is a truncation of Taylor.
Use the binomial expansion for any positive or negative power of x, get the correct result, or prove all the properties of a derivative.
You never have to think about very small nonzero numbers! It is algebra!
An ideal of the ring algebra, not an infinitesimal!
You are correct that, alternatively, everything CAN work out according to complex algebra.
It looks different.
Juan Weisz , there exists an isomorphism from the field of complex numbers x+iy (with the usual addition and multiplication) to the field of 2x2 matrices of the form
x y
-y x,
(with the usual addition and multiplication of matrices). If I have understood your reasoning correctly, you describe a ring that contains a field that is isomorphic to the previous field of 2x2 matrices which, in turn, is isomorphic to the field of complex numbers. This, I would say, is neither surprising nor (so) important.
Spiros
No, you either have a field, or a ring.
What you describe above is the Complex field, with isomorphic matrix.
The dual ring is x+ey, with ee=0;
that in turn has a matrix
x 0
y x
isomorphic.
I claim that
x+ty+ttz with ttt=1 or -1
is also a ring, but with 3 variables. It has an isomorphic cyclic or circulant matrix. This kind of matrix commutes with others of the same form, and the above gives the eigenvalues.
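A numerical sketch of this claim (my own construction): represent t by the 3x3 cyclic shift matrix P with P^3 = I, so elements are circulant matrices x I + y P + z P^2, which all commute, and whose eigenvalues are x + y w + z w^2 for w a cube root of unity.

```python
# Circulant representation of the 3-variable ring x + t y + t t z, ttt = 1.
import numpy as np

P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
I = np.eye(3)
assert np.allclose(np.linalg.matrix_power(P, 3), I)   # ttt = 1

def elem(x, y, z):
    return x * I + y * P + z * (P @ P)

A = elem(1.0, 2.0, 3.0)
B = elem(-0.5, 0.0, 4.0)
assert np.allclose(A @ B, B @ A)                      # commutative ring

# The "numbers" are the eigenvalues x + y w + z w^2, w^3 = 1:
w = np.exp(2j * np.pi / 3)
vals = sorted(np.linalg.eigvals(A), key=lambda v: v.imag)
expected = sorted([1 + 2 * wk + 3 * wk**2 for wk in (1, w, w.conjugate())],
                  key=lambda v: v.imag)
assert np.allclose(vals, expected)
```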
Perhaps people will help me understand the exact wording of the theorem
that prevents a field with 3 independent variables. It is obviously possible
for a ring, hence a kind of generalization.
What Dinu says is that by expressing the cube roots of unity and replacing, we are back in complex language. I hate to think what kind of structure that would be.
Ha ha, what is a physicist doing dabbling in this kind of math?
When I started, over 15 years ago, I knew nothing about abstract algebra,
but I learned. I always felt there was no real distance between algebra and calculus; that's my main motivation.
I do admit most results were already around to be found; that is why it is really hard to do new stuff. I would really have to give many citations. Most cannot see what I do, at first.
Well, you can have a complex system looking something like
(x-y) + i(y-z), but you would not really have three independent variables.
In the ring you do.
Juan,
To have or not have three independent variables in a ring A, where A is a subset of
C = {a+bi : a,b in R}, is not important at all.
Clearly such a ring has no zero divisors (it's clear why), Z[i] = {a+bi : a,b in Z} being an example, where Z = {...,-3,-2,-1,0,1,2,3,...}.
Let us talk again about your question! Can we obtain an alternative fundamental theorem of algebra for polynomials with complex coefficients? The answer is NO!
The correct question should have been: can we obtain a theorem, similar to the fundamental theorem of algebra, if we consider polynomials with coefficients in a ring A containing zero divisors (i.e. nonzero elements a,b with ab=0, including nonzero elements c with c^2=0)? Basically no, but it depends on what you mean by "similar"! You can get something weaker, without any uniqueness properties.
You can answer this question yourself by studying the theory of polynomial rings with coefficients in a ring with zero divisors.
To define differentials properly for functions f: E -> F, we need topological structures on E and F compatible with the algebraic structures of E and F. More accurately, in fact, we need normed structures on E and F.
Please define what "complex system" means! No such expression exists in mathematics!
(...including nonzero elements c with c^2=0... to correct a mistake in the above comment!)
Dinu
I won't argue too much about polynomial equations generating numbers.
You should at least see that in some cases polynomial equations suggest numbers other than i that are not real.
So some words of caution should be added to the fundamental theorem of algebra.
I'm satisfied that, as far as what it actually covers and states, the theorem is correct.
Will think about what you say for rings.
By complex system I simply mean one expressed in complex numbers.
In the same vein, I'm sometimes tempted to talk about the dual system or the hyperbolic system (tt=1).
The only norm in the dual system is xx, or the absolute value of x. You just avoid x=0. It is just about the same as the calculus of one real variable (what norm do you use there?). You have the (x,y) structure, or (f(x), y Df(x)), or as a matrix
f(x) 0
yDf(x) f(x)
For any function f(x+ey)=u+ev, the CR relations are u,x = v,y and u,y = 0:
a natural property of analytically expressed functions.
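These dual-system CR relations follow immediately from u = f(x), v = y Df(x). A sympy check of my own:

```python
# With f(x + e y) = u + e v, u = f(x), v = y*f'(x): check u,x = v,y and u,y = 0.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

u = f(x)
v = y * sp.diff(f(x), x)

assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0  # u,x = v,y
assert sp.diff(u, y) == 0                               # u,y = 0
```

This holds for an arbitrary differentiable f, which is the "structural" point being made.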
Well, if you can or want to, talk about (commutative) fields as limited to 2 variables.
Regards, Juan
Juan,
What do you understand by "variables"?
Is a field of the form Q(sqrt2) = {a+b sqrt2 : a,b rationals} "good"? How many "variables" does it have?
To respond to you: I'm not referring to rings whose elements are necessarily numbers. I refer to rings having zero divisors: matrix rings, Zp rings with p not prime, .... And I refer, too, to polynomials having elements of a ring as coefficients; mathematics can easily deal with polynomials having matrices as coefficients and as roots. No problem!
Sorry Juan, but the language you use has no meaning; it has nothing to do with the rigorous and clear language used in mathematics!
Let's consider the polynomial X^2-5X in Z6, which has 4 (four) roots: 0, 2, 3, 5.
In this example, we understand 0, 2, 3, 5 as classes modulo 6.
So, what fundamental theorem of algebra can we state in the ring Z6?
Dinu
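Dinu's example is easy to confirm by brute force (a one-line sketch of my own), and it illustrates how the root count can exceed the degree in a ring with zero divisors:

```python
# All roots of X^2 - 5X in Z6: four roots for a degree-2 polynomial.
roots = [z for z in range(6) if (z * z - 5 * z) % 6 == 0]
assert roots == [0, 2, 3, 5]
```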
Really? I thought I was using language in really common usage, even in high school.
Function of one variable, f(x):
real calculus of one variable.
Function of two variables, f(x,y):
calculus of more than one real variable, etc.
Complex analysis commonly uses 2 real variables: f(Z) = f1 + i f2,
where Z=x+iy is a complex number, but f1 and f2 are functions of x and y.
For example for f(Z)=ZZ
f1=xx-yy f2=2xy
All my examples involve nice continuous real variables, and you suddenly define sets of discrete integers modulo 6. But anyway, yes, a variable can be discrete.
Variable: a quantity used in algebra with a varying value. It can be in R, C,
N, or whatever, and can appear in functions or equations.
For your polynomial we can state that there are roots in N. The number of these
I cannot immediately compute, but you give 4, which does not agree with the degree of the polynomial.
The statement of the initial problem I gave is rather vague, too far from any solution yet. Perhaps impossible to realize; I do recognize that.
You cannot do algebra if you cannot grasp variables! Probably your confusion, my fault or not, lies elsewhere.
But I also know rings need extra precautions.
Allow me at least to state an inverse of x+ey when x is not zero.
Perhaps to clarify: in my last post I developed a function of a complex variable as f(Z) = f1 + i f2.
I was doing exactly the same thing with the dual system,
f(x+ey) = u + ev,
except that in this case u and v are really easy to compute in general. The structure is important to calculus.
In case you missed it: u = f(x), v = y Df(x);
when f(x) = x you are back to x+ey.
John
A mistake slipped past both of us.
Two exchange operators in succession give the identity, not the exchange again.
See this with a matrix
0 1
1 0
acting on column vector
x
y
Then this would be a hyperbolic representation, not an idempotent one
(TT=I).
A prior post of mine gives the correct matrix for the idempotent case.
Dear Dinu Teodorescu , your argumentation reminds me of the long and sarcastic discussion in the first part of the 19th century about hyperbolic geometry. Why do we need "imaginary geometry" when we already have real Euclidean geometry? Very many apparent contradictions were found when it was considered with Euclidean expectations, one of which being that the equidistant line should be straight. The true uselessness of imaginary geometry would be proved only if it were demonstrated self-inconsistent without any mention of Euclidean geometry. When models of hyperbolic geometry were constructed from elements of Euclidean geometry, it was somehow accepted. Still, Euclidean geometry is now "the main geometry" and all the rest are "non-Euclidean geometries". There is no reason for this other than a historical one. If you consider physical space, Euclidean geometry is your best friend. If you consider Newtonian space-time, Galilei geometry best describes it. If you consider Einsteinian space-time, Minkowski geometry is the one to consider. If you consider the Einsteinian space of velocities, hyperbolic geometry is the right one. If you consider the Earth's surface, elliptic geometry best fits the purpose.
Now, I see you try to demonstrate that complex numbers are "better" than dual numbers because they form a field while the duals are "only" a ring. Well, you could just as well introduce a new word to describe the special properties of dual numbers, say a "dual field", defined differently than a field. Then we could say: "Dual numbers are better than complex numbers because they form a dual field." Are we really speaking about "dual numbers instead of complex numbers", or about "dual numbers as a different base of algebra than complex numbers"? The argument that dual numbers are useless is already null, because they model another non-Euclidean geometry, the Galilei geometry.
Even if we speak strictly about algebra, prejudgments are dangerous here. At the risk of being far off-topic, I would add one more example. The usual numeric tower (N -> Q -> R -> C -> H -> O) is constructed with Peano's axioms at its base. One of them is the induction axiom: "If K is a set such that 0 is in K, and for every natural number n, n being in K implies that its successor is in K, then K contains every natural number". By replacing this axiom with its negation we obtain non-Archimedean natural numbers, in a way very similar to how non-Euclidean geometry is constructed. These non-Archimedean natural numbers have a whole hierarchy of infinities. If we denote the non-Archimedean naturals as N' and construct the new numeric tower, we obtain N' -> Q' -> R' -> C' -> H' -> O'. The numbers starting with the rationals have a whole hierarchy of infinitesimal numbers (or zeros). Bonus: the infinitesimal apparatus on non-Archimedean numbers is much easier than "epsilon-delta", because infinitesimal numbers are now true numbers, not just abstractions. So, the question is: what is a number? There is no reason to prefer Archimedean algebra over non-Archimedean algebra other than a historical one.
The very same "alternative basis" is also possible (even to a larger extent) in set theory and in all the foundational bases of mathematics.
See https://en.wikipedia.org/wiki/Nonstandard_analysis for analysis with non-Archimedean numbers.
Dear Alexandru Popa , I have nothing against dual numbers, but my opinion is that the degree of applicability of constructions based on dual numbers is very narrow.
Clearly we can speak only about "dual numbers as a different base of algebra than complex numbers", not about "dual numbers instead of complex numbers", as Juan Weisz seems to propose! And we also have a problem of the type: can we talk about dual numbers as long as we do not master or understand the fundamentals of traditional algebra?
No, I won't steal complex numbers from you or anyone else!
Nowadays they even speak of primary vectors a1 dx + a2 dy + a3 dz
and secondary ones a1 dx dy + a2 dx dz + ... ; Elie Cartan kind of stuff.
Mathematics is getting more complicated.
The story started with Grassmann and Peirce, followed by people like Clifford. "Systems of hypercomplex numbers" is common terminology; read Bell on the history of mathematics.
Dual numbers are about the simplest Grassmann algebra!
Dear Juan,
What you are trying to show is that mathematics is already very complicated when we start using first and second order differential forms (not vectors). Maybe so, but no one is forced to understand and use such things!
And if we construct another mathematics, based on dual numbers (rings of dual numbers instead of fields of numbers), how can we be sure that the notions we obtain as analogues of differential forms will be simpler and more usable?
Dinu
You think the following properties apply to differentials:
dx dx = 0 and dx dy + dy dx = 0?
Because those are the algebra rules for using the above expressions.
It is pure algebra, not too complicated.
But call them whatever you like.
Try the vector product through simple multiplication.
Well, only history shows the ultimate usage, as when Gibbs invented his vectors, which came into current use. At least you do 3 currently accepted things: addition, scalar product, and vector or wedge product.
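The "vector product through simple multiplication" can be made concrete in a few lines (my own encoding of the rules dx dx = 0, dx dy = -dy dx): multiplying two one-forms a1 dx + a2 dy + a3 dz and b1 dx + b2 dy + b3 dz, the dy dz, dz dx, dx dy coefficients of the result are exactly the components of the cross product a x b.

```python
# Wedge two one-forms over (dx, dy, dz) using dx dx = 0 and dx dy = -dy dx.
from itertools import product

def wedge(a, b):
    """a, b: coefficient triples over (dx, dy, dz). Returns the
    coefficients of the two-form on the basis (dy^dz, dz^dx, dx^dy)."""
    out = [0.0, 0.0, 0.0]
    basis2 = {(1, 2): (0, 1), (2, 1): (0, -1),   # dy dz = +,  dz dy = -
              (2, 0): (1, 1), (0, 2): (1, -1),   # dz dx = +,  dx dz = -
              (0, 1): (2, 1), (1, 0): (2, -1)}   # dx dy = +,  dy dx = -
    for i, j in product(range(3), repeat=2):
        if i == j:
            continue                              # dx dx = 0, etc.
        k, sign = basis2[(i, j)]
        out[k] += sign * a[i] * b[j]
    return out

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
# Cross product a x b = (2*6-3*5, 3*4-1*6, 1*5-2*4) = (-3, 6, -3)
assert wedge(a, b) == [-3.0, 6.0, -3.0]
```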
Juan Weisz
Please look at:
https://en.wikipedia.org/wiki/Differential_form
https://www.math.purdue.edu/~arapura/preprints/diffforms.pdf
https://mathworld.wolfram.com/Differentialk-Form.html
By dx and dy I understand two elements of L(R2,R), defined as follows:
dx(x1,x2)=x1; dy(x1,x2)=x2.
Moreover, the set {dx,dy} is an algebraic basis of the vector space L(R2,R). The elements of L(R2,R) are first order differential forms in two variables. If f=f(x,y) is a real differentiable function, then the differential of f at a point (a,b), i.e. df(a,b), is an element of L(R2,R).
The other reason I like to talk about vectors, or specifically vector fields,
is that vector analysis is done quite elegantly in E. Cartan's language.
For example, the differential of a one-form gives the components of the curl (or rotor)
of the vector in a dx dy, dx dz, etc. basis.
Cartan's procedure is generalizing, in that there does not seem to be a limit on the size of the vector. The double differential of a one-form is zero.
There are sources now presenting Maxwell theory in this language.
By the way, "field" is also a (different) term in physics, where different vectors are assigned at different points of a given space.
Juan,
You talk about notions that you do not understand at all, and make unacceptable confusions.
Perhaps you should get more specific. I do not claim total perfection,
but at least in physics I do have a long and successful career.
Vector fields are relevant to vector analysis; otherwise the grad, curl, etc. would all come out zero. I am rather expert in certain math branches; you cannot be expert in all of them.
But please use the simplest explanatory language possible. I can give you references.
For example, a one-form is
P dx + Q dy,
with differential
(P,1 dx + P,2 dy)dx + (Q,1 dx + Q,2 dy)dy
= P,2 dy dx + Q,1 dx dy
= (Q,1 - P,2) dx dy.
Look like a component of the curl?
So the line integral over the one-form is the surface integral over the curl-like component (Green's theorem for the plane).
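A quick numerical illustration of this (my own example, not from the thread): take P = -y, Q = x, so Q,1 - P,2 = 2, and the line integral of P dx + Q dy around the unit circle should equal the integral of 2 over the unit disk, i.e. 2*pi.

```python
# Green's theorem check: line integral of -y dx + x dy around the unit
# circle, approximated by the midpoint rule on a fine polygon, vs 2*pi.
import math

n = 100_000
line = 0.0
for k in range(n):
    t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2     # midpoint of the chord
    line += -ym * (x1 - x0) + xm * (y1 - y0)  # P dx + Q dy

area_integral = 2 * math.pi                   # integral of 2 over unit disk
assert abs(line - area_integral) < 1e-6
```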
Dear Juan,
1. It's a good thing that a physicist wants to know and understand some specific mathematical theories at a higher level. Congrats for that! But, as mathematics requires, all knowledge in the domain must be rigorously presented and explained when we talk about it.
2. On Green Theorem
If X is a topological space, for a subset A of X we define the closure of A, cl(A), and the interior of A, int(A). We also define the border (frontier) of A, fr(A) = cl(A) - int(A).
For a subset D of R2 (the plane, which is a topological space), if D is bounded and int(D) is non-empty, fr(D) is a curve in the plane, bordering the set D.
If P=P(x,y) and Q=Q(x,y) are two differentiable functions defined on cl(D), we have:
the line integral on fr(D) of (P dx + Q dy) = the double integral on D of (Q,1 - P,2) dx dy.
This is known as the Green theorem, or the Green-Riemann formula.
(Q,1 - P,2) dx dy is the differential of the form P dx + Q dy, as you correctly said in the above comment.
Yes, all is OK now!
The Green theorem is a result of Stokes type. The curl makes no sense here because, working in the plane, we have only two variables.
3. Taking into account the above discussion, I ask you: what do you understand by dx and dy in algebraic language?
Please understand, you are an OK person, but when we talk about mathematics I require that everything be rigorous, clear and correct! Without rigor we don't have mathematics, only fairy tales!
Cordially,
Dinu
Well, different languages can deal with similar objects in ways that seem different, but there is considerable overlap.
The original differentials of calculus are something like very short portions of the real line, dx or dy; so one would never think (dx)(dx) is ever exactly zero. In abstract algebra you do exactly this; think of them as very short parallel vectors, whose wedge product is then zero.
Grassmann writes x1 e1 + x2 e2 + ... + xn en,
where the ei are abstract elements. Here it would be ei ei = 0,
and ei ej + ej ei = 0 for i different from j.
The rules imposed are there to make it an associative linear algebra. I usually like commutativity as well. That is how hypercomplex numbers developed.
If you add a dependence on spatial coordinates to x1, x2, etc., then you can have a similarity to vector fields.
Also, you then try to examine functions over these, i.e. f(x1 e1 + x2 e2), to see what results under different rules for the ei.
Yes, physicists almost never try to actually prove things, but tell intuitively whether they are on the right track. I suppose that generates some reliance on mathematicians.
What they call geometric algebra is like a giant reservoir incorporating
many of these different outlooks. To use it requires a lot of common sense.
A modern confusion is the overlap of different languages. That is how I see it.
How do you see it?
What can I say? In usual mathematical language, the notation e1, e2, ..., en is used to denote the elements of a basis in an n-dimensional vector space, and we have ei ei = 1, ei ej = ej ei = 0 (for i =/= j), the basis being called orthonormal.
Grassmann writes x = x1 e1 + x2 e2 + ... + xn en to describe the decomposition of a vector x relative to an orthonormal basis {e1, e2, ..., en} of a finite dimensional Hilbert space. Grassmann's results were mainly obtained in the domain of the classical theory of Banach and Hilbert spaces.
If we consider ei ei = 0, then geometrically it would be as if the vector ei were orthogonal to itself!
Many constructions can be developed but, as I said in a previous comment, they do not produce spectacular results, they are not applicable, and there is a risk of reaching inconsistent and/or contradictory results.
In mathematics we cannot accept results that cannot be proved!
Juan Weisz Clearly, if we don't use a common language, it exists the risk nobody understands nothing!
ei is equal to ei, the same vector, so there is no chance they are orthogonal, because then they would be different. Perhaps parallel, with a vector product; then eiei = 0.
It depends; I often understand general descriptive language better than technical math notation. That is one thing you are better at than I am.
Clearly the geometry of an orthonormal basis is different from the algebra we were talking about previously. With ei.ej = delta(i, j), equal to 1 when i = j, you are using the inner product.
The history I have read is of Grassmann studying such forms to generalize the kinds of algebra you may have.
Some of my ideas about binary or higher number systems come from the math of QM. If you imagine the numbers displayed as a whole set,
and you happen to use some particular number, the tacit assumption you make is that you could simultaneously use any other number of the same set with the same confidence. I'm playing around with the idea of numbers as eigenvalues.
QM only recognizes as results of a valid measurement the eigenvalues of quantum operators, which could be matrices. So one finds three or four number systems where the numbers are in fact eigenvalues, and the operators and numbers commute with each other, for different values of the parameters and the same essential form. So this seems right on track with regard to these systems.
QM warns you that if the operators or matrices do not commute, then the matrices are not simultaneously diagonalizable by the same transformation,
and simultaneous eigenvalues do not exist. In QM the result means uncertainty; in math, the inability to define a complete number system unless conditions are right.
To illustrate with complex numbers:
x -y
y x
has eigenvalues x+iy and x-iy.
Seems reasonable?
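The eigenvalue claim above can be checked numerically; this is a hypothetical sketch (the matrix and values are my own choice), not anyone's posted code:

```python
import numpy as np

# Check of the claim above: the real 2x2 matrix [[x, -y], [y, x]]
# represents the complex number x + iy, and its eigenvalues
# are x + iy and x - iy.
x, y = 3.0, 2.0
M = np.array([[x, -y],
              [y, x]])
vals = sorted(np.linalg.eigvals(M), key=lambda v: v.imag)
assert np.allclose(vals, [x - 1j*y, x + 1j*y])
print("eigenvalues of [[x,-y],[y,x]] are x - iy and x + iy")
```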
Yes, it seems reasonable! But we must remark that, without a consistent theory of complex numbers, we can't approach the notion of an eigenvalue of an operator (of a matrix).
I would object against the attitude "let mathematicians do the math". Very many truly useful mathematical constructions were developed in somewhat informal language by engineers and physicians. The very existence of Galilei space-time shows it is free of contradictions. As soon as dual numbers model this geometry, they are as self-consistent as the geometry itself.
As for the foundations of mathematics (number theory, set theory, geometry, analysis, etc.), no such foundation can be proven consistent. This is one of the famous Hilbert problems, which was resolved (negatively) by Gödel.
Dear Alexandru Popa , a physician is a doctor. I think you mean physicists. If this is so, I would like to add that there exist cases where new mathematics has been developed during the formulation of quantum field theories. An example of this is the work of Edward Witten.
However, a minimum amount of knowledge in mathematics is necessary in order to develop new ideas, even in an informal and non-rigorous way. For instance, the ability to make clear mathematical statements is essential.
Yes, I know there are deep problems, such as that any arithmetical system is incomplete, because new questions can always be posed for which there is no answer. Or, if an answer is given, the field splits to generate new questions.
This sort of problem is probably way beyond the capability of most, I suspect, and is widely ignored by the community. A few dedicated specialists
perhaps attempt this.
See if someone can figure out what goes on with this
Take some not very trivial identity like
xx-yy=(x+y)(x-y)
Take the total differential of both sides
2x dx -2y dy = (dx+dy)(x-y)+(x+y) (dx-dy)
Now ignore dx, dy and just replace them by u and v, new variables.
Get
2xu-2yv = .....
Now you just generated a new generalized identity with 4 variables!
Keep on doing the same thing as much as you like!
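The identity-generating trick above can be verified symbolically; a small sketch (variable names are mine), assuming sympy is available:

```python
import sympy as sp

# Sketch of the "differentiate an identity" trick above: from
# x^2 - y^2 = (x+y)(x-y), taking total differentials and renaming
# dx -> u, dy -> v suggests the 4-variable identity
#   2xu - 2yv = (u+v)(x-y) + (x+y)(u-v)
x, y, u, v = sp.symbols('x y u v')
lhs = 2*x*u - 2*y*v
rhs = (u + v)*(x - y) + (x + y)*(u - v)
print(sp.expand(lhs - rhs))  # 0, so the generated identity holds
```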
Well, we all knew there is no upper limit to complexity...
I was ignorant of a relation between dual numbers and the Galilean transformation.
Perhaps you can write
xp=x-vt
tp =t
as such a matrix form
x 0
y x
in some way. Looks similar.
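The resemblance suggested above can be made concrete: with ee = 0, multiplying t + ex by 1 - ev reproduces the Galilean boost. A minimal sketch; the Dual class and all names are my own illustration:

```python
# With e*e = 0, (1 - e*v)(t + e*x) = t + e*(x - v*t),
# i.e. the Galilean boost x' = x - vt, t' = t.
class Dual:
    def __init__(self, a, b):          # a + e*b with e*e = 0
        self.a, self.b = a, b
    def __mul__(self, o):
        # the e*e cross term is dropped, since e*e = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __repr__(self):
        return f"{self.a} + e*{self.b}"

t, x, v = 2.0, 10.0, 3.0
boost = Dual(1.0, -v)                  # 1 - e*v
event = Dual(t, x)                     # t + e*x
print(boost * event)                   # 2.0 + e*4.0  ->  t' = 2, x' = 10 - 3*2 = 4
```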
All those periods of doubt and rigorization of math have left the practitioners with very heavy technical baggage, and it is hard to distill for outsiders.
It is not clear whether that baggage is 100% of use.
On the question of rings: one definition never mentions even the possibility of inversion; another says inversion is not available for all elements. Still another approach says you cannot have any two nonzero elements, equal or not, which multiplied together give zero, and then you have an integral domain (to confuse you with one more definition).
Of course the numbers 1, 2, 3, ... are invertible, but this does not count, because the result is outside the domain of discussion, so this is also a ring.
So I still insist: what theorem prevents commutative fields from existing with more than 2 independent variables? I.e., the famous "beyond complex numbers you cannot go". I suppose it says that and not something else.
Dinu, others
The equivalence of algebra and the calculus is the point of using dual algebra.
That's why I like it. No old-fashioned infinitesimals in it.
Juan,
It's impossible to build a theory of differentiation or a theory of integration without having an algebraic structure on R (in the case of real functions of one real variable), on Rn (real functions of n variables), or on C (complex functions) - clearly I am talking here about the field structure for R and C and the vector-space structure for Rn.
That does not mean algebra and calculus are equivalent. It means compatibility between the algebraic structure of R, Rn, C and the theory of differentiation (integration) for the above-mentioned functions.
If we use dual algebra, we must build another theory of calculus for such functions. And this theory of calculus has other rules! For example, the Cauchy-Riemann relations are no longer valid as we know them now.
The question is: why work with dual algebra? Does it bring any benefit? Or are the properties we obtain in this context poorer and less applicable!?
The way I see it, I do exactly what is demanded in calculus, in a different way.
Well, let's talk about the calculus of functions of one real variable, the simplest case.
I interpret x and y in x+ey as real. In particular, x is the real axis.
So say you have f(x) and you want to find the derivative. Of course it should be a differentiable function, or we are in trouble.
Let's take f(x) = xx, a simple case to illustrate.
The formula now is: take
f(x+ey) = (x+ey)(x+ey).
Now here is your algebra structure:
= xx + ey 2x.
So the answer is just 2x.
y is just arbitrary, different from zero.
Try this structure on any reasonable elementary function you like.
An integral is just the antiderivative with an arbitrary constant.
Do you see anything new to build? This actually just follows the Lagrange definition.
Well, it's pretty much equivalent... simpler, needing only algebra.
All properties of a derivative are met.
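The recipe above is essentially forward-mode automatic differentiation; a minimal sketch (the class and function names are my own), assuming ee = 0:

```python
# f(x + e*y) = f(x) + e*y*f'(x): the e-part divided by y recovers f'(x).
class Dual:
    def __init__(self, a, b):           # a + e*b with e*e = 0
        self.a, self.b = a, b
    def __mul__(self, o):
        # product rule emerges from dropping the e*e term
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def derivative(f, x, y=1.0):
    """Evaluate f at x + e*y and read the derivative off the e-part."""
    z = f(Dual(x, y))
    return z.b / y                      # y arbitrary nonzero; it cancels out

print(derivative(lambda z: z * z, 5.0))  # 10.0, i.e. 2x at x = 5
```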
Of course a student may ask about the geometrical significance (as a slope).
Then you explain it your way.
Of course this goes best along with Taylor expansions, which are equivalent to algebra if there is no remainder term (as for x^n).
Please explain what in calculus this approach may be missing.
Sorry Juan, but the way you pose the problem reveals a very poor understanding of the definitions of mathematical concepts. And in this case, any discussion is useless!
Do you really think that a Taylor series expansion is of any use if the respective "series" has only a few terms? Well, that means you don't know the meaning of a Taylor series, i.e. approximation by polynomials!
I'm just not going to write a comment that is a mathematical analysis course! And if I did, I think it would be useless anyway!
Well, any time I do an analysis of a Taylor expansion of x to the power n
and compare it to the binomial expansion of the same power, exactly the same result is obtained. So for any power series expansion, finite or convergent, this works fine, and a Taylor expansion does not change its first few terms.
Probably you analysts can invent some exorbitant functions where this is not true - a non-convergent transcendental function, perhaps. We don't use divergent series. I know you can rearrange the terms of some series to your liking.
Even trig functions like cos always have 1 - xx/2 as their first terms. What do you do to change that? The ordering of terms? In any case, the usual order.
No, for me a Taylor series has nothing to do with approximation by polynomials, unless you cut it off and say that is the approximation. So what? Fourier expansions are also stable as to their first terms.
See the discussion by Arnold Sommerfeld, Partial Differential Equations in Physics, in the chapter he devotes to Fourier expansions.
It is not clear to me what you are trying to say.
An example
sin(x+ey) = sin(x) cos(ey)+cos(x) sin (ey)
= sin(x) +ey cos(x)
The derivative of the sin is the cosine.
The thing is, e will knock out all the higher terms, whatever you want to do.
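The sine rule above can be checked numerically; a small sketch under the truncation sin(ey) = ey, cos(ey) = 1 that ee = 0 forces (the function name is my own):

```python
import math

# sin(x + e*y) = sin(x) + e*y*cos(x): the e-part over y is the derivative.
def dual_sin(a, b):
    """Return (real part, e-part) of sin(a + e*b)."""
    return (math.sin(a), b * math.cos(a))

real_part, e_part = dual_sin(1.0, 2.0)
assert abs(e_part / 2.0 - math.cos(1.0)) < 1e-12
print("e-part / y equals cos(x), the derivative of sin")
```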
Dear Juan Weisz , how many roots does a quadratic equation a*x^2 + b*x + c = 0 have in dual numbers?
I write in terms of z, not x, but quadratic.
If
(z-x)(z-x) = 0,
the case of degenerate roots,
I can write
z = x+ey or z = x+e(T)y,
two possible solutions, where e(T)e(T) = 0 and ee = 0.
I think all other cases are covered by real or complex numbers.
y is an arbitrary real, different from zero. This is similar to using i, -i, a pair of solutions.
However, in any case of complex roots I could use i = e-e(T) instead.
e e(T) + e(T) e = 1 must be met.
The e's are, like i, abstract or matrices.
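The relations quoted above have a concrete 2x2 matrix realization, which can be checked directly; a sketch with my own choice of matrices:

```python
import numpy as np

# e has a 1 in the upper-right corner; e(T) is its transpose.
# Then ee = 0, e(T)e(T) = 0, e e(T) + e(T) e = I, and i := e - e(T)
# satisfies ii = -I, as used above.
e = np.array([[0, 1],
              [0, 0]])
eT = e.T
assert not np.any(e @ e)                                   # ee = 0
assert not np.any(eT @ eT)                                 # e(T)e(T) = 0
assert np.array_equal(e @ eT + eT @ e, np.eye(2, dtype=int))
i = e - eT                                                 # [[0, 1], [-1, 0]]
assert np.array_equal(i @ i, -np.eye(2, dtype=int))
print("i = e - e(T) squares to -1")
```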
The principle of complete induction can be used in lieu of the comparison
to Taylor series, where the first derivative appears early in the expansion.
If the technique works, say, for f(x) = x, we can assume it works for x to the power n and prove it works for x to the power n+1. Actually it works for a constant function also.
Or prove it directly from the second term of a binomial expansion.
Then it will work for any finite power series, etc., and you do the same for negative powers, rational functions, etc.
Well, knowing analysts, they will still be suspicious...
Juan Weisz , unfortunately, algebraic equations in dual numbers are much more complex. For example, the quadratic equation (if you prefer z instead of x):
z^2 = 0
has infinitely many solutions. Any dual number z = ey, where y is any real number, satisfies the equation:
(ey)(ey) = e^2 y^2 = 0·y^2 = 0.
This is only one example of the extra attention which must be paid with dual numbers. You cannot expect them to behave similarly to complex numbers. The reasons were already pointed out: they do not form a field.
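Alexandru's point can also be seen in the matrix picture; a sketch (the values are mine) where z = ey is represented by a matrix with y in the lower-left corner:

```python
import numpy as np

# For any real y, the matrix form of z = e*y squares to zero,
# so z^2 = 0 has infinitely many dual solutions.
for y in (0.5, 1.0, -3.0, 100.0):
    Z = np.array([[0.0, 0.0],
                  [y,   0.0]])        # matrix form of e*y
    assert not np.any(Z @ Z)          # Z^2 = 0 for every y
print("z = e*y solves z^2 = 0 for all tested y")
```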
Alexandru
Yes, the example you gave is a special case of mine.
I like to keep x nonzero, for invertibility.
Yes, I'm aware it is a ring, not a field.
So what? As long as x is not zero, I can invert all elements, the same as in a field. In ordinary algebra you cannot divide by zero either.
These rings usually have straight lines as their limitation, almost nothing compared to the full plane.
Regards, Juan
Another simple example of
F(x+ey) =F(x) +ey DF(x)
Let F(x) = exp(x)
Calculate now F(x+ey)= exp(x+ey) = exp(x) exp(ey)
= exp(x) (1+ey) = exp(x) +ey exp(x)
The derivative of exp(x) is exp(x)
Perhaps some may wonder how an "impoverished" algebra such as
sin(ex) = ex, cos(ex) = 1, exp(ex) = 1+ex, ln(1+ex) = ex does this job,
but it does work out.
Take it from Lagrange, with h -> he in Taylor.
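The exp example above, in the same sketch style as before (the names are my own, not anyone's posted code):

```python
import math

# exp(x + e*y) = exp(x)*exp(e*y) = exp(x)*(1 + e*y), since e*e = 0.
# The e-part over y recovers the derivative of exp, which is exp itself.
def dual_exp(a, b):
    """Return (real part, e-part) of exp(a + e*b)."""
    ea = math.exp(a)
    return (ea, b * ea)

real_part, e_part = dual_exp(2.0, 3.0)
assert abs(e_part / 3.0 - math.exp(2.0)) < 1e-12
print("e-part / y equals exp(x), the derivative of exp")
```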
To my knowledge, the key distinction between a field and a ring is that in a field you can invert all nonzero elements, but not in a ring (while maintaining algebraic closure).
However, there are rings with some invertible elements that maintain such closure.
There is no problem with closure under addition and multiplication (and it is commutative).
In algebra,
(x+h)^n = x^n + h n x^(n-1) + ...
The second term on the rhs contains the first derivative. Algebra, not magic. Lagrange knew this.
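The binomial observation above, checked symbolically for a concrete power (n = 5 is my own choice):

```python
import sympy as sp

# (x + h)^5 = x^5 + 5*h*x^4 + ...: the coefficient of h is exactly
# the derivative of x^5, as the Lagrange algebraic viewpoint says.
x, h = sp.symbols('x h')
coeff = sp.expand((x + h)**5).coeff(h, 1)
print(coeff)                    # 5*x**4
assert coeff == sp.diff(x**5, x)
```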
If someone says that there are infinitely many different matrices that satisfy MM = 0,
then I could reply, for the complex case, that the same is true for
MM + I = 0.
Quadratic equations over matrices are an open problem in math.
The ye business is all on a single vertical line, x = 0.
y is arbitrary, somewhat similar to an h in Taylor theory.
Please see my research paper:
Factorization of a linear partial differential equation into first-order and quadratic forms. It rests purely on the fundamental theorem of algebra.
SRT
It is of course quite reasonable that, using linear operators which commute
(as long as you don't mix, say, d/dx with x as in QM), you get results similar
to the fundamental theorem of algebra.
I'm now using this to better understand possible generalizations of the Lorentz transformation, using the hyperbolic system of
x y
y x
matrices, or the isomorphic x+ty with tt = 1 (y = ct); xx - cctt is invariant.
Multiply by
gamma -gamma beta
-gamma beta gamma
with determinant equal to 1; any power of it also has det = 1.
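The determinant claim above can be checked numerically; a sketch with beta = 0.6 (my own choice of value):

```python
import numpy as np

# The boost matrix [[g, -g*b], [-g*b, g]] with g = 1/sqrt(1 - b^2)
# has determinant g^2*(1 - b^2) = 1, and so does any power of it.
b = 0.6
g = 1.0 / np.sqrt(1.0 - b*b)
L = np.array([[g,    -g*b],
              [-g*b,  g  ]])
assert abs(np.linalg.det(L) - 1.0) < 1e-12
assert abs(np.linalg.det(np.linalg.matrix_power(L, 3)) - 1.0) < 1e-9
print("det = 1 for the boost and its powers")
```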
I don't understand why others say they use
conformal transformations. Angle-preserving? Complex?
Anyone?
John
Sure, the mixed system C and Z is a ring, not a field.
Viewed as a matrix system you are of course in trouble, with infinitely many solutions,
as stated by Alexandru (for quadratic equations).
Even then you may single out
the solution you want for z; the simplest has a 1 in the lower-left corner and all the rest 0.
You can use the isomorphism in matrix form
x 0
y x
in lieu of the dual algebra (x+zy). Check it out.
The full solution of the quadratic
equation over matrices is not known yet.
But just as a single number system, with abstract z, the problem of multiple solutions
has not appeared problematic for me yet. An equivalent matrix for z has a 1 in the upper-right corner;
call it z(T) instead.
You are probably correct that there are whole new properties.
For C extended to matrices, similar problems emerge: i is not unique; -i is also valid.
The only problem with the fundamental theorem of algebra is that there may be other solutions
besides those given.
Take the quadratic equation axx + bx + c = 0.
Any time this has zero discriminant, as in
axx + bx + (bb/4a) = 0,
one finds the solution
x = -(b/2a) + ye,
with y any real number, where ee = 0: the dual ring.
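The zero-discriminant claim above can be verified in dual arithmetic; a minimal sketch (the class and the values a = 2, b = 8 are my own):

```python
# When the discriminant is zero, a*x^2 + b*x + b^2/(4a) = 0
# is solved by x = -b/(2a) + e*y for ANY real y.
class Dual:
    def __init__(self, p, q):          # p + e*q with e*e = 0
        self.p, self.q = p, q
    def __add__(self, o):
        return Dual(self.p + o.p, self.q + o.q)
    def __mul__(self, o):
        return Dual(self.p * o.p, self.p * o.q + self.q * o.p)

a, b = 2.0, 8.0
c = b*b / (4*a)                        # discriminant b^2 - 4ac = 0
for y in (1.0, -2.5, 7.0):
    x = Dual(-b/(2*a), y)              # candidate dual root
    r = Dual(a, 0)*x*x + Dual(b, 0)*x + Dual(c, 0)
    assert r.p == 0.0 and r.q == 0.0   # the equation holds for every y
print("x = -b/2a + e*y is a root for all tested y")
```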
John
If you swap twice you get the identity?
Properties of x+ey:
they can be imagined in 2 ways.
1. It covers the whole plane, as C does.
2. Just the x axis plus an infinitesimal.