If <., .>_1 and <., .>_2 are two scalar products arbitrarily chosen on a finite-dimensional vector space, then there exists an automorphism P of this vector space such that <x, y>_2 = <Px, Py>_1 for all vectors x, y.
Is this theorem (which I think I have invented) known?
The problem has a rigorous solution using the well-known fact that every separable Hilbert space possesses an orthonormal basis. Thus the two linear spaces are of the same dimension and therefore they are isometric. The isomorphism can be established, e.g., by the (unique) linear map P which sends element number j of the first basis onto element number j of the second basis, for every j.
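This basis-to-basis construction can be sketched numerically. The following is my own illustration, not part of the discussion above: it assumes the two scalar products on R^3 are represented by symmetric positive definite Gram matrices G1 and G2 (made-up example data), orthonormalizes the standard basis with respect to each, and defines P as the map sending the j-th vector of one orthonormal basis to the j-th vector of the other.

```python
import numpy as np

def gram_schmidt(G):
    """Orthonormalize the standard basis w.r.t. the inner product <x, y> = x.T @ G @ y."""
    n = G.shape[0]
    basis = []
    for v in np.eye(n):
        for b in basis:
            v = v - (b @ G @ v) * b       # subtract the G-projection onto b
        v = v / np.sqrt(v @ G @ v)        # normalize in the G-norm
        basis.append(v)
    return np.column_stack(basis)         # columns form a G-orthonormal basis

# Two scalar products on R^3, given by SPD Gram matrices (illustrative data only)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); G1 = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); G2 = B @ B.T + 3 * np.eye(3)

E1 = gram_schmidt(G1)   # columns e_j: orthonormal for <., .>_1
E2 = gram_schmidt(G2)   # columns f_j: orthonormal for <., .>_2

# P maps f_j to e_j; then <Px, Py>_1 = <x, y>_2 for all x, y
P = E1 @ np.linalg.inv(E2)
assert np.allclose(P.T @ G1 @ P, G2)
```

Since E1^T G1 E1 = I and E2^T G2 E2 = I, the map P = E1 E2^{-1} satisfies P^T G1 P = G2, which is precisely the statement <Px, Py>_1 = <x, y>_2 in matrix form.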
Please explain: what is the convex cone of symmetric positive definite matrices for?
Besides, if P is such an appropriate isomorphism, then -P is appropriate too. If every element of a cone containing only (weakly) positive matrices had this property, then the cone would consist of the operator 0 alone, which cannot preserve any scalar product.
It is true that this representation theorem can be seen as a consequence of the Cholesky decomposition theorem.
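To make the Cholesky connection concrete, here is a short sketch (my own, with made-up Gram matrices): writing G1 = L1 L1^T and G2 = L2 L2^T, the automorphism P = L1^{-T} L2^T carries the first scalar product onto the second.

```python
import numpy as np

# Two scalar products on R^4, given by SPD Gram matrices (illustrative data only)
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); G1 = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); G2 = B @ B.T + 4 * np.eye(4)

# Cholesky factorizations G = L L^T, with L lower triangular
L1 = np.linalg.cholesky(G1)
L2 = np.linalg.cholesky(G2)

# P = L1^{-T} L2^T, obtained by solving L1^T P = L2^T
P = np.linalg.solve(L1.T, L2.T)

# P^T G1 P = L2 L1^{-1} (L1 L1^T) L1^{-T} L2^T = L2 L2^T = G2
assert np.allclose(P.T @ G1 @ P, G2)
```

This is why the theorem can be read as a weaker corollary of Cholesky's decomposition: the decomposition hands you an explicit P for any pair of positive definite Gram matrices.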
I want to insist on the fact that when classical physics states that space is Euclidean and of dimension three, this Euclidean geometry, which should be established by measurements of lengths and angles, must be qualified as a proper geometry, because from a mathematical point of view several Euclidean geometries can be defined on the same affine space, and they are connected by the proposed theorem.
One could also define non-Euclidean geometries on the same space (I mean that to define a geometry on a space, it is enough to assign, by definition, a measure to each parameterized curve segment that exists on this space, and to state that a straight line between two points is simply a curve segment of minimum length connecting them).
Do I understand you correctly that you want to say that, given the distances, the geometry is defined uniquely? One has to agree with you, despite the fact that a strict definition of the geometry is not recalled here. BUT how is this related to your original question? Can you explain this, please?
I wanted to point out that a person might consider Cholesky's decomposition a stronger version of the same theorem, and ask what the interest of this weaker statement is.
I then wished to point out that this formulation makes it possible to look differently at the formulas of kinematics. Indeed, it allows us to interpret the Lorentz transformation of relativistic kinematics in a different way (compared to Minkowski's four-dimensional formulation), together with a generalization that I have proposed and which calls into question the formulation of the theory of general relativity.
I confess that my question is a pretext to promote a new kinematics, which I presented in the introductory comment of the following project: https://www.researchgate.net/project/Observational-reference-frames-and-electromagnetism
For someone who is mainly concerned with quantum mechanics and, hence, with complex vector spaces, it is surprising how often one finds scientists for whom "vector space" and "real vector space" mean the same thing. Even software libraries dealing with linear algebra often don't make explicit that they ignore complex matrices. For scalar products in complex vector spaces one has to be aware that these are not bilinear forms but sesquilinear ones.
It is surprising how often mathematicians jump to complex numbers even where they are not at all necessary. Mathematics with real numbers is still mathematics, and may be more than mere arithmetic.
Your "theorem" is certainly wrong. You don't define precisely what an inner product is, but I suppose that you mean, as usual, a non-degenerate bilinear form on a (finite-dimensional) vector space V over a field K, or the associated quadratic form. Then here is an obvious counterexample: over the reals R, consider the two inner products <x, y>_1 = x_1 y_1 + ... + x_n y_n (the classical Euclidean inner product) and <x, y>_2 = x_1 y_1 + ... + x_{n-1} y_{n-1} - x_n y_n (the Lorentz inner product of special relativity). It is obvious that the Lorentz quadratic form admits non-null isotropic vectors X, i.e. such that <X, X>_2 = 0, whereas the Euclidean one admits none, so there cannot exist any automorphism P such that <Px, Py>_1 = <x, y>_2. More generally, a quadratic form on a real vector space is characterized (with obvious notation) by its signature (+, +, ..., -, ..., -): this is a theorem of Sylvester. Of course your result holds true over an algebraically closed field (no need to go to C), but this is a trivial consequence of the "diagonalization" of a quadratic form. The classification of inner products by invariants (not necessarily a single one) depends of course on the base field K, and accordingly cannot have a general solution. Such a classification has been achieved over number fields, i.e. field extensions of finite degree of the rationals Q, but this is (hard) algebraic number theory.
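The counterexample and Sylvester's law of inertia can be checked numerically. The following sketch (my own illustration, on R^2 for brevity) exhibits an isotropic vector of the Lorentz form and verifies that the signature survives a congruence G -> P^T G P, so no automorphism can carry the Lorentz form onto the Euclidean one.

```python
import numpy as np

# Euclidean and Lorentz Gram matrices on R^2
G_euc = np.diag([1.0, 1.0])     # <x, y>_1 = x1 y1 + x2 y2
G_lor = np.diag([1.0, -1.0])    # <x, y>_2 = x1 y1 - x2 y2

# The Lorentz form has a non-null isotropic vector; the Euclidean one does not
X = np.array([1.0, 1.0])
assert X @ G_lor @ X == 0 and X @ G_euc @ X > 0

def signature(G):
    """Counts of positive and negative eigenvalues of a symmetric matrix G."""
    w = np.linalg.eigvalsh(G)
    return (int(np.sum(w > 0)), int(np.sum(w < 0)))

# Sylvester's law of inertia: congruence by any invertible P preserves the signature
rng = np.random.default_rng(2)
P = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # generic, hence invertible, matrix
assert signature(P.T @ G_lor @ P) == signature(G_lor) == (1, 1)
assert signature(G_euc) == (2, 0)
```

Since the signatures (1, 1) and (2, 0) differ, no invertible P can satisfy P^T G_lor P = G_euc, which is the matrix form of the counterexample above.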
By "scalar product", I mean a positive definite symmetric bilinear form on a real vector space (I want to use the result for a description of classical kinematics, and for a three-dimensional approach to the kinematics of the theory of special relativity).
Thus, your counterexample, which is based on the use of a non-zero isotropic vector, shows one of the limits of the mathematical generalization of a result suggested by the analysis of an intuitive description of the real world.
As you have pointed out, the result is trivial if we only consider, as I propose, quadratic forms with positive definite matrices; and I think that what is really new is the use that can be made of it to interpret the Lorentz transformation differently, within a theory of relativistic kinematics in which we give up Minkowski's formalism.