The covariance matrix is perhaps one of the most informative components of a bivariate Gaussian distribution. Each element of the covariance matrix gives the covariance between one pair of the random variables. The covariance between two random variables X1 and X2 is mathematically defined as Sigma(X1, X2) = E[(X1 − E[X1])(X2 − E[X2])], where E[X] denotes the expected value of a given random variable X. Note that the diagonal elements are simply the variances of X1 and X2. Intuitively speaking, by observing the off-diagonal elements of the covariance matrix we can easily imagine the contour drawn out by the two Gaussian random variables in 2D. Here’s how:
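As a quick sanity check on the definition above, here is a minimal NumPy sketch (the sample data is illustrative, not from the text) that computes the covariance directly from E[(X1 − E[X1])(X2 − E[X2])] and compares it to the corresponding entry of NumPy's covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative samples of two correlated random variables X1 and X2
x1 = rng.normal(size=10_000)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=10_000)

# Covariance straight from the definition: E[(X1 - E[X1]) * (X2 - E[X2])]
manual_cov = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))

# NumPy's 2x2 covariance matrix; bias=True divides by N,
# matching the plain sample mean used above
cov_matrix = np.cov(x1, x2, bias=True)

print(cov_matrix)   # manual_cov appears at positions [0, 1] and [1, 0]
print(manual_cov)
```

The diagonal entries of `cov_matrix` are the variances of x1 and x2, while the off-diagonal entry matches `manual_cov`.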
The off-diagonal values represent the joint covariance between the two random variables. If the value is positive, there is positive covariance between the two random variables, which means that if we move in a direction where x1 increases then x2 will tend to increase in that direction as well, and vice versa. Similarly, if the value is negative, x2 will tend to decrease in the direction in which x1 increases.
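The sign behaviour described above can be sketched with NumPy by sampling bivariate Gaussians whose off-diagonal entries are positive and negative (the specific values 0.8 and −0.8 are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
mean = [0.0, 0.0]

# Positive off-diagonal entry: x2 tends to increase along with x1
cov_pos = [[1.0, 0.8], [0.8, 1.0]]
# Negative off-diagonal entry: x2 tends to decrease as x1 increases
cov_neg = [[1.0, -0.8], [-0.8, 1.0]]

pos = rng.multivariate_normal(mean, cov_pos, size=10_000)
neg = rng.multivariate_normal(mean, cov_neg, size=10_000)

# The empirical off-diagonal covariance recovers the sign we put in:
# the contour of the first cloud tilts upward, the second downward
print(np.cov(pos.T)[0, 1])   # close to +0.8
print(np.cov(neg.T)[0, 1])   # close to -0.8
```

Plotting the two point clouds would show the corresponding elliptical contours tilted in opposite directions.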