The idea is that for two or more variables to be significantly related, there must be similarity in their gradients. Then, if the lines produced by the observations are parallel, multicollinearity can be confirmed; otherwise, the variables are not collinear.
Multicollinearity can be assessed with two tools: Variance Inflation Factors (e.g., vif() in the car package for R) and bootstrap confidence intervals. VIFs higher than 5 are problematic, and anything above 2.5 should trigger your curiosity. Models with high levels of multicollinearity also tend to generalize poorly, with inflated standard errors and correspondingly wide confidence intervals.
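As a minimal sketch of the first tool, assuming R with the car package installed and using the built-in mtcars data (the particular predictors chosen here are just an illustration):

library(car)                                    # provides vif()
fit <- lm(mpg ~ disp + hp + wt, data = mtcars)  # predictors that tend to be correlated
vif(fit)                                        # one VIF per predictor

By the rule of thumb above, any VIF over 5 flags that predictor as problematic, and anything over 2.5 is worth a closer look.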
In Stata, collinearity is routinely reported and the offending variable automatically dropped when running estimation commands; I assume this is the case in most statistics programs. A handy Stata command, _rmdcoll, can help you find the collinear terms. For example:
sysuse auto, clear          // load the example data set
generate tt = turn + trunk  // create a collinear term
regress price turn trunk tt // run the regression with the collinear terms
// the output will include: "note: turn omitted because of collinearity"
_rmdcoll turn trunk tt      // identify independent variables to be omitted because of collinearity
The simplest way to detect collinearity is just to look at the correlations among the independent variables; similarity of gradients is not enough. If you have three independent variables x, y and z and one dependent variable v, with y = 2x and z = 3x, then in the regression model v = ay + bz + e the variables y and z are perfectly collinear, even though their gradients a and b will generally be unequal.
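As a small sketch of that example in R (made-up data, arbitrary coefficients), the correlation matrix exposes the collinearity directly, and lm() cannot estimate both coefficients:

set.seed(1)                       # made-up data: y and z are exact multiples of x
x <- rnorm(100)
y <- 2 * x
z <- 3 * x
v <- 5 * y + 1 * z + rnorm(100)   # hypothetical dependent variable with noise

cor(cbind(y, z))                  # correlation of exactly 1 between y and z
coef(lm(v ~ y + z))               # one coefficient comes back NA: it is not identifiable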