They are called independent variables (IVs) because they explain or predict a dependent variable. If the IVs are strongly correlated among themselves, you need to transform the strongly correlated IVs or change how they are measured. If that does not work, proceed with caution, because multicollinearity violates the assumptions of OLS. You may refer to Gujarati (2004), Hair et al. (2014), and Pallant (2011).
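As a quick illustration (not from the cited texts), one common way to flag IVs that are strongly correlated with the rest is the variance inflation factor (VIF). A minimal sketch with statsmodels follows; the data and variable names are invented for the example.

```python
# Minimal sketch: flag strongly correlated IVs with variance inflation factors (VIFs).
# The data and the column names x1, x2, x3 are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # deliberately near-collinear with x1
x3 = rng.normal(size=200)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

exog = sm.add_constant(X)                   # VIFs are computed on the design matrix
for i, name in enumerate(exog.columns):
    if name == "const":
        continue
    print(name, variance_inflation_factor(exog.values, i))
```

A common rule of thumb is that a VIF above 10 (some texts say 5) signals collinearity worth transforming away or otherwise addressing.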
In regression, if independent variables are strongly correlated with each other, we should avoid using them in the same model. Independent variables are called "independent" because they are used to predict the value of the "dependent" variable. The term only makes sense in that context. REMEMBER the context. :)
An independent variable is a variable that stands alone and is not affected by the other variables we are trying to measure. In research, a researcher very often manipulates an independent variable to measure how it influences the targeted dependent variables. In regression analysis, however, the problem arises when one independent variable can be predicted from another independent variable, i.e., when the independent variables themselves are highly correlated.
An independent variable is defined as the variable that is changed or controlled in a scientific experiment. It represents the cause or reason for an outcome. In other words, independent variables are the variables the experimenter changes to test their effect on the dependent variable.
A regression model can be viewed as an input-output model, where the output (the dependent variable) is modeled as some functional form of the input variables, so you can also call the independent variables the input variables.
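To make the input-output view concrete, here is a minimal sketch with invented data and coefficients, fitting the output y as a linear function of two inputs via statsmodels OLS:

```python
# Minimal sketch of the input-output view: the output y is modeled as a
# function of the input (independent) variables x1, x2. Data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(scale=0.5, size=100)

X = sm.add_constant(np.column_stack([x1, x2]))  # inputs (plus intercept)
model = sm.OLS(y, X).fit()                      # output modeled as f(inputs)
print(model.params)                             # roughly recovers [2.0, 1.5, -0.7]
```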
The question is how independently these input variables can be varied with respect to one another. In designed experiments we set the input factor levels as desired, but with field data it is not always possible to set the input variables at precise, desired levels. Depending on the nature of the system generating the data, there will be relationships among the variables on which data have been collected, so the subset of variables one considers as the input set may have interrelationships among themselves. This situation, as Prof. Roy has already pointed out, is termed multicollinearity, and it results in large prediction variances at one or more points of the experimental space. There are several approaches for dealing with multicollinearity, such as biased regression methods, which purposefully trade a biased model, and hence biased estimates, for a well-conditioned design matrix and lower prediction variances.
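Ridge regression is the classic example of such a biased regression method: it shrinks the coefficient estimates toward zero, accepting a little bias in exchange for a better-conditioned problem and lower prediction variance. A minimal sketch with scikit-learn (data invented, penalty strength not tuned):

```python
# Minimal sketch: ridge regression as a biased estimator that tolerates a
# near-singular (ill-conditioned) design matrix. Data are invented and the
# penalty strength alpha=1.0 is arbitrary, not tuned.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)      # nearly collinear inputs
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("OLS coefficients:  ", ols.coef_)         # large, mutually offsetting values
print("Ridge coefficients:", ridge.coef_)       # shrunken, more stable
```

With near-collinear inputs, the OLS coefficients tend to explode in offsetting directions while the ridge coefficients stay stable, which is exactly the bias-for-variance trade described above.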