Are you asking about problems of multicollinearity, i.e. when two predictors are (nearly) perfectly correlated? In the case of perfect correlation, the determinant of the matrix is zero and you cannot invert it (it's like dividing by zero). Computationally, this can also be a problem with near-perfect collinearity.
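A minimal numpy sketch of what I mean (the data are made up, and the perfect collinearity is assumed by constructing x2 = 2 * x1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design matrix: intercept, x1, and x2 = 2 * x1 (perfectly collinear)
x1 = rng.normal(size=100)
X = np.column_stack([np.ones(100), x1, 2 * x1])

xtx = X.T @ X
print(np.linalg.det(xtx))   # (numerically) zero: the matrix is singular
# np.linalg.inv(xtx)        # raises LinAlgError or returns meaningless, huge values

# Near-perfect collinearity: inversion "works" but is numerically unstable
x2_near = 2 * x1 + rng.normal(scale=1e-8, size=100)
X_near = np.column_stack([np.ones(100), x1, x2_near])
print(np.linalg.cond(X_near.T @ X_near))  # enormous condition number
```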
Rainer Duesing, thanks for the quick reply. In my case, I have three input variables and one output variable. The input variables are independent of each other, and I can fit a forward regression model. Let's say I have A, B, C as my independent inputs and Y as my output. I can fit my forward model with multiple linear regression with an R^2 of 99%.
Now I want to go in the inverse direction: I want to predict B using A, C, and Y as input parameters. However, there are now interactions between A and Y and between C and Y, which were not present in my forward model.
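Here is a small sketch of the setup as I understand it (the simulated data and coefficients are purely illustrative, and statsmodels is just one way to write the two formulas):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: the true forward relationship and its coefficients are made up
A = rng.normal(size=n)
B = rng.normal(size=n)
C = rng.normal(size=n)
Y = 1.0 + 2.0 * A - 1.5 * B + 0.5 * C + rng.normal(scale=0.05, size=n)

df = pd.DataFrame({"A": A, "B": B, "C": C, "Y": Y})

# Forward model: Y as a function of the independent inputs
forward = smf.ols("Y ~ A + B + C", data=df).fit()
print(forward.rsquared)  # very high here, because the simulated noise is tiny

# "Inverse" model: predicting the former input B from A, C and Y,
# including the A:Y and C:Y interactions mentioned above
inverse = smf.ols("B ~ A + C + Y + A:Y + C:Y", data=df).fit()
print(inverse.summary())
```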
Can you show your models and explain the variables? I find it strange to have an R^2 of .99 (at least in the social sciences, psychology, and biology...).
Why would you want to predict a predictor from your dependent variable? I cannot think of a good reason why this would make sense, but again, this may depend on the research field.
How did you come up with the interaction? Don't you have any theory behind the association of the variables?
As you can see, I am quite confused; it would be very helpful to get more information.