I am running a logit regression in which the number-of-labourers variable is correlated with the number of buffaloes and the average milk yield, while age is correlated with farming experience.
One way is to evaluate the results in light of these correlations. Another is to omit one of the correlated variables (for example, if age and experience are highly correlated, you can use just one of them). In some cases it is possible to combine two variables into one.
If they are correlated, they are correlated. That is a simple fact. You can't "remove" a correlation. That's like saying your data analytic plan will remove the relationship between sunrise and the lightening of the sky.
I think your problem is that you are using predictors that are highly correlated with one another. Is that correct? If so, then you have a couple of options.

One is to choose one variable from each highly correlated pair, for example, age OR experience. Choose based on which one is more logically connected to what you're trying to predict, or else go with the one that correlates most strongly with the outcome variable. Just be sure to explain, in your paper, how you chose it.

The other option is to create a new variable by combining them. If you go this route, I'd suggest converting the two variables onto a similar scale (maybe a z score), then summing them. This may give you a more valid measure of the underlying construct. For example, age and experience may both be related to judgment.
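A minimal sketch of that composite approach in Python with NumPy, using hypothetical age and experience values (the numbers are made up for illustration): standardize each variable to a z score, then sum.

```python
import numpy as np

# Hypothetical data: age (years) and farming experience (years)
age = np.array([34, 45, 52, 29, 61, 48])
experience = np.array([10, 20, 30, 5, 40, 25])

def zscore(x):
    """Standardize to mean 0, standard deviation 1."""
    return (x - x.mean()) / x.std()

# Sum the standardized variables into a single composite,
# e.g. an "age/experience composite" predictor
composite = zscore(age) + zscore(experience)
```

Because each z score has mean zero, the composite is centered at zero; it can then enter the logit model in place of the two collinear predictors.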
You can name it whatever you wish. Ideally, the name "makes sense" given the nature of the variables you are combining. But you can also just call it a composite of the two (or more) variables you're combining. For example, you could call one the "age/experience composite," or try to synthesize their meanings (as I did when I suggested that they both reflect more informed judgment).
If I understand your problem correctly, then perhaps you should calculate principal components, which are uncorrelated variables. A bit more on this topic: https://www.researchgate.net/publication/319469038_New_Interpretation_of_Principal_Components_Analysis.
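To illustrate the PCA suggestion, here is a minimal sketch with NumPy on simulated data (the two predictors are hypothetical stand-ins for a correlated pair such as labourers and buffaloes): projecting the centered data onto the eigenvectors of its covariance matrix yields component scores that are mutually uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated correlated predictors (hypothetical stand-ins)
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=200)
X = np.column_stack([x1, x2])

# Principal components via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Project onto the eigenvectors: the resulting scores are uncorrelated
scores = Xc @ eigvecs
corr = np.corrcoef(scores, rowvar=False)  # off-diagonal ~ 0
```

The component scores can then be used as predictors in the regression, at the cost of less direct interpretability of the coefficients.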
@Prof. Dr. Erkan Rehber, with due respect, it hasn't been asked again. Questions, once asked, remain open to be answered; it depends on the reader when to answer. Thank you very much for your answers, dear Prof.
To "remove correlation" between variables with respect to each other while maintaining their marginal distributions with respect to a third variable, randomly shuffle the vectors for fixed values of the third variable. This procedure is described at length in the following papers:
Delis I, Berret B, Pozzo T and Panzeri S (2013) A methodology for assessing the effect of correlations among muscle synergy activations on task-discriminating information. Front. Comput. Neurosci. 7:54. doi: 10.3389/fncom.2013.00054
Ince, R. A., Senatore, R., Arabzadeh, E., Montani, F., Diamond, M. E., and Panzeri, S. (2010). Information-theoretic methods for studying population codes. Neural Netw. 23, 713–727.
Panzeri, S., Montani, F., Magri, C., and Petersen, R. S. (2010). Population coding. Spring. Ser. Comput. Neurosci. 7, 303–319.
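A minimal sketch of that shuffling procedure in Python with NumPy on simulated data (all variable names are hypothetical): permuting one variable independently within each level of the third variable destroys its within-group correlation with the other variable while leaving its distribution at each level of the third variable unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: two correlated variables and a third grouping variable
group = np.repeat([0, 1, 2], 100)
x = rng.normal(loc=group, scale=1.0)            # x depends on group
y = x + rng.normal(scale=0.5, size=300)         # y correlated with x

# Shuffle y independently within each level of the grouping variable:
# this removes the x-y correlation at fixed group while preserving
# the distribution of y within each group
y_shuffled = y.copy()
for g in np.unique(group):
    idx = np.where(group == g)[0]
    y_shuffled[idx] = rng.permutation(y[idx])
```

Comparing the within-group correlation of x with y before and after the shuffle shows the effect: it drops from strongly positive to approximately zero, while the set of y values in each group is exactly preserved.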