
I am learning the synthetic control method (SCM) because I am doing a comparative case study, and I have questions about the reliability of its optimization.

After ranking every control unit by its similarity to the treated unit, I select different top-N samples as the donor pool for the synthetic control. When I iterate the donor pool size from 2 to 280 (using the Synth package in R), the objective function (MSE) fluctuates considerably, and this instability makes the SCM result seem unreliable. After a placebo test I removed the outliers; the MSE became smaller and more stable, but the optimized weights on the control units and on the predictors still fluctuate and do not converge as the sample size increases.
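In case it helps to reproduce the instability, here is a minimal sketch of the loop I run, assuming X1/X0 are the usual Synth predictor matrices (treated unit and controls) and Z1/Z0 the pre-treatment outcome matrices, with the control columns already sorted by similarity to the treated unit (the object names are hypothetical):

```r
library(Synth)

## Refit the synthetic control for each top-N donor pool and record the
## pre-treatment fit (loss.v) to see how it moves with the pool size.
sizes     <- 2:280
loss_by_n <- sapply(sizes, function(n) {
  fit <- synth(X1 = X1, X0 = X0[, 1:n, drop = FALSE],
               Z1 = Z1, Z0 = Z0[, 1:n, drop = FALSE])
  as.numeric(fit$loss.v)        # pre-treatment MSPE for the top-n pool
})
plot(sizes, loss_by_n, type = "l",
     xlab = "donor pool size (top N)", ylab = "pre-treatment MSPE")
```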

My question is whether this non-convergence is caused by the initial values used in the optimization (the genetic algorithm), or by potential collinearity between the predictors?
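The two diagnostics I can think of are (a) checking how collinear the predictor matrix is and (b) checking whether the solution is sensitive to the optimizer settings. A rough sketch, assuming X0 is the predictors-by-controls matrix as above and that the installed Synth version exposes the optimxmethod and genoud arguments:

```r
## (a) Collinearity among predictors: correlations across the predictor
## rows of X0; a large condition number also signals near-collinearity.
round(cor(t(X0)), 2)
kappa(scale(t(X0)))

## (b) Sensitivity to the optimizer: compare the default gradient-based
## search with the genetic-algorithm (rgenoud) search; if the weights
## differ substantially, the instability is likely an optimization issue
## (multiple local optima) rather than a data issue.
fit_default <- synth(X1 = X1, X0 = X0, Z1 = Z1, Z0 = Z0,
                     optimxmethod = c("Nelder-Mead", "BFGS"))
fit_genoud  <- synth(X1 = X1, X0 = X0, Z1 = Z1, Z0 = Z0, genoud = TRUE)
max(abs(fit_default$solution.w - fit_genoud$solution.w))
```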

I have tried two approaches and would appreciate your suggestions (sketches follow the list):

(1) I applied another similarity measure, the Mahalanobis distance instead of the Euclidean distance, which accounts for the correlation between predictors.

(2) Considering a potential nonlinear relationship, I used a neural network, a support vector machine, and a gradient boosted regression tree (GBRT) to estimate the weights. The first two methods overfit, and only the GBRT works well. Could I use this result and move forward? I am not sure this is the right track.
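For (1), this is roughly how I compute the Mahalanobis ranking, a sketch assuming X1 and X0 as above; it uses base R's mahalanobis() with the predictor covariance estimated over the control units:

```r
## Mahalanobis distance from each control unit to the treated unit,
## using the covariance of the predictors across control units.
pred_ctrl <- t(X0)                                    # one row per control unit
S         <- cov(pred_ctrl)                           # predictor covariance
d_mahal   <- mahalanobis(pred_ctrl, center = as.numeric(X1), cov = S)
ranking   <- order(d_mahal)                           # most similar controls first
```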
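For (2), my GBRT step is along these lines. This is a rough sketch and not standard SCM: it assumes a data frame donor_outcomes_pre whose columns are the control units' pre-treatment outcome series, a vector y_treated of the treated unit's pre-treatment outcomes, and donor_outcomes_post for the post-treatment periods (all hypothetical names), using the gbm package:

```r
library(gbm)

## Learn a mapping from control outcomes to the treated unit's outcome
## on pre-treatment periods, then extrapolate the post-treatment
## counterfactual from the controls' post-treatment outcomes.
pre  <- data.frame(y = y_treated, donor_outcomes_pre)
fit  <- gbm(y ~ ., data = pre, distribution = "gaussian",
            n.trees = 2000, interaction.depth = 3,
            shrinkage = 0.01, cv.folds = 5)
best <- gbm.perf(fit, method = "cv")                  # CV-chosen tree count
y_counterfactual <- predict(fit, newdata = donor_outcomes_post, n.trees = best)
```

Note that this gives a predicted counterfactual series rather than explicit, non-negative unit weights, so it is not a drop-in replacement for the SCM weights.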
