I have two feature sets that I evaluated with an SVM on a two-class problem. The first set (the "good" one) gives acceptable results, while the second is far behind. When I merge the two sets using early fusion (feature concatenation), the SVM result falls somewhere in between.

I use the RBF kernel; the optimal gamma is estimated by grid search on the test set. The optimal gamma values turned out to be different for each of the three feature sets.
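To make the setup concrete, here is a minimal sketch of this kind of experiment: two feature blocks describing the same samples (one informative, one mostly noise, both synthetic stand-ins generated here, not the questioner's data), early fusion by column concatenation, and a gamma grid search for each feature set. Note that this sketch tunes gamma with cross-validation via `GridSearchCV` rather than on the test set, to avoid leaking test information into the hyperparameter choice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the two feature sets: with shuffle=False the
# informative columns come first, so the first block is the "good" set
# and the second block is mostly noise (the "weak" set).
X, y = make_classification(
    n_samples=300, n_features=30, n_informative=10,
    n_redundant=0, shuffle=False, random_state=0,
)
X_good = X[:, :15]   # contains all informative columns
X_weak = X[:, 15:]   # noise-only columns

# Early fusion: concatenate the feature columns of both sets.
X_fused = np.hstack([X_good, X_weak])

# Grid search over gamma separately for each feature set, so each
# set gets its own optimal value (as observed in the question).
param_grid = {"gamma": np.logspace(-4, 1, 6)}
scores = {}
for name, X_set in [("good", X_good), ("weak", X_weak), ("fused", X_fused)]:
    gs = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    gs.fit(X_set, y)
    scores[name] = gs.best_score_
    print(name, "best gamma:", gs.best_params_["gamma"],
          "CV accuracy:", round(gs.best_score_, 3))
```

On synthetic data like this you would typically see the fused set score between the two individual sets, mirroring the behavior described above.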

I am curious: what kind of classifier can extract the maximum benefit from the more informative features? Is it possible for the combined set to reach the same accuracy as the "good" set alone?
