If a classifier performs poorly (it is then called a weak classifier, and its accuracy is usually around 50% or a little above), then boosting methods may be a good choice for the classification task.
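A minimal sketch of the boosting idea with scikit-learn's AdaBoost, assuming your data is already in a feature matrix X and label vector y (placeholder names here; the make_classification call is just stand-in data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=62, n_features=10, random_state=0)  # stand-in data

# AdaBoost's default base estimator is a depth-1 decision tree ("stump"),
# i.e. a classic weak learner; boosting combines many of them.
boosted = AdaBoostClassifier(n_estimators=100, random_state=0)

scores = cross_val_score(boosted, X, y, cv=5)
print("Boosted accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```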
Consider also analyzing your features first. There may be redundant information in the feature set, so you may want to emphasize the powerful features and eliminate the weak ones. For this purpose, mRMR or similar feature selection and analysis tools can be used before conducting the classification.
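This is not mRMR itself, but a rough stand-in sketch: ranking features by mutual information with the class label using scikit-learn (X and y are placeholder names for your data). A real mRMR tool would additionally penalize redundancy between the selected features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=62, n_features=20, n_informative=5, random_state=0)

# Keep the 5 features most relevant to the class label (k is only illustrative).
selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)
print("Kept feature indices:", selector.get_support(indices=True))
```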
SVM is a strong alternative. It is a very powerful classifier, but parameter tuning becomes difficult when the dataset is huge.
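A minimal sketch of SVM parameter tuning via grid search, assuming data in X and y (placeholder names); the C/gamma grid below is purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=62, n_features=10, random_state=0)

# Scaling matters for RBF SVMs, so tune the whole pipeline.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1, 1]}

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best CV accuracy: %.3f" % search.best_score_)
```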
Hello, is this dataset unbalanced? If this is the case, you can use K-Fold cross-validation, maybe 5-Fold, using groups of 12, 12, 12, 12 and 14, so that no samples are lost, since your dataset is small. You can also use Leave-One-Out validation, since your database is small. In this case, you train on 61 samples and test on 1, iterating over the test samples until all 62 have been used; for your case this is the same as doing 62-Fold. Regards.
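A small sketch of both suggestions, assuming the 62 samples are in X and the labels in y (placeholder names): stratified 5-fold CV (fold sizes of roughly 12-13, close to the 12/12/12/12/14 split) and Leave-One-Out, which here is equivalent to 62-fold.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=62, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Stratified folds keep the class proportions similar in each fold,
# which helps if the dataset is unbalanced.
cv5 = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
loo = cross_val_score(clf, X, y, cv=LeaveOneOut())  # 62 train/test iterations

print("5-fold accuracy:        %.3f" % cv5.mean())
print("Leave-One-Out accuracy: %.3f" % loo.mean())
```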
I think Pablo Eduardo Espinoza Lara has the best answer here so far. K-Fold is typically your best bet, and if needed, Leave-One-Out validation. However, I would recommend using the simplest model you can; decision trees or logistic regression would be my suggestion. Finally, is there anything you can do to "cheat" with your data? Can you synthesize new samples or add some noise? What type of data is it? Those questions are going to be much more important than model choice or sampling strategy.
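A rough sketch of the "add some noise" idea for numeric features, assuming data in X and y (placeholder names) and that small Gaussian jitter is a sensible perturbation for this kind of data; whether that holds depends entirely on what your features represent.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=62, n_features=10, random_state=0)

rng = np.random.default_rng(0)
noise_scale = 0.05 * X.std(axis=0)                 # jitter relative to each feature's spread
X_noisy = X + rng.normal(0.0, noise_scale, X.shape)

X_aug = np.vstack([X, X_noisy])                    # 124 samples after augmentation
y_aug = np.concatenate([y, y])

clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```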
With just 62 samples, a leave-one-out strategy is advisable, so as to exploit the maximum number of samples to train/fit your model. If you can afford to keep more of your samples out, as could be the case in a space with very low dimensionality, you could perform a 5-fold cross-validation. In any case, as correctly suggested by Christian Randolph Lynn, keep your model as simple as you can: the number of parameters should always be smaller than the number of samples, in order to avoid overfitting.
When your dataset is very small, you can use the k-fold cross-validation technique. The value of k depends on your dataset; you can try different values of k. Some authors recommend starting with 10-fold.
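A small sketch of trying several values of k, assuming data in X and y (placeholder names); 10-fold is included as the commonly recommended starting point.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=62, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Compare how the accuracy estimate behaves for a few choices of k.
for k in (5, 10, 20):
    scores = cross_val_score(clf, X, y, cv=k)
    print("k=%2d: mean accuracy %.3f (+/- %.3f)" % (k, scores.mean(), scores.std()))
```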