SVM is not very efficient for multiclass problems from a computational perspective because you essentially train a set of binary classifiers. KNN and ANN do not have such computational issues.
That said, the accuracy of SVM may be higher for your particular data. Which method gives the best accuracy depends entirely on the data (e.g. which method happens to fit the underlying structure best).
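To make the computational point concrete, here is a minimal sketch (assuming scikit-learn is available; the thread does not name a library): a multiclass SVM is decomposed into many binary classifiers, while KNN handles all classes with a single model.

```python
# Sketch: multiclass SVM = many binary SVMs; KNN needs no decomposition.
from sklearn.datasets import load_digits
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 10-class example dataset

# One-vs-rest trains one binary SVM per class;
# one-vs-one trains n_classes * (n_classes - 1) / 2 of them.
ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(len(ovr.estimators_))   # 10 binary classifiers

ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)
print(len(ovo.estimators_))   # 45 binary classifiers

# KNN covers all classes with a single model, no decomposition needed.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.score(X, y))
```

With 100+ classes, one-vs-one already means training several thousand binary SVMs, which is where the efficiency concern comes from.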
Your results make sense to me. It is quite possible for an RBF-SVM to outperform an ANN and KNN in a given experiment, and I agree with Marc that the results are usually data-dependent.
As for efficiency, an ANN can be slower than an RBF-SVM, depending on the network's architecture. 30 or 55 classes is not very many, but if you have more than 100 classes, then yes, an SVM can be very slow, especially when you also have a large number of training samples.
I would like to add some points to the answers above.
First, as has been said, these issues are data-dependent. We researchers usually have no choice but to work with the data as given, so there is no need to worry about that part. Remember the no free lunch theorem: no single algorithm is best on every problem.
Second, you can use a multiclass SVM and pair it with a hyperparameter optimization technique. That will make it perform much better.
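The point above about combining a multiclass SVM with optimization can be sketched as a small grid search over the RBF kernel's C and gamma (using scikit-learn's GridSearchCV here as an assumed tool; the grid values are illustrative):

```python
# Sketch: tuning an RBF-SVM's hyperparameters on a multiclass problem.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Illustrative grid; in practice the ranges depend on your data scaling.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 1e-3, 1e-4]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X_tr, y_tr)

print(search.best_params_)        # tuned C and gamma
print(search.score(X_te, y_te))   # accuracy on the held-out split
```

Tuned parameters typically give a clear improvement over defaults, though the gain is, again, data-dependent.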
Third, for a multiclass problem like this, apart from ANN, RBF-SVM, and KNN, there are often stronger options in the form of ensembles. You can try random subspace methods; they may give better results.
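A random subspace ensemble, as suggested above, can be sketched with scikit-learn's BaggingClassifier (an assumption on my part): setting max_features below 1.0 and disabling bootstrapping means every base tree sees all samples but only a random subset of features.

```python
# Sketch: random subspace method = bagging over feature subsets.
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

# bootstrap=False + max_features=0.5 -> classic random subspaces:
# each of the 50 trees is trained on a random half of the features.
subspace = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_features=0.5,
    bootstrap=False,
    random_state=0,
)
scores = cross_val_score(subspace, X, y, cv=3)
print(scores.mean())   # cross-validated accuracy
```

Trees are a natural base learner here because they handle many classes directly, with no binary decomposition.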
As the others have already said, there is no "free lunch" and an algorithm's performance depends on the data. Given the high number of classes, I also suggest considering an approach in which you divide the classes into groups (according, e.g., to some criterion that you can infer from your data), build a classifier for each partition, and then combine the outputs of those classifiers. If this strategy sounds interesting to you, I can provide more information. Best regards.
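The two-stage grouping idea above can be sketched as follows. Note this is purely illustrative: the group assignment here (low digits vs. high digits) is a made-up criterion, not one inferred from real data structure as the answer recommends.

```python
# Sketch: predict a coarse group first, then classify within that group.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
group = (y >= 5).astype(int)   # hypothetical partition: classes 0-4 vs 5-9

# Stage 1: a "router" picks the group; stage 2: one expert per group.
router = SVC(kernel="rbf").fit(X, group)
experts = {g: SVC(kernel="rbf").fit(X[group == g], y[group == g])
           for g in (0, 1)}

def predict(x):
    g = router.predict(x)[0]
    return experts[g].predict(x)[0]

preds = np.array([predict(X[i:i + 1]) for i in range(100)])
print((preds == y[:100]).mean())
```

Each expert now solves a smaller multiclass problem, which is exactly what makes this attractive when the class count is high.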
In my view, two kinds of ensemble methods should be preferred in the presence of many classes: random/rotation forests and ECOC (error-correcting output codes). The former uses decision trees, which are naturally suited to multiclass classification. As for the latter, I would personally consider ECOC the most relevant ensemble method when the number of classes goes beyond, say, one hundred. With ECOC, specific care should be taken in adopting or devising the algorithm used to select the binary classifiers. Feature selection is also a relevant issue for ECOC base classifiers in most cases.
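A minimal ECOC sketch, assuming scikit-learn's OutputCodeClassifier (which implements a randomized code matrix; hand-designed codes, as the answer suggests, would need more care):

```python
# Sketch: error-correcting output codes over binary RBF-SVMs.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# code_size controls the number of binary problems:
# int(n_classes * code_size) classifiers are trained.
ecoc = OutputCodeClassifier(SVC(kernel="rbf"), code_size=1.5, random_state=0)
ecoc.fit(X_tr, y_tr)

print(len(ecoc.estimators_))     # 15 binary classifiers for 10 classes
print(ecoc.score(X_te, y_te))    # held-out accuracy
```

Because the number of binary classifiers grows with code_size rather than with the square of the class count, ECOC scales more gracefully than one-vs-one when classes number in the hundreds.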