I recently compared performance on a classification problem across several data sets using PLS-DA, LDA, SVMs, and boosted and bagged trees, and the results suggest that a NN is the top performer in only one case.
The No Free Lunch theorem suggests that no single method can perform best on all data sets. So it is natural that an ANN will perform better on some, an SVM on others, and LDA (or another method) on still others.
And practitioners say there is no way to know in advance which method will perform better on which kind of data set. Here's one particular link you might find interesting...
Part of this is semantics about what one considers a neural network, since SVMs, to name just one example, are sometimes considered neural networks. The real distinction is between associative-memory models and models that do not implement a learning strategy.
Regarding neural networks proper, it is surprising that, after so many years, it is still not widely recognized that all neural network models have finite capacity. The interesting issue is that, for a given training set, there exist associations that *cannot* be learned, and modifications to the learning strategy are then required: for instance the "tiling algorithm" of Mézard and Nadal for perceptrons, to name but one. Similar modifications can be made for attractor neural networks too, and have been known for about as long.
My experience says that the answer is to define a variance function in order to estimate the right parameters of the network. That is, in fact, the advantage of the SVM: an SVM is a perceptron working directly on the variance function given by the data. From this view, an ANN could be more powerful, given that such a variance function is approximated (rather than optimized, as in SVMs) by several perceptrons, allowing any desired variance function. So once we attain more precise descriptions of these approximation mechanisms, ANNs may have no disadvantages (or even competitors), because we could manipulate the classifier entirely, allowing different, controlled variance functions within the same learning machine. It would be great if someone else gave us an opinion on this.
I found that an ANN's performance depends strongly on the dataset: its level of noise and its richness. I could not say that ANNs are always top performers. Luiza
I agree with the previous comments: NNs are very good or good for some problems, bad for others, and very bad for some. It depends mainly on the generalization capabilities. If the datasets are not "in the same shape", the NN will perform poorly on new data.
I agree with Mr Mahmoud Omid. We cannot say that a certain classifier is the best in every situation. You have to test the classifier yourself; it's a trial-and-error process.
It depends on the data and its presentation. It is always possible to find a data collection on which a given method is the best. Remember that an ANN gives an output that changes on every run.
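The run-to-run variability mentioned above comes from random weight initialization and random shuffling of the training data. A minimal stdlib-only sketch (the toy perceptron and data below are illustrative assumptions, not anyone's actual model) shows that unseeded runs can learn different weights, while fixing the seed makes a run reproducible:

```python
import random

def train_perceptron(data, epochs=20, seed=None):
    """Train a tiny 2-input perceptron; random init makes results run-dependent."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]   # random initial weights
    b = rng.uniform(-1, 1)                       # random initial bias
    for _ in range(epochs):
        rng.shuffle(data)                        # random example order
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    return w, b

# Hypothetical linearly separable toy data: class depends on the first input
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

run1 = train_perceptron(list(data))              # unseeded: may differ each run
run2 = train_perceptron(list(data))
fixed1 = train_perceptron(list(data), seed=42)   # seeded: repeatable
fixed2 = train_perceptron(list(data), seed=42)
print(fixed1 == fixed2)  # True: identical seed gives identical weights
```

The same applies to larger networks: reporting an ANN result without fixing (or averaging over) the random seed makes comparisons against deterministic methods like LDA unreliable.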
I agree with Mr Partha Dey, as the problem partially determines the model you should use. However, if enough data is available, and the quality of that data, in terms of deviation and error, is good enough, my experience with ANNs is that they offer accurate and reliable estimations when the model is optimized correctly.
But, as has been said before, each tool has its own range of applicability, and sometimes ANNs cannot be applied.
To add a little spice to the above answers, I shall comment here.
The field of data mining research is DATA oriented: when you change the data, the methods also need to be tuned to it.
In other words, for different engineering problems the data is different, so the methods will also be different. Therefore, there is no ideal algorithm that will give the best results on all engineering problems.
I got 10th place (8th on the public leaderboard), and for each of the 7 datasets I entered nearly 170 models with 4 resampling methods into the competition, and there is no NN in my final baseline.
I also agree with Mr Partha Dey. There is no way to establish in advance which learning technique is best. The only way is to benchmark several techniques on your specific problem and choose the one that performs best.