I want to use Random Forest classification to classify RISAT-1 (SAR) data, but I do not understand why the RF classification algorithm would be better than SVM or ANN.
There is no way of saying that RF "is better than" SVM or ANN, because it depends on the dataset at hand and on many other factors.
Generally speaking, RF can be simpler to tune than SVM or ANN. It can be faster than SVM (depending on the amount of data and on the quality of the SVM solver). It works nicely with categorical inputs (for SVM and ANN, you need to convert them into numerical form). It is an ensemble method, so it may work better in some situations [1] (although this is not always true, e.g. [2]). SVM can work very easily with "strange" distance measures (e.g. kernels on images, etc.). ANN can be made deep and can work directly with a huge number of features [3]. These are the main points, I believe.
[1] Breiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.
[2] Statnikov, A., Wang, L., & Aliferis, C. F. (2008). A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC bioinformatics, 9(1), 319.
[3] Arel, I., Rose, D. C., & Karnowski, T. P. (2010). Deep machine learning-a new frontier in artificial intelligence research [research frontier]. Computational Intelligence Magazine, IEEE, 5(4), 13-18.
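To make the tuning and preprocessing differences above concrete, here is a minimal sketch with scikit-learn (an assumption; the question does not name a library). The synthetic features are only a stand-in for SAR-derived features; real RISAT-1 preprocessing is out of scope.

```python
# Minimal sketch: RF needs few hyperparameters and no feature scaling,
# while SVM usually wants scaled inputs and a tuned kernel/C/gamma.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Synthetic multi-class data standing in for per-pixel SAR features.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RF: works on the raw features, default settings are often reasonable.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SVM: inputs should be standardized, and kernel/C/gamma typically need tuning.
scaler = StandardScaler().fit(X_train)
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(scaler.transform(X_train), y_train)

print("RF accuracy: ", accuracy_score(y_test, rf.predict(X_test)))
print("SVM accuracy:", accuracy_score(y_test, svm.predict(scaler.transform(X_test))))
```

Which one wins on accuracy here is incidental; the point is the difference in the amount of preprocessing and tuning each method asks for.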
As others have stated, the performance of machine-learning methods is data dependent. Some features of RF, such as averaging over trees and the randomization used in growing each tree, enable it to approximate rich classes of functions while maintaining low generalization error.
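The two randomization sources mentioned above can be made explicit by hand-rolling a small forest (a sketch with scikit-learn trees; the dataset and all parameters are illustrative, not tied to any particular problem):

```python
# Sketch: bootstrap resampling of rows + random feature subsets at each split,
# then averaging the trees' votes -- the two randomizations behind RF.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
trees, single_accs = [], []
for i in range(100):
    idx = rng.integers(0, len(X_tr), len(X_tr))      # bootstrap sample of rows
    t = DecisionTreeClassifier(max_features="sqrt",  # random feature subset per split
                               random_state=i).fit(X_tr[idx], y_tr[idx])
    trees.append(t)
    single_accs.append(accuracy_score(y_te, t.predict(X_te)))

# Averaging the trees' votes gives the ensemble prediction.
votes = np.mean([t.predict(X_te) for t in trees], axis=0)
ens_acc = accuracy_score(y_te, (votes > 0.5).astype(int))
print(f"mean single-tree acc: {np.mean(single_accs):.3f}, ensemble acc: {ens_acc:.3f}")
```

The individual randomized trees are each somewhat weak; the averaged vote is typically more accurate than the average single tree, which is the variance-reduction effect at work.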
Just to add some information: decision trees partition the training set into subsets until the subsets are small or class-uniform. Such a local approach to learning can be very effective when the data are characterized by multiple clusters dispersed over the feature space (there is, of course, a risk of over-partitioning leading to over-fitting). Random Forests combine multiple decision trees, thereby reducing the variance that comes with single complex trees. In short, RFs exhibit multiple properties that are very effective during learning.
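The cluster-dispersed case above can be sketched as follows (a scikit-learn illustration under assumed synthetic data: each class is spread over several blobs, so the decision boundary is highly local):

```python
# Sketch: single unpruned tree vs averaged forest on data where each class
# is dispersed over multiple clusters in feature space.
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Six blobs, mapped to two classes: each class occupies three clusters.
X, y = make_blobs(n_samples=600, centers=6, cluster_std=3.0, random_state=0)
y = y % 2

# A single unpruned tree can carve out the clusters but has high variance;
# the forest averages many such trees.
tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
forest_acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                             X, y, cv=5).mean()
print(f"single tree: {tree_acc:.3f}  forest: {forest_acc:.3f}")
```

On data like this the forest usually scores at least as well as the single tree in cross-validation, reflecting the variance reduction described above, though the exact numbers depend on the random seed and cluster overlap.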