What are the standard parameter values for commonly used classifiers such as support-vector machines, k-nearest neighbors, decision trees, and random forests?
Any classification method uses a set of features or parameters to characterize each object, and these features should be relevant to the task at hand. We consider here methods for supervised classification, meaning that a human expert has both determined the classes into which objects may be categorized and provided a set of sample objects with known classes. This set of known objects is called the training set because it is used by the classification program to learn how to classify objects. There are two phases to constructing a classifier. In the training phase, the training set is used to decide how the parameters ought to be weighted and combined in order to separate the various classes of objects. In the application phase, the weights determined in the training phase are applied to a set of objects with unknown classes in order to determine what their classes are likely to be.
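As a concrete illustration of the two phases, here is a minimal sketch assuming scikit-learn; the synthetic data and the choice of a decision tree are illustrative assumptions, not part of the description above.

```python
# Minimal sketch of the training and application phases, assuming scikit-learn.
# The data are synthetic placeholders standing in for a real labeled training set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Objects with known classes (the "training set" described above).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.25, random_state=0)

# Training phase: learn how the features should be weighted and combined.
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

# Application phase: assign likely classes to objects whose class is unknown.
predicted_classes = clf.predict(X_new)
print(predicted_classes[:10])
```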
If a problem has only a few (two or three) important parameters, classification is usually an easy problem. For example, with two parameters one can often simply make a scatter plot of the feature values and determine graphically how to divide the plane into homogeneous regions where the objects are of the same class. The classification problem becomes very hard, though, when there are many parameters to consider. Not only is the resulting high-dimensional space difficult to visualize, but there are so many different combinations of parameters that techniques based on exhaustive searches of the parameter space rapidly become computationally infeasible. Practical methods for classification always involve a heuristic approach intended to find a "good-enough" solution to the optimization problem.
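For the two-parameter case, such a scatter plot takes only a few lines; the sketch below assumes matplotlib and uses synthetic blob data purely as a stand-in for two real features.

```python
# Sketch of the two-parameter case: plot the feature values and colour by class.
# Synthetic data is used here; with real data you would plot your own two features.
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=150, centers=2, n_features=2, random_state=0)

plt.scatter(X[:, 0], X[:, 1], c=y, cmap="coolwarm", edgecolors="k")
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.title("Two-feature classification problem")
plt.show()
```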
It really depends on what you are trying to achieve. Every algorithm has its own default parameters, because the approach and methodology vary from one method to another, and so do the optimization procedures and their parameters.
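If by "standard values" you mean the defaults shipped with a particular implementation, the simplest approach is to ask the library itself. A minimal sketch, assuming scikit-learn (other libraries such as Weka or R packages ship different defaults):

```python
# Print the built-in default parameters of each classifier in scikit-learn.
# The defaults shown are specific to the installed scikit-learn version.
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

for clf in (SVC(), KNeighborsClassifier(), DecisionTreeClassifier(), RandomForestClassifier()):
    print(clf.__class__.__name__, clf.get_params())
```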
Similarly, different performance metrics are used to evaluate different machine learning algorithms (MLAs), but most of these metrics are generic and standard in nature.
This enables performance comparison of different MLAs within the machine learning community.
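For example, generic metrics such as accuracy or the F1 score can be computed in exactly the same way for any classifier, which is what makes side-by-side comparison possible. A minimal sketch, again assuming scikit-learn and using illustrative synthetic data:

```python
# Compare two different classifiers with the same generic metrics.
# Data and classifier choices are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (KNeighborsClassifier(), RandomForestClassifier(random_state=0)):
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(clf.__class__.__name__,
          "accuracy:", round(accuracy_score(y_test, y_pred), 3),
          "F1:", round(f1_score(y_test, y_pred), 3))
```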
Hope this answers your question; otherwise, kindly get back to me.