In a binary classification problem, I used a GA with accuracy as the fitness function to obtain optimal weights for my features/attributes, and achieved an accuracy of 85%. Is that OK? Can I publish this?
It totally depends on your data set, application, validation method and related work. Assuming that you select a suitable method to evaluate your model (a proper test option) and your model performs better than others (or no other results are available), you may still need to consider other criteria such as FP, TP, precision, recall and so on. I will explain with an example. Suppose your intrusion detection system has a 10% false positive rate (false alarm rate). There might be millions of connections to classify as Attack or Normal; with that false alarm rate you could end up with hundreds of thousands of alarms, which would be terrible. Therefore, you should check against your application whether the accuracy and the other metrics mentioned above are sufficient or not.
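To make this concrete, here is a small sketch (with made-up counts chosen to mimic the intrusion-detection scenario above: one million normal connections and a 10% false-alarm rate) showing how accuracy, precision, recall and false-alarm rate are computed from confusion-matrix counts:

```python
# Hypothetical counts for illustration only.
def classification_metrics(tp, fp, tn, fn):
    """Return accuracy, precision, recall and false-alarm rate."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return accuracy, precision, recall, false_alarm_rate

# 1,000,000 normal connections, 10% false alarms -> 100,000 spurious alerts.
acc, prec, rec, far = classification_metrics(
    tp=9_000, fp=100_000, tn=900_000, fn=1_000)
print(f"accuracy={acc:.3f} precision={prec:.3f} "
      f"recall={rec:.3f} false_alarm_rate={far:.3f}")
```

Note that accuracy comes out at 0.900 while precision is only about 0.083: roughly 11 out of every 12 alarms are false, which is exactly the problem the example describes.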
The accuracy achieved is not the main point (clearly, higher accuracy is better); the most important issue is to compare your method against baselines and state-of-the-art results.
The classification accuracy depends on the threshold value you use to classify. Every method uses such a threshold, explicitly or implicitly, and changing it changes the true positives, false positives, etc. Accuracy also depends on the dataset: if you add new data to the existing dataset, the accuracy will change, and by adding data selectively you could make your method appear better than it is. If you want to compare your results with others', you should test your method on the same dataset and in the same environment they used.
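The threshold effect can be sketched as follows (the scores and labels are toy values invented for illustration; any real classifier that outputs scores behaves analogously):

```python
# Toy example: sweeping the decision threshold changes TP/FP counts.
def confusion_at_threshold(scores, labels, threshold):
    """Count true and false positives when predicting 1 for score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp, fp

scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for t in (0.3, 0.5, 0.7):
    tp, fp = confusion_at_threshold(scores, labels, t)
    print(f"threshold={t}: TP={tp} FP={fp}")
```

Lowering the threshold raises both TP and FP; raising it does the opposite, so the reported accuracy is partly a consequence of this choice.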
Most of the classification algorithms in the literature are evaluated on standard datasets. For each dataset there is a benchmark accuracy established by state-of-the-art methods. You have to consider the benchmark for each dataset and report your results against it.
The application is the main factor in determining whether the result is acceptable, and the training set is very important for improving the classification rate.
Accuracy (or error rate) is one of the most common metrics used in practice to evaluate the generalization ability of classifiers, as it is simple and cheap to compute. However, using accuracy as a benchmark measure has a number of limitations.
In brief:
- accuracy can lead to sub-optimal solutions, especially when dealing with an imbalanced class distribution;
- accuracy has poor discriminating power for telling better solutions apart when building an optimized classifier;
- accuracy is weak in terms of informativeness and is biased against minority-class instances.
Instead of accuracy, the F-measure and the geometric mean (GM) have been reported as good discriminators that outperform accuracy for optimizing classifiers, especially on binary classification problems.
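A short sketch of both metrics on a hypothetical imbalanced toy set (95 negatives, 5 positives, counts invented for illustration) shows how accuracy can look fine while F1 and GM expose a weak classifier:

```python
import math

def f1_and_gmean(tp, fp, tn, fn):
    """F-measure and geometric mean from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0          # sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    gmean = math.sqrt(recall * specificity)
    return f1, gmean

# 95 negatives, 5 positives; the classifier catches only 1 positive.
tp, fp, tn, fn = 1, 1, 94, 4
accuracy = (tp + tn) / (tp + fp + tn + fn)   # 0.95 -- looks good
f1, gm = f1_and_gmean(tp, fp, tn, fn)
print(f"accuracy={accuracy:.2f} F1={f1:.2f} G-mean={gm:.2f}")
```

Here accuracy is 0.95, yet F1 is about 0.29 and GM about 0.44, because both metrics penalize the near-total failure on the minority class that accuracy hides.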