Is the above method right: splitting the dataset into training and testing sets, applying k-fold cross-validation on the training set, and then using the trained model to classify the test set?
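For illustration only, here is a minimal sketch of that workflow (hold-out test set, k-fold cross-validation on the training portion, one final evaluation on the held-out test set), assuming Python with scikit-learn rather than whatever tool is actually being used; the dataset and classifier are arbitrary placeholders:

```python
# Sketch of the workflow asked about above (assumed scikit-learn; illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 1. Hold out a test set that is never touched during model selection.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# 2. k-fold cross-validation on the training portion only (k = 10 here).
clf = DecisionTreeClassifier(random_state=42)
cv_scores = cross_val_score(clf, X_train, y_train, cv=10)
print("Mean CV accuracy on training data:", cv_scores.mean())

# 3. Refit on the full training set and classify the held-out test set once.
clf.fit(X_train, y_train)
print("Accuracy on held-out test set:", clf.score(X_test, y_test))
```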
In k-fold cross-validation, the entire dataset is randomly partitioned into k equal-sized partitions. Of the k partitions, a single partition is retained as the validation data for testing the model, and the remaining k − 1 partitions are used as training data. The cross-validation process is then repeated k times, with each of the k partitions used exactly once as the validation data. The k results can then be averaged to produce a single estimate. The advantage of this method is that all observations are used for both training and validation, and each observation is used for validation exactly once.
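As a concrete illustration of that definition, here is a short sketch of the k-fold loop itself, assuming Python/scikit-learn's KFold (the classifier and the choice of k are arbitrary, not from the text):

```python
# Each of the k partitions serves as validation data exactly once;
# the k per-fold scores are then averaged into a single estimate.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
k = 5
kf = KFold(n_splits=k, shuffle=True, random_state=0)  # random equal-sized partitions

scores = []
for train_idx, val_idx in kf.split(X):
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])                    # train on the k-1 partitions
    scores.append(model.score(X[val_idx], y[val_idx]))       # validate on the held-out partition

print("Per-fold accuracies:", scores)
print("Averaged estimate:", np.mean(scores))
```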
Ramya Jothiraj, however, I know there are four possible test options for classification: "use training set", "supplied test set", "cross-validation", and "percentage split". Maybe by adapting the algorithm, or by creating my own model with my own algorithm, it is possible to split the dataset; I have to check that.