On my data set I get a training accuracy of 95%, a validation accuracy of 79%, and a test accuracy of 91%. Is this result valid? I also want to know whether, according to these results, my model is overfitting.
This can happen, and from a single experiment it is hard to tell how meaningful the results are. The difference between validation and test accuracy is relatively high, which might indicate that the data distribution differs between the two sets. There are some things you can try to check the validity of your results:
- Repeat the experiment n times (e.g. 30). In each experiment, make a fresh random split of the data into training, validation, and test sets, then report the distribution of the results on the test set. Maybe you were just lucky with the test set, or unlucky with the validation set. Repeating the experiment on different splits of the data should give you an intuition of what is happening.
- Try an approach such as k-fold cross-validation, then report the average and standard deviation. If you are also tuning hyperparameters of your network, you will need nested cross-validation. This is just a variation of the previous approach.
- Is the class distribution the same in all three sets? For example, if you have a binary classification problem and in the training set 60% of the examples are class 0 and 40% are class 1, then you should keep this ratio in the validation and test sets as well (i.e. use stratified splits).
- Finally, check that you are not accidentally leaking data from the training set into the test set.
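The first and third suggestions can be sketched together: repeat the experiment over many random, stratified splits and look at the spread of test accuracies. This is a minimal sketch using scikit-learn with synthetic stand-in data (`make_classification` and `LogisticRegression` are placeholders for your own data and model):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

test_scores = []
for seed in range(30):  # repeat the experiment 30 times
    # stratify=y keeps the class ratio equal across the splits
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.25, stratify=y_trainval,
        random_state=seed)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    test_scores.append(accuracy_score(y_test, model.predict(X_test)))

# The spread tells you how much a single lucky/unlucky split can move the number.
print(f"test accuracy: {np.mean(test_scores):.3f} +/- {np.std(test_scores):.3f}")
```

If the standard deviation is large relative to the 12-point gap you observed between validation and test accuracy, the gap may simply be split-to-split noise.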
These are just some ideas you can use to investigate why there is a big discrepancy between the validation and test sets.
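The nested cross-validation suggestion above can also be sketched briefly. The inner loop tunes a hyperparameter; the outer loop estimates generalization performance on data the tuning never saw. Again, the dataset and the `C` grid here are illustrative assumptions, not your setup:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Inner loop: 3-fold CV to pick C. Outer loop: 10-fold CV for the estimate.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=10)

print(f"nested CV accuracy: {outer_scores.mean():.3f} "
      f"+/- {outer_scores.std():.3f}")
```

Reporting mean and standard deviation this way is more robust than a single train/validation/test split.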
The large difference between validation and test accuracy might be due to the random split of the data into training, validation, and test sets. Use the standard 10-fold cross-validation method instead. Also check how many epochs were used to train the model: too many epochs can cause overfitting, while too few mean the model is not fully trained.
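To act on the epoch check suggested above, track training and validation accuracy after every epoch and compare the two curves. A widening gap suggests overfitting; both curves still rising suggests the model is under-trained. This sketch uses scikit-learn's `MLPClassifier` with `partial_fit` (roughly one optimizer pass per call) on assumed synthetic data; with Keras or PyTorch you would log the same two metrics in the training loop:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
classes = np.unique(y)

history = []  # (train accuracy, validation accuracy) per epoch
for epoch in range(50):
    clf.partial_fit(X_train, y_train, classes=classes)
    history.append((clf.score(X_train, y_train),
                    clf.score(X_val, y_val)))

train_acc, val_acc = history[-1]
print(f"final train acc {train_acc:.2f}, val acc {val_acc:.2f}")
# train_acc >> val_acc and the gap growing over epochs -> likely overfitting;
# both accuracies still improving at the last epoch -> likely under-trained.
```

Early stopping (halting when validation accuracy stops improving) is the usual remedy once the curves show overfitting.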