Usually you split the data set into a training set and a test set. After training your model on the training set, you check the accuracy of its predictions on the test set. If the test set yields a noticeably larger error than the training set, something in the training process may be wrong; for example, you may have overfitted the training data.
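For a concrete picture, here is a minimal sketch of that split, assuming scikit-learn; the dataset (load_breast_cancer) and the SVM classifier are only placeholders for your own data and model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder dataset; substitute your own features X and labels y.
X, y = load_breast_cancer(return_X_y=True)

# Hold out 25% of the data as a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = SVC(kernel="rbf", C=1.0)
model.fit(X_train, y_train)

# A large gap between these two scores suggests overfitting.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```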
A well-known technique for assessing the accuracy of a prediction model is cross-validation. You can find an introduction to cross-validation in "An Introduction to Statistical Learning with Applications in R" by G. James, D. Witten, T. Hastie and R. Tibshirani, which is freely available here: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf
It is largely a matter of experience. Two common difficulties are training data that need careful clean-up and training data in which 1% or more of the examples are mislabeled. In those situations, experience gained from working on much easier problems can help you judge whether the data need more clean-up.
Firstly, you need to be sure that the methods you have selected are good enough for building your target model. Besides the parameters you need to select and tune, the data and the training and testing methodology are also very important aspects. To help with that, I recommend going through the "Cross-validation" chapter of the scikit-learn documentation here: http://scikit-learn.org/stable/modules/cross_validation.html . It does not directly address SVMs or ANNs, but I hope it helps anyway. Good luck!
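As a small taste of what that chapter covers, here is a hedged sketch of 5-fold cross-validation using scikit-learn's cross_val_score; the dataset and the SVM settings are placeholders, not recommendations:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder dataset; substitute your own features X and labels y.
X, y = load_breast_cancer(return_X_y=True)

# 5-fold cross-validation: the model is trained and scored 5 times,
# each time holding out a different fifth of the data for evaluation.
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean())
```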
As others above have mentioned, you might want to try cross-validation: split the training data into training and validation sets so that you can try out a range of values for the parameters of your learning algorithm and check the accuracy of the learned model on the validation set. For example, when tuning the C parameter of an SVM, people typically try values from the set (10^-4, 10^-3, 10^-2, 10^-1, 1, 10, 100, 1000, 10000). Training the SVM with each of these values and testing on the validation set gives you a much better idea of which value works best, and you can then retrain with that value on the entire training set. In practice this kind of approach generalizes reasonably well and gives better prediction accuracy on unseen data.
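A minimal sketch of that procedure, assuming scikit-learn; the dataset is a placeholder and the grid of C values is the one mentioned above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # placeholder training data

# Carve a validation set out of the training data.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)

c_grid = [1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100, 1000, 10000]

# Train one SVM per candidate C and score it on the validation set.
val_scores = {}
for c in c_grid:
    model = SVC(kernel="rbf", C=c)
    model.fit(X_train, y_train)
    val_scores[c] = model.score(X_val, y_val)

best_c = max(val_scores, key=val_scores.get)
print("validation scores:", val_scores)
print("best C:", best_c)

# Retrain with the best C on the entire original training data.
final_model = SVC(kernel="rbf", C=best_c).fit(X, y)
```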
Now, having mentioned cross-validation, I would like to mention another awesome technique for hyperparameter tuning, called Bayesian Optimization. As an active researcher in this area, I feel very excited to talk about this approach. The basic idea is to treat hyperparameter tuning as an instance of global optimization: the input space is a vector of values corresponding to the parameters you want to tune, and the objective function you are trying to optimize is a black-box function that, given those parameter values, runs the learning algorithm to obtain a model and returns its accuracy on the validation set. The goal is to find the input that gives optimal accuracy from this function. What the BO approach does is first place a prior belief on the underlying function you are trying to optimize (for example a Gaussian process or a Mondrian forest), then trade off exploration against exploitation to decide which input combination should be evaluated next, and finally use the evaluation results to update the prior belief about the function. This is basically an intelligent search over the space of parameter values under consideration.
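Purely as an illustrative sketch (not the codebase referred to at the end of this answer), here is roughly what tuning an SVM's C with BO could look like using the scikit-optimize library's gp_minimize, which places a Gaussian-process prior on the black-box objective; the dataset and search range are assumptions:

```python
from skopt import gp_minimize          # pip install scikit-optimize
from skopt.space import Real
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # placeholder data
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)

def objective(params):
    """Black-box function: train an SVM with the given C and return the
    negated validation accuracy, since gp_minimize minimizes."""
    c, = params
    model = SVC(kernel="rbf", C=c).fit(X_train, y_train)
    return -model.score(X_val, y_val)

# Gaussian-process-based Bayesian optimization over C in [1e-4, 1e4].
result = gp_minimize(
    objective,
    dimensions=[Real(1e-4, 1e4, prior="log-uniform", name="C")],
    n_calls=25,
    random_state=0,
)
print("best C:", result.x[0], "best validation accuracy:", -result.fun)
```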
For a very nice introduction you can check out the following paper:
BO approaches have been shown to do much better than many other existing approaches to hyperparameter tuning. Anyone interested in finding the optimal values for their learning algorithm's parameters should definitely check out the BO approach.
The following is a very easy-to-use codebase for performing simple BO in Python:
Why do we split the data into training, validation and test sets?
So that we can verify that we trained our model correctly:
1/ train the model on the train-data and use the val-data to tune it.
2/ test and evaluate the model on test-data.
For example, calculate and check the error: if the error is large, you should change something, such as adding features to your data or adjusting the parameters of your model (optimizer, learning rate, number of layers or nodes, etc.). If the error is small, the model is good. A small sketch of this workflow follows below.
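Here is the sketch mentioned above, assuming scikit-learn; the placeholder dataset, the small MLP and its settings are only illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # placeholder data

# First split off the test-data, then split the rest into train-data and val-data.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# 1/ train on the train-data and check the error on the val-data.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=1e-3,
                  max_iter=500, random_state=0))
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
# ...if the validation error is large, adjust features, layers, learning rate, etc...

# 2/ test and evaluate the final model once on the test-data.
print("test accuracy:", model.score(X_test, y_test))
```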