1) The training set is the data the nnet "sees" and trains on. As training progresses, the mean squared error (MSE) falls, as shown in your graph. The MSE is the squared difference between the network's output and the target value, averaged over all data points.
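The MSE calculation above can be sketched in a few lines; `outputs` and `targets` here are hypothetical stand-ins for your network's predictions and the true values:

```python
import numpy as np

def mse(outputs, targets):
    # Squared difference at each data point, averaged over all points.
    return np.mean((np.asarray(outputs) - np.asarray(targets)) ** 2)

# One prediction off by 1, three data points -> MSE = 1/3.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))
```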
2) The validation set is data the nnet is not trained on, but for which it uses the input variables to estimate the target values (i.e. the variable you are trying to predict). As the nnet trains on the training set, the MSE on the validation data also falls; it is practically always larger than the MSE on the training set. The nnet is considered optimally trained when the MSE on the validation set is at its minimum (which occurs at epoch 15 in your graph). If you carry on training until the MSE on the training set is at its minimum, the nnet becomes overtrained: it is only useful for predicting results on the training set, not on other "unseen" data, i.e. the nnet will not be able to generalise. So your nnet is trained best at the point where the validation MSE is minimal.
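Stopping at the minimum validation MSE is usually implemented as "early stopping". Here is a minimal sketch, where the real network training step is replaced by a toy validation-MSE curve (an assumption for illustration; in practice you would train one epoch, then evaluate on the held-out validation set):

```python
def validation_mse(epoch):
    # Toy curve: falls to a minimum at epoch 15, then rises (overtraining).
    return (epoch - 15) ** 2 / 100.0 + 1.0

best_epoch, best_mse = None, float("inf")
patience, since_best = 5, 0           # stop after 5 epochs with no improvement
for epoch in range(1, 100):
    v = validation_mse(epoch)         # in practice: train one epoch, then evaluate
    if v < best_mse:
        best_mse, best_epoch, since_best = v, epoch, 0   # save the weights here
    else:
        since_best += 1
        if since_best >= patience:
            break                     # stop and restore the saved best weights

print(best_epoch, best_mse)           # minimum validation MSE found at epoch 15
```

The `patience` counter simply tolerates a few non-improving epochs before giving up, so a noisy validation curve does not stop training prematurely.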
3) The test set is also data the nnet has not been trained on. The network's estimates of the target are evaluated on it to confirm that the nnet actually works on data that has neither been used to train it nor been used to decide when it was best trained.
So the training set is used to train the nnet, the validation set is used to decide when the nnet is best trained, and the test set is used to check that the nnet will be able to predict the target output when presented with completely unseen data.
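The three-way split described above can be sketched as follows; the 70/15/15 ratio is an assumption here (it is a common default), and other splits work too:

```python
import random

def split(data, seed=0):
    # Shuffle indices so each subset is a random sample of the data.
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n_train = int(0.70 * len(data))
    n_val = int(0.15 * len(data))
    train = [data[i] for i in idx[:n_train]]
    val = [data[i] for i in idx[n_train:n_train + n_val]]
    test = [data[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```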
Your bottom set of four graphs shows that the nnet's predicted output correlates highly with the actual target values for all three data sets. This means your nnet has trained well and can generalise to new data sets it has not yet "seen", which is exactly what you want.