This is a very good question. We must also ask what sort of task we are doing: is it a prediction task or a classification task? In a prediction task we need to forecast one output, which is a numerical value, so it may be a bit difficult to compare results in that case. In classification, however, we can have multiple classes, so it is easy to set a threshold to compare our outputs with the target class, but this has to be done between every two classes.
1- Try different activation functions in both the hidden and output layers (for example, you could use softmax or sigmoid in the hidden layer and a linear function in the output layer).
2- Try to have a stable data set for training your prediction model; this can simply be done with a cross-validation technique to reduce the variation (see the sketch after this list).
3- Reduce the noise in the data.
4- Tune your parameters, such as the number of neurons in the hidden layer, the learning rate, and the momentum.
5- Try different ways of updating the weight matrix of your neural network model; for instance, you could use swarm intelligence techniques to update the weights.
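To make points 1, 2 and 4 concrete, here is a minimal sketch using scikit-learn's MLPRegressor (whose output layer is already linear, as suggested in point 1); the placeholder data, parameter ranges and scoring choice are my own assumptions, not part of the original answer.

# Cross-validated tuning of activation, hidden neurons, learning rate and momentum
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

X = np.random.rand(200, 10)                        # placeholder features
y = X.sum(axis=1) + 0.1 * np.random.randn(200)     # placeholder numerical target

param_grid = {
    "activation": ["logistic", "tanh", "relu"],    # point 1: hidden-layer activation
    "hidden_layer_sizes": [(5,), (10,), (20,)],    # point 4: hidden neurons
    "learning_rate_init": [0.001, 0.01],           # point 4: learning rate
    "momentum": [0.5, 0.9],                        # point 4: momentum (used with SGD)
}

# point 2: 5-fold cross-validation reduces the variation of the performance estimate
search = GridSearchCV(MLPRegressor(solver="sgd", max_iter=1000, random_state=0),
                      param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, search.best_score_)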
Adding to Tarik A. Rashid's point 4: you can try pruning techniques, i.e. building an excessively large network (with a large number of neurons) and then pruning neurons; a small sketch is given below.
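A minimal NumPy sketch of magnitude-based neuron pruning for a single-hidden-layer network; the weight matrices, the importance measure and the pruning ratio are illustrative assumptions (in practice the network would be retrained or fine-tuned after pruning).

import numpy as np

# Hypothetical weights of a 1-hidden-layer network: W1 (inputs x hidden), W2 (hidden x outputs)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 20)), rng.normal(size=20)   # deliberately oversized: 20 hidden neurons
W2, b2 = rng.normal(size=(20, 1)), rng.normal(size=1)

# Importance of each hidden neuron: magnitude of its outgoing weights
importance = np.abs(W2).sum(axis=1)
keep = np.argsort(importance)[-10:]        # keep the 10 most important neurons

# Prune: drop the corresponding columns/rows from the weight matrices
W1_pruned, b1_pruned = W1[:, keep], b1[keep]
W2_pruned = W2[keep, :]
print(W1_pruned.shape, W2_pruned.shape)    # the network is now half the size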
Just bear in mind one of the main rules: garbage in, garbage out. Make sure the data is meaningful and preprocessed (cleaned up, scaled if needed, properly encoded); a small preprocessing sketch is given below.
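A minimal sketch of the "scaled if needed, properly encoded" advice using scikit-learn; the column names and the toy data are assumptions for illustration only.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

df = pd.DataFrame({"age": [21, 35, 47],
                   "income": [30e3, 55e3, 72e3],
                   "region": ["A", "B", "A"]})

preprocess = ColumnTransformer([
    ("scale", StandardScaler(), ["age", "income"]),   # scale numeric features
    ("encode", OneHotEncoder(), ["region"]),          # encode categorical features
])
X = preprocess.fit_transform(df)                      # cleaned, scaled, encoded input for the ANN
print(X)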
Unfortunately, in the ANN training step you have to do everything by trial and error. Using repeated loops, you must change the transfer function, the training algorithm and the number of neurons in the hidden layer to build different configurations and extract results. You may have to repeat a given model over 1000 times to obtain the results you want (a simple loop of this kind is sketched below). ANFIS and SVM are other choices that you can use.
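A rough sketch of such a trial-and-error loop; the configurations tried, the number of repeats and the data are illustrative assumptions, not a fixed recipe.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 10)        # placeholder data
y = X.sum(axis=1)

results = {}
for activation in ["logistic", "tanh", "relu"]:        # transfer function
    for n_hidden in [5, 10, 20, 40]:                   # neurons in the hidden layer
        scores = []
        for seed in range(5):                          # repeat each configuration several times
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=activation,
                               max_iter=2000, random_state=seed)
            scores.append(cross_val_score(net, X, y, cv=3).mean())
        results[(activation, n_hidden)] = np.mean(scores)

best = max(results, key=results.get)
print("best configuration:", best, "score:", results[best])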
In my opinion, to solve your problem, you could use additional control of the excitation threshold of the neurons in the network, depending on the input data (one possible interpretation is sketched below).
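One possible reading of this suggestion, as a minimal sketch: a hidden layer whose firing threshold is adapted from the statistics of the current input batch. The specific rule (threshold = mean + k * std) is my own illustrative assumption, not the answerer's method.

import numpy as np

def adaptive_threshold_activation(z, k=0.5):
    # z: pre-activations of one hidden layer for a batch, shape (batch, n_hidden)
    theta = z.mean(axis=0) + k * z.std(axis=0)   # input-dependent threshold per neuron
    return np.where(z > theta, z, 0.0)           # a neuron fires only above its adaptive threshold

z = np.random.randn(32, 8)                       # placeholder pre-activations
print(adaptive_threshold_activation(z).shape)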
In addition to the above answers, I think you did not reach the global minimum in the training process itself. Try variable step-size techniques and optimise the network size (see the sketch below).
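A minimal sketch of a variable (adaptive) step size using scikit-learn's SGD-based MLP; the data and parameter values are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(200, 10)        # placeholder data
y = X.sum(axis=1)

net = MLPRegressor(hidden_layer_sizes=(10,),
                   solver="sgd",
                   learning_rate="adaptive",     # step size is reduced when training stalls
                   learning_rate_init=0.05,
                   max_iter=5000,
                   random_state=0)
net.fit(X, y)
print(net.loss_)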
One approach I favour for ANN training is to train with two data sets, a training set (T) and a target/objective set (O), and to use accuracy against the target as the training trigger, something like this (a sketch of the loop follows the two steps):
01. Train on T for 1 iteration.
02. Test on O; if the deviation of the output O' from the intended value is greater than the deviation allowed for O, repeat step 01.
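A rough sketch of this train-until-target-accuracy loop; the data split, the tolerance and the maximum number of rounds are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = X.sum(axis=1)
X_T, y_T = X[:200], y[:200]        # training set T
X_O, y_O = X[200:], y[200:]        # target / objective set O

net = MLPRegressor(hidden_layer_sizes=(10,), random_state=0)
tolerance = 0.05
for round_ in range(1000):
    net.partial_fit(X_T, y_T)                           # 01. train on T for one iteration
    deviation = np.abs(net.predict(X_O) - y_O).mean()   # 02. test on O
    if deviation <= tolerance:                          # stop once within the allowed deviation
        break
print(round_ + 1, "rounds, final deviation:", deviation)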
Thank you for your interesting question, and thanks to the other researchers who participated in answering.
There are many ways to improve the performance of an ANN; they can be classified by the part of the workflow in which the measures are applied: the database used, the ANN algorithms, the ways to validate the model, etc.
The advice proposed before is very interesting. I may add another approach focused on the selection of the most important independent variables in the database used. This selection has to be done before starting the ANN design (a small sketch of such a selection is given below). An example of this was presented in the attached manuscript; nevertheless, in the next months we will publish a set of ways to make the aforementioned selection in ANN models.
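A minimal sketch of selecting the most important independent variables before the ANN design; the data, the scoring function and the number of variables kept are illustrative assumptions, not the method of the attached manuscript.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_regression

rng = np.random.default_rng(0)
X = rng.random((200, 15))                  # 15 candidate independent variables
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(200)

selector = SelectKBest(score_func=mutual_info_regression, k=5)
X_selected = selector.fit_transform(X, y)             # keep the 5 most informative variables
print("selected columns:", selector.get_support(indices=True))
# X_selected would then be used as the input of the ANN design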
I hope what I am writing and the attached paper help you. If you have any questions, please contact me.