In one of my publications, I mentioned that the learning rate controls how strongly the error is reduced after each epoch. It determines how large a portion of the computed correction is applied to each weight update. If the learning rate is set to a large value, the network may learn more quickly, but if there is high variability in the training set, it may learn poorly or not at all; in practice, an overly large learning rate is counterproductive. Typically, it is better to start with a small value and increase it gradually if learning appears too slow.

Momentum acts as a low-pass filter that smooths sudden changes in the weight updates. It allows a weight change to persist over a number of adjustment cycles, and the degree of persistence is controlled by the momentum factor. With a nonzero momentum factor, previous adjustments contribute increasingly to the current update. This can speed up learning in some situations by smoothing out atypical patterns in the training set.

My question is: what are the newer methods for adjusting these parameters to reach an optimal solution?
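To make the roles of the two factors concrete, here is a minimal sketch of a classical SGD-with-momentum weight update in Python. The function and variable names (sgd_momentum_step, velocity) are my own for illustration, not taken from any particular library or from the publication mentioned above.

```python
import numpy as np

def sgd_momentum_step(weights, gradient, velocity, learning_rate=0.01, momentum=0.9):
    """One weight update with classical momentum.

    learning_rate scales how much of the current gradient is applied;
    momentum controls how much of the previous update persists.
    """
    # New velocity: a fraction of the previous update plus the current scaled gradient.
    velocity = momentum * velocity - learning_rate * gradient
    # Apply the accumulated update to the weights.
    weights = weights + velocity
    return weights, velocity

# Hypothetical usage: a single parameter vector and a fixed gradient.
w = np.zeros(3)
v = np.zeros(3)
g = np.array([0.5, -0.2, 0.1])
for _ in range(10):
    w, v = sgd_momentum_step(w, g, v)
```

With momentum set to zero this reduces to plain gradient descent; raising it toward one lets earlier updates persist longer, which is the smoothing effect described above.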
