I am using the LM algorithm to train an RBF neural network, but it does not seem to converge to an optimal solution (it works quite well for MLP networks). In LM training, how should we choose the initial values of the centers, weights, and spreads?
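One common way to initialize (not specific to LM) is to pick the centers from the training data itself, set the spread with the d_max / sqrt(2M) heuristic, and draw small random output weights. A minimal sketch, assuming 2-D toy inputs and 10 hidden units (all names and sizes here are my own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # toy training inputs (assumed)

def init_rbf(X, n_centers, rng):
    # Centers: a random subset of training points keeps them inside the data's range
    idx = rng.choice(len(X), size=n_centers, replace=False)
    centers = X[idx]
    # Spread: the classic heuristic sigma = d_max / sqrt(2 * M),
    # where d_max is the largest distance between chosen centers
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d_max / np.sqrt(2 * n_centers)
    # Output weights: small random values, as for an MLP output layer
    weights = rng.normal(scale=0.1, size=n_centers)
    return centers, sigma, weights

centers, sigma, weights = init_rbf(X, 10, rng)
```

Starting LM from data-driven centers like this usually matters more than the weight initialization, since centers far from the data give near-zero activations and a nearly flat error surface.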
I think a large number of trials is needed to find the optimum values of the design parameters. In my experience, the RBFNN is not a powerful method for forecasting targets in dynamic systems; I think the MLP outperforms the RBF approach.
If you are applying the Levenberg-Marquardt algorithm to radial basis function neural networks, the following reference may help you converge to a better solution. It incorporates an adaptive learning rate:
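I can't speak for that paper's exact scheme, but the core idea of LM with an adaptive damping (learning-rate) factor can be sketched on a toy single-Gaussian fit: decrease lambda after an accepted step, increase it after a rejected one. Everything below (data, parameter names, constants) is my own illustration, not the paper's method:

```python
import numpy as np

# Toy data: fit y = w * exp(-(x - c)^2 / (2 s^2)); true params are assumed
x = np.linspace(-2, 4, 50)
true = np.array([2.0, 1.0, 0.5])

def model(p, x):
    w, c, s = p
    return w * np.exp(-(x - c) ** 2 / (2 * s ** 2))

y = model(true, x)

def jacobian(p, eps=1e-6):
    # Forward-difference Jacobian of the model w.r.t. the parameters
    f0 = model(p, x)
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (model(dp, x) - f0) / eps
    return J

def lm_fit(p, n_iter=200, lam=1e-2):
    for _ in range(n_iter):
        r = model(p, x) - y
        J = jacobian(p)
        # LM step: solve (J^T J + lambda I) delta = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p_new = p + step
        if np.sum((model(p_new, x) - y) ** 2) < np.sum(r ** 2):
            p, lam = p_new, max(lam / 10, 1e-12)  # accept: trust the model more
        else:
            lam *= 10                             # reject: damp harder
    return p

p_fit = lm_fit(np.array([1.0, 0.0, 1.0]))
```

Large lambda makes LM behave like small-step gradient descent (robust far from the minimum); small lambda recovers fast Gauss-Newton steps near it.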
For training artificial neural networks, Levenberg-Marquardt (LM) is the best. I have tried it many times to develop ANNs with different structures, and the LM algorithm outperforms the other algorithms predefined in MATLAB. But I recommend you also try training ANNs with newer algorithms such as PSO, ant colony optimization, .....
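For anyone unfamiliar with PSO: it is gradient-free, so it sidesteps the Jacobian entirely and only needs the training error as a black box. A minimal sketch on a toy objective (the sphere function standing in for the network error; all constants and names are my own choices, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(p):
    # Toy objective (stands in for the ANN training error)
    return np.sum(p ** 2, axis=-1)

n, dim = 20, 3                         # particles, parameter dimension (assumed)
pos = rng.uniform(-5, 5, (n, dim))     # particle positions = candidate parameter vectors
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), loss(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    # Standard update: inertia + cognitive (own best) + social (swarm best) terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = loss(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

A common hybrid is to run PSO for a coarse global search and then refine the best particle with LM.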
It was a useful paper for reducing the memory needed during the computation of LM for RBF networks. But in fact my problem is that when I use LM, I think I get stuck in a local minimum most of the time. Is there a remedy for that?
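A standard remedy (not specific to any paper in this thread) is multi-restart: run the local optimizer from several different initial points and keep the best result. A minimal 1-D sketch, with plain gradient descent standing in for one LM run and a deliberately multi-modal toy loss of my own choosing:

```python
import numpy as np

def loss(c):
    # Toy non-convex objective with several local minima
    return np.sin(5 * c) + 0.5 * (c - 1.0) ** 2

def train_once(c, lr=0.01, steps=500):
    # One local-descent run (finite-difference gradient, stands in for one LM run)
    for _ in range(steps):
        g = (loss(c + 1e-5) - loss(c - 1e-5)) / 2e-5
        c -= lr * g
    return c

# Remedy: restart from several initial points spread over the range, keep the best
starts = np.linspace(-3, 3, 10)
results = [train_once(c0) for c0 in starts]
best = min(results, key=loss)
```

Each restart only finds the minimum of its own basin, but with enough spread-out starts one of them usually lands in the global basin. In practice, restarting LM with a fresh k-means center initialization each time plays the same role.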