It is well known that MLPs trained with gradient descent suffer from getting stuck in local minima and from a slow convergence rate. Many researchers have proposed modifications to the training parameters, such as the learning rate and momentum. More recently, researchers have started to apply bio-inspired algorithms such as ABC (artificial bee colony), ant colony optimization, and cuckoo search to optimize the weights. However, not much research has been done on the other part of the problem: improving the search direction of gradient descent using second-order methods such as conjugate gradient or Newton's method. My question is: are there any other second-order methods that can be used and that offer fast processing?
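For context, here is a minimal sketch (not anyone's published method) of what "replacing the search direction" means in practice: the MLP's weights are flattened into one vector and handed to SciPy's nonlinear conjugate gradient optimizer instead of being updated by plain gradient descent. The network size, the XOR data, and the MSE loss are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy XOR problem (assumed for illustration only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer: shapes of W1, b1, W2, b2 in the flat parameter vector.
n_in, n_hid, n_out = 2, 4, 1
sizes = [(n_in, n_hid), (1, n_hid), (n_hid, n_out), (1, n_out)]

def unpack(w):
    # Rebuild weight matrices and bias rows from the flat vector.
    parts, i = [], 0
    for r, c in sizes:
        parts.append(w[i:i + r * c].reshape(r, c))
        i += r * c
    return parts

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    return np.mean((out - y) ** 2)                 # mean squared error

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=sum(r * c for r, c in sizes))

# method='CG' is nonlinear conjugate gradient; without an explicit jac,
# SciPy approximates the gradient by finite differences. Swapping in
# 'BFGS' (quasi-Newton) or 'Newton-CG' (requires an analytic gradient)
# tries other curvature-aware search directions on the same problem.
res = minimize(loss, w0, method='CG')
print(res.fun, res.nit)
```

Quasi-Newton methods like BFGS and its limited-memory variant L-BFGS are the usual answer to "second order without the cost of the Hessian": they build a curvature estimate from successive gradients, which is why they often converge in far fewer iterations than plain gradient descent on small networks like this one.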
