Randomized learning of feedforward NNs was proposed as an alternative to gradient-based learning, which is known to be time-consuming, sensitive to the initial parameter values, and prone to getting stuck in local minima of the loss function. In randomized learning, the parameters of the hidden nodes are selected randomly and stay fixed; only the output weights are learned. This makes the optimization problem convex and allows it to be solved without tedious gradient-descent backpropagation, using a standard linear least-squares method. This leads to very fast training. The main problem in randomized learning is how to select the random parameters so as to ensure high performance of the NN.

Your opinions, experiences, and solutions?
