Each time, the system runs a different number of epochs (i.e. training iterations) to reach a desirable result, which at times introduces more error. I had tried this with a neural network.
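One common way to keep the epoch count from drifting and adding error is to stop training once the validation error stops improving (early stopping). The sketch below is only a minimal illustration; `train_one_epoch`, `validation_loss`, and the `data` object are hypothetical placeholders for your own training code:

```python
# Minimal early-stopping loop (illustrative sketch; train_one_epoch,
# validation_loss, and data are hypothetical placeholders).
def train_with_early_stopping(model, data, max_epochs=200, patience=10):
    best_loss = float("inf")
    best_weights = None
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model, data.train)       # update weights on the training set
        loss = validation_loss(model, data.val)  # measure error on held-out data

        if loss < best_loss:                     # validation error improved
            best_loss = loss
            best_weights = model.get_weights()
            epochs_without_improvement = 0
        else:                                    # no improvement this epoch
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                            # stop instead of running more epochs

    model.set_weights(best_weights)              # keep the best model seen so far
    return model
```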
Sample datasets are becoming increasingly large, which creates a bottleneck in terms of the hardware capabilities of the system. Moreover, the training time for the algorithm becomes very long. As a result, the efficiency of the system decreases significantly.
As the number of samples used to train the neural network increases, the efficiency of the system decreases, because the weight and bias matrices of the NN are initialized randomly. You can mitigate these problems by using a better-designed algorithm for initializing the weights and biases.
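A standard choice here is scaled random initialization such as Xavier/Glorot or He initialization, which keeps the variance of activations roughly constant across layers instead of using unscaled random weights. The NumPy sketch below simply illustrates those formulas; it is not tied to any particular framework, and the layer sizes in the usage line are made up for the example:

```python
import numpy as np

def xavier_init(n_in, n_out, rng=None):
    """Glorot/Xavier initialization: weight variance ~ 2 / (n_in + n_out)."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (n_in + n_out))
    W = rng.uniform(-limit, limit, size=(n_in, n_out))  # weights in [-limit, limit]
    b = np.zeros(n_out)                                  # biases usually start at zero
    return W, b

def he_init(n_in, n_out, rng=None):
    """He initialization, suited to ReLU layers: weight variance ~ 2 / n_in."""
    rng = rng or np.random.default_rng()
    W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
    b = np.zeros(n_out)
    return W, b

# Example: initialize a 784 -> 128 layer for a ReLU network.
W1, b1 = he_init(784, 128)
```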