Dear Researchers,

I'm just wondering why we should ALWAYS use a learning rate for a neural network within the range [0, 1]. What is the benefit of restricting the learning rate to that domain?

Can we extend the domain to allow a bigger rate, for example [0, 4]?
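To make the question concrete, here is a minimal sketch (my own illustration, not from any particular textbook) of plain gradient descent on the one-dimensional loss f(w) = w². The gradient is 2w, so the update is w ← (1 − 2·lr)·w, which shrinks toward 0 only when |1 − 2·lr| < 1; a rate like 4 makes the iterates blow up on this problem:

```python
def gradient_descent(lr, steps=10, w=1.0):
    """Run `steps` iterations of gradient descent on f(w) = w**2.

    The gradient of f at w is 2*w, so each update multiplies w
    by the factor (1 - 2*lr).
    """
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

# Small rate: |1 - 2*0.1| = 0.8 < 1, so w contracts toward 0.
print(gradient_descent(0.1))

# Large rate: |1 - 2*4.0| = 7 > 1, so |w| grows by 7x each step.
print(gradient_descent(4.0))
```

Of course, the safe threshold depends on the curvature of the actual loss surface, which is exactly what I'd like to understand better for real networks.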

I've read about it, but I'm not convinced, so I would like to deepen my understanding of this matter.

Regards,

Amirmohammad
