03 October 2018

I would like to ask the neural nets community a question. There are many activation functions, such as Sigmoid, Tanh, and ReLU, used to fire a neuron and introduce non-linearity. For better weight updates across neural network layers, which of Tanh and ReLU performs better in text classification tasks, and why?
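For context, here is a minimal sketch (not part of the original question) of how the two activations could be compared head-to-head on the same text classifier. The Keras-style API, layer sizes, vocabulary size, and binary-classification setup are all assumptions chosen for illustration.

```python
# A minimal sketch: the same small text classifier built twice, once with
# tanh and once with relu in the hidden layer, so both can be trained on
# identical data and compared. Hyperparameters below are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 64       # assumed embedding dimension

def build_classifier(hidden_activation):
    """Build an identical text classifier, varying only the hidden activation."""
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation=hidden_activation),  # 'tanh' or 'relu'
        layers.Dense(1, activation='sigmoid'),            # binary text classification
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Train both variants on the same data and compare validation accuracy, e.g.:
# for act in ('tanh', 'relu'):
#     model = build_classifier(act)
#     model.fit(x_train, y_train, validation_split=0.2, epochs=5)
```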
