I am designing a neural network with two hidden layers for a regression problem in Python. All inputs are positive, but the target output can take negative values. The model was unable to predict the negative values when I used ReLU as the activation function, and the loss stagnated over the epochs when I used tanh as the activation function in all my layers. I didn't get any additional benefit from using Leaky ReLU or PReLU. Are there any other activation functions that can be used?
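
For context, here is a minimal sketch of the kind of setup I'm describing (Keras is assumed; the layer sizes, input dimension, and toy data are placeholders, not my real problem). The activation is applied to every layer, including the output, which is what I've been swapping between ReLU and tanh.

```python
# Minimal sketch of the setup described above (Keras assumed; layer sizes,
# input dimension, and data are placeholders).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(activation: str) -> keras.Sequential:
    """Two hidden layers; `activation` ("relu" or "tanh") is applied to every layer."""
    return keras.Sequential([
        keras.Input(shape=(10,)),
        layers.Dense(64, activation=activation),   # hidden layer 1
        layers.Dense(64, activation=activation),   # hidden layer 2
        layers.Dense(1, activation=activation),    # output layer; targets can be negative
    ])

model = build_model("relu")
model.compile(optimizer="adam", loss="mse")

# Toy data: all inputs positive, targets roughly centered around zero so negatives occur.
X = np.random.rand(256, 10).astype("float32")
y = (X.sum(axis=1) - 5.0).astype("float32")

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:5]))  # with ReLU on the output layer, predictions can never go below zero
```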
