I have a three-layer MLP with which I want to approximate models of certain logics. I know that these models can be approximated using only positive weights, and this property of the network would be very advantageous for rule extraction.

Unfortunately, when limited to positive weights, the standard backpropagation algorithm tends to get stuck: the gradient keeps pulling towards local minima in which some of the weights are negative, which the constraint rules out.
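For context, here is a minimal sketch of the kind of constrained training I mean, written as projected gradient descent in PyTorch; the layer sizes, data, and hyperparameters are just placeholders for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 3-layer MLP; the layer sizes are placeholders.
model = nn.Sequential(
    nn.Linear(10, 32), nn.Sigmoid(),
    nn.Linear(32, 32), nn.Sigmoid(),
    nn.Linear(32, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.rand(64, 10)  # dummy inputs, for illustration only
y = torch.rand(64, 1)   # dummy targets

for step in range(1000):
    opt.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    # Projection step: clamp the weight matrices back onto the
    # non-negative orthant after every unconstrained update.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" in name:
                p.clamp_(min=0.0)
```

The clamp makes this effectively projected gradient descent: any weight that the gradient pushes below zero gets pinned at zero, which is where the training stalls.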

So I am wondering whether there are any learning algorithms or variants suited to this special case that I have missed in my research so far.

Any advice would be greatly appreciated.
