12 December 2014

In most neuron models, the characteristics of the output signals are treated as values derived from the characteristics of the input signals. But what is the meaning and value of such derivative information? Already at the second step of processing its meaning is no longer obvious, and at subsequent steps it is lost entirely. The value of all internal network signals is reduced to forming paths from the input signals to a specific output. This gives rise to the view of an artificial neural network as a kind of "black box," in which only the input and output signals admit a semantic interpretation.
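The point can be sketched in a few lines of code. In this minimal example (the weights are arbitrary, chosen only for illustration), the input vector and the output vector can each be given a task-level reading, while the hidden activations are purely derivative quantities with no obvious interpretation of their own:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 0.3])    # input: features with a semantic reading
W1 = rng.standard_normal((4, 3))  # first-layer weights (arbitrary)
W2 = rng.standard_normal((2, 4))  # second-layer weights (arbitrary)

# Hidden signals: derived from the input, but their individual
# meaning is already unclear after this single step.
h = np.tanh(W1 @ x)

# Output signals: interpreted only through the task the network solves.
y = np.tanh(W2 @ h)

print(h.shape, y.shape)
```

Nothing in `h` carries a label or a unit; its only role is to route the input toward a particular output, which is exactly the "black box" reading described above.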
