I am classifying a set of images with a neural network. I am training a simple network with 4 inputs (image features), 1 output, and 5 hidden layers of 10 nodes each. The entries of my weight matrices are numbers between -1 and 1, but I cannot extract any physical meaning from these weights.
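For concreteness, here is a minimal NumPy sketch of the architecture described above. The uniform [-1, 1] initialization matches the weight range I see; the tanh activations are an assumption, since I have not stated which activation I use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 4 inputs -> five hidden layers of 10 nodes -> 1 output.
sizes = [4] + [10] * 5 + [1]

# One weight matrix per layer, entries drawn uniformly from [-1, 1].
weights = [rng.uniform(-1.0, 1.0, size=(m, n)) for n, m in zip(sizes, sizes[1:])]

def forward(x):
    # tanh on hidden layers is an assumed choice, not stated in the question.
    for W in weights[:-1]:
        x = np.tanh(W @ x)
    return weights[-1] @ x  # linear output layer

# The first-layer matrix has one row per hidden node and one
# column per input feature, so each input feature touches 10 weights.
print(weights[0].shape)            # (10, 4)
print(forward(np.ones(4)).shape)   # (1,)
```

Note that each input feature already fans out into 10 first-layer weights of mixed sign, which is part of why I find a direct "importance" reading difficult.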
Do the weights reflect the importance of the inputs (image features) in the model? Shouldn't the inputs that are more strongly correlated with the output receive higher weights? How can I extract a physical meaning from the resulting weights?