Suppose there are 10 factors that influence the output of a neural network. How can we identify which of these factors have the most effect on the output, i.e., which factors change the output the most when they are varied?
You can always remove different factors from the input, then retrain and test the neural network. Removing the most significant features will cause the biggest drop in classification accuracy. Of course, this method is not exact, because removing inputs changes the NN architecture and thus its properties.
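A minimal sketch of this ablation-and-retrain loop, using scikit-learn's `MLPClassifier` and a synthetic 10-feature dataset as stand-ins for your own model and data:

```python
# Ablation approach: retrain the network with each input removed in turn
# and record the drop in test accuracy. Model and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy(X_tr_sub, X_te_sub):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X_tr_sub, y_tr)
    return clf.score(X_te_sub, y_te)

baseline = accuracy(X_tr, X_te)
drops = {}
for j in range(X.shape[1]):
    cols = [c for c in range(X.shape[1]) if c != j]
    drops[j] = baseline - accuracy(X_tr[:, cols], X_te[:, cols])

# Features whose removal causes the largest accuracy drop matter most.
ranking = sorted(drops, key=drops.get, reverse=True)
print(ranking)
```

Note that each retraining starts from a fresh random initialization, so in practice you would average over several seeds before trusting the ranking.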
One approach extracts knowledge from the trained network by quantifying the relative importance of the NN inputs from the synaptic weight values, together with a graphical representation of the results; this is usually called sensitivity analysis.
Article Illuminating the "black box": A randomization approach for u...
As others suggest, you need to examine what happens when an input is not present. The network you build is a complex, non-linear, multidimensional space in which each input plays a varying role. I would not rebuild and retrain, but rather examine, in a sensitivity analysis, what happens to your output when each input is systematically eliminated. You may find that an input has no effect, a positive effect, or an inhibitory one.
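This no-retraining variant can be sketched by keeping the trained network fixed and "eliminating" one input at a time, here by replacing it with its mean and measuring how far the predictions move; the model and data are illustrative stand-ins:

```python
# Perturbation-based sensitivity analysis on a fixed, trained network:
# neutralize each input in turn and measure the mean shift in predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=1).fit(X, y)

base = clf.predict_proba(X)[:, 1]
sensitivity = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    X_mod = X.copy()
    X_mod[:, j] = X[:, j].mean()          # "eliminate" input j
    sensitivity[j] = np.mean(np.abs(clf.predict_proba(X_mod)[:, 1] - base))

print(np.argsort(sensitivity)[::-1])      # most influential inputs first
```

Comparing the sign of the prediction shift (rather than its absolute value) also reveals whether an input's effect is positive or inhibitory, as mentioned above.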
I would suggest using ID3 for data classification to help you find the set of the most important/influential features. You can then train your network with and without this feature set to see how the classification accuracy is affected.
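A quick sketch of this idea with scikit-learn; note that scikit-learn does not implement ID3 directly, so CART with the entropy criterion is used here as a stand-in, and the data is synthetic for illustration:

```python
# Decision tree (entropy criterion, as a proxy for ID3) used to rank features
# before deciding which subset to feed the neural network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=2)
tree = DecisionTreeClassifier(criterion="entropy", random_state=2).fit(X, y)

# Impurity-based importances; the top-ranked features form the candidate set
# to keep (or drop) when retraining the neural network.
ranking = np.argsort(tree.feature_importances_)[::-1]
print(ranking[:4])
```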
Olden's technique, as discussed by Marcin Nowak above, has been shown to be statistically superior for determining variable importance. It takes the influence of negative coefficients into account rather than using absolute values, as Garson's algorithm does. Olden's method is included in the NeuralNetTools package in R.
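For readers not using R, the core of Olden's connection-weight product for a single-hidden-layer network is only a couple of lines; `W` and `V` below are random stand-ins for trained weight matrices:

```python
# Olden's connection-weight product: sum of signed input-to-hidden times
# hidden-to-output weights. Unlike Garson, no absolute values are taken, so
# inhibitory paths can cancel excitatory ones and the sign is preserved.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(10, 6))       # input-to-hidden weights
V = rng.normal(size=(6, 1))        # hidden-to-output weights

olden = (W @ V).ravel()            # signed importance per input
print(np.argsort(np.abs(olden))[::-1])   # rank by magnitude; sign = direction
```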
This can be done by sensitivity analysis. You may test the model on a new simulated dataset that has the same descriptive statistics as your original data; the predicted outputs can then be used for the sensitivity analysis. You can see exactly how in my forthcoming article "Durability evaluation of GFRP rebars in alkaline concrete environment using optimized tree-based random forest model".
There is a time-consuming but effective method, "sequential addition and removal of input data elements":
1. Build a model on all elements of the input data (10);
2. Train and test this model on the 10 elements (record the accuracy);
3. Train and test on 9 elements (record the accuracy);
4. And so on, removing one element at a time.
Plot the accuracy as a function of the input data elements used. Theoretically, the correlation matrix can also be used to estimate the strength of the relationship between the input data elements and the target variables.
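The correlation check mentioned at the end takes only a few lines; the data below is synthetic for illustration, with the target deliberately built from features 0 and 3:

```python
# First-pass screening: correlate each input element with the target.
# Only linear relationships are captured, so treat this as a rough guide.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=500)

corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
print(np.argsort(np.abs(corr))[::-1])   # strongest linear relationships first
```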