There are many papers, pure and applied, on this topic. I take it you are looking for applications.
You can
1. measure the change in the network's output under perturbation of single input variables of the mean object of each training class;
2. measure the average sensitivity over the objects of each class; or
3. follow M. Gevrey, I. Dimopoulos, and S. Lek, "Review and comparison of methods to study the contribution of variables in artificial neural network models," Ecological Modelling, vol. 160, no. 3, pp. 249–264, 2003.
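Approach 1 above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the "trained network" here is a hypothetical stand-in with fixed random weights, and `x_mean` is a made-up class-mean vector.

```python
import numpy as np

# Hypothetical stand-in for a trained network: a tiny one-hidden-layer MLP
# with fixed random weights (purely illustrative, not a real trained model).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def model(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def single_input_sensitivity(model, x_mean, eps=1e-3):
    """Change in model output when each input variable of a class-mean
    vector is perturbed by eps, one variable at a time."""
    base = model(x_mean)
    sens = np.empty((x_mean.size, base.size))
    for i in range(x_mean.size):
        x = x_mean.copy()
        x[i] += eps
        sens[i] = (model(x) - base) / eps
    return sens  # rows: input variables, columns: network outputs

# Made-up mean object of one training class
x_mean = np.array([0.5, -1.0, 0.2, 0.8])
S = single_input_sensitivity(model, x_mean)
```

Approach 2 then amounts to computing this matrix for every object of a class and averaging, rather than using only the class mean.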
If you are interested in decision-making applications, then see, for example:
There is also the more fundamental question of whether a correct sensitivity formula exists at all. If the model represented by the ANN yields not only a unique solution for every input near a base case, but also one stable enough for a derivative to exist in the direction of parameter movement at that base case, then the sensitivity analysis discussed in the question can be defined (and perhaps computed). Whether that holds depends crucially on the properties of the original model, of course.
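When such a directional derivative does exist, it can be estimated numerically. A minimal sketch, using a symmetric finite difference and a made-up smooth function standing in for the ANN's scalar output:

```python
import numpy as np

def directional_derivative(f, x0, d, eps=1e-5):
    """Two-sided finite-difference estimate of the derivative of f at the
    base case x0 in direction d; meaningful only where f is smooth."""
    d = d / np.linalg.norm(d)
    return (f(x0 + eps * d) - f(x0 - eps * d)) / (2 * eps)

# Hypothetical smooth model output (stand-in for an ANN)
f = lambda x: np.tanh(x).sum()
x0 = np.array([0.3, -0.7])   # base case
d = np.array([1.0, 1.0])     # direction of parameter movement
g = directional_derivative(f, x0, d)
```

If the estimate changes wildly as `eps` shrinks, or differs between the two one-sided differences, that is a sign the derivative (and hence this kind of sensitivity) may not exist at that base case.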
You will find in the paper below a sensitivity measure based on "what if?" simulations. In short, take the sum over perturbations a of the expected value with respect to p(x_i) of the difference:

S = sum_a E_{p(x_i)} [ f_w(x_i + a) - f_w(x_i) ],

where f_w denotes the model, for instance an MLP with weights w.
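The measure above can be estimated by Monte Carlo over a data sample. A hedged sketch, with a fixed linear-tanh map as a hypothetical stand-in for the trained model f_w and an arbitrary choice of shifts a:

```python
import numpy as np

rng = np.random.default_rng(1)

def what_if_sensitivity(f, X, i, shifts):
    """Monte Carlo estimate of sum_a E_{p(x)}[ f(x with x_i + a) - f(x) ],
    using the rows of X as samples standing in for the data distribution."""
    base = f(X)
    total = 0.0
    for a in shifts:
        Xp = X.copy()
        Xp[:, i] += a          # "what if" variable i were shifted by a?
        total += np.mean(f(Xp) - base)
    return total

# Hypothetical model f_w (not from the cited paper): a fixed linear-tanh map.
w = np.array([0.8, 0.0])       # second input deliberately irrelevant
f = lambda X: np.tanh(X @ w)

X = rng.normal(size=(500, 2))  # samples approximating p(x)
s0 = what_if_sensitivity(f, X, 0, shifts=[-0.5, 0.5])
s1 = what_if_sensitivity(f, X, 1, shifts=[-0.5, 0.5])
```

An input the model ignores (here the second one, with zero weight) gets a sensitivity of exactly zero, which is a quick sanity check for any implementation of this measure.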
I have formulated a dependency matrix that allows you to calculate the sensitivity between dependent and independent variables in a multilayer perceptron network; see my publications. This design and implementation goes back to 1992 and has been applied in many complex industrial applications over the past decades.
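The poster's own 1992 formulation is defined in their publications; as a generic illustration of what a dependency matrix between inputs and outputs can look like, one common choice is the mean absolute input-output Jacobian of the MLP, computed here analytically by the chain rule for a one-hidden-layer network (all weights are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # 3 inputs, 5 hidden units
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)   # 2 outputs

def forward(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

def jacobian(x):
    """d(outputs)/d(inputs) for y = tanh(x W1 + b1) W2 + b2, by chain rule."""
    h = np.tanh(x @ W1 + b1)
    return (W1 * (1 - h ** 2)) @ W2   # shape: (inputs, outputs)

# Dependency matrix: mean absolute Jacobian over a sample of inputs
X = rng.normal(size=(100, 3))
D = np.mean([np.abs(jacobian(x)) for x in X], axis=0)
```

Entry D[i, k] then quantifies how strongly output k depends on input i on average over the sample; this is only one possible construction, not the poster's published method.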