Please find attached a recent publication that addresses your question.
Neural Comput & Applic (2014) 25:859–869
DOI 10.1007/s00521-014-1573-5
Extracting the contribution of independent variables in neural
network models: a new approach to handle instability
Juan de Oña • Concepción Garrido
Abstract One of the main limitations of artificial neural networks (ANN) is their inability to express explicitly the relations established between the explanatory variables (inputs) and the dependent variables (outputs). This is a major reason why they are usually called "black boxes." In the last few years, several methods have been proposed to assess the relative importance of each explanatory variable. Nevertheless, no consensus has been reached on which method performs best, largely because the relative importance obtained for each variable differs depending on the method used. This importance also varies with the designed network architecture and/or with the initial random weights used to train the ANN. This paper proposes a procedure that seeks to minimize these problems and provides consistency in the results obtained from different methods. Essentially, the idea is to work with a set of neural networks instead of a single one. The proposed procedure is validated using a database collected from a customer satisfaction survey conducted on the public transport system of Granada (Spain) in 2007. The results show that, when each method is applied independently, the variables' importance rankings are similar and, in addition, coincide with the hierarchy established by researchers who have applied other techniques.
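To make the paper's core idea concrete, here is a minimal sketch of the "set of networks instead of one" procedure: compute a relative-importance measure (Garson's connection-weights algorithm, one of the classic methods in this literature) for each member of an ensemble of single-hidden-layer networks, then average the importances across members to get a stable consensus ranking. The random weight matrices below are stand-ins; in practice each `(W_ih, W_ho)` pair would come from a separately trained ANN.

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Garson's algorithm for a single-hidden-layer network.
    W_ih: input-to-hidden weights, shape (n_inputs, n_hidden).
    W_ho: hidden-to-output weights, shape (n_hidden,).
    Returns per-input importance fractions summing to 1."""
    # absolute contribution of each input through each hidden unit
    c = np.abs(W_ih) * np.abs(W_ho)        # (n_inputs, n_hidden)
    # normalise within each hidden unit, then sum over hidden units
    r = (c / c.sum(axis=0)).sum(axis=1)
    return r / r.sum()

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_nets = 5, 8, 30

# Stand-in for an ensemble of trained networks (random weights here,
# purely illustrative): each row is one network's importance vector.
imps = np.array([
    garson_importance(rng.normal(size=(n_inputs, n_hidden)),
                      rng.normal(size=n_hidden))
    for _ in range(n_nets)
])

mean_imp = imps.mean(axis=0)      # averaged importance per input
ranking = np.argsort(-mean_imp)   # consensus ranking across the set
```

Averaging over many networks is what damps the instability the abstract describes: any single network's importances depend on its architecture and random initialization, but the ensemble mean converges to a consistent ordering.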
The contribution of each input can also be modeled using committee machines, where each input variable is weighted (probabilistically or with fuzzy weights) based on how much it should contribute to the result. I shared some ANN code on my GitHub: https://github.com/newadays
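For the committee-machine idea, a minimal sketch (my own illustrative code, not from the paper): a committee combines its members' outputs with weights, which could be set probabilistically, from validation performance, or by fuzzy membership; here they are just normalized fixed values.

```python
import numpy as np

def committee_predict(preds, weights):
    """Weighted committee output.
    preds: array of shape (n_members, n_samples), one row per member.
    weights: relative member weights (normalized internally)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # ensure weights sum to 1
    return w @ np.asarray(preds)    # weighted average per sample

# Hypothetical predictions from two committee members on two samples,
# with the second member trusted three times as much as the first.
preds = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
out = committee_predict(preds, [0.25, 0.75])   # -> [2.5, 3.5]
```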