Let me explain the case: I collected 100 observations from real tests on concrete samples. Five factors were considered, f1, f2, f3, f4, f5, and the strength was recorded as f6. From these data I want to build a model that represents this data set, i.e. given the five inputs I can predict f6. For this purpose I used a neural network as the prediction tool, but I did not use uncertainty or sensitivity analysis. My question is: why do I need these two analyses? By the way, I don't need sensitivity analysis to deal with imprecision in the data; the data are okay.
There is experimental uncertainty in f1-f6, due to e.g. sensor precision. However, the measurement error in f1-f6 may be much larger than indicated by the sensor precision figures in the data sheet, e.g. due to temperature variations, poor positioning of the sensors, or if you perform an indirect measurement to estimate some of the factors. In addition, in most practical situations there are uncertainties in the test setup itself. That is, if you do 100 repeated tests, f1-f6 will not be exactly constant; rather, they should probably be represented by probability density functions (PDFs) (aleatory uncertainty) and/or intervals (epistemic uncertainty).
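To make this concrete, here is a minimal sketch of propagating assumed input PDFs through a prediction model with Monte Carlo sampling. The nominal values, standard deviations, and the linear stand-in for the trained network are all placeholders, not values from the actual concrete data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal input values and assumed measurement standard deviations
# (all numbers are placeholders, not from the actual concrete data).
nominal = np.array([300.0, 0.45, 20.0, 28.0, 5.0])   # f1..f5
sigma   = np.array([  5.0, 0.01,  1.0,  0.5, 0.2])   # sensor/setup uncertainty

def model(x):
    # Stand-in for the trained neural network's prediction of f6.
    return 0.1 * x[0] - 30.0 * x[1] + 0.5 * x[2] + 0.2 * x[3] + x[4]

# Propagate the input PDFs through the model by Monte Carlo sampling.
samples = rng.normal(nominal, sigma, size=(10_000, 5))
predictions = np.array([model(s) for s in samples])

print(f"f6 prediction: {predictions.mean():.2f} +/- {predictions.std():.2f}")
```

The spread of the predictions then reflects how much of the apparent variation in f6 could come from measurement uncertainty alone, which is exactly what a model fitted to the raw points would otherwise absorb.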
If the experimental uncertainty is not considered when developing a model, such as a neural network, there is a risk of overfitting and of the model inadvertently being calibrated to include the experimental uncertainties.
In addition to experimental uncertainty, there are also different types of uncertainties related to the model itself: 1) input/parameter uncertainty, 2) numerical approximations, and 3) model-form uncertainty due to the selected model structure, assumptions, and simplifications.
If these model uncertainties are not considered when using the model for predictions, the predictions cannot be said to have high credibility, at least not in a formal context.
High-consequence areas like nuclear power and climate have come far with advanced methods for uncertainty quantification (UQ). Personally, I'm working on approximate/simplified methods for UQ of simulation models of dynamic physical systems, for use in early phases of system development (aeronautical context).
Magnus Eek's answer very nicely covers the uncertainty side. On the sensitivity side, training the neural network inherently reveals insights into the relative sensitivity of the output quantity to the input predictors. An illustration is in the results section of https://www.researchgate.net/publication/258046876_Optimization_of_an_artificial_neural_network_used_for_the_prognostic_of_cancer_patients
A first-order sensitivity analysis of an ANN is equivalent to calculating the first-order partial derivatives of the trained neural network. This gives valuable insight into the trained network. I will present a paper at a conference at the end of May, where I go a step further and quantify the sensitivity of the input-output relationship, typically for multi-input and multi-output ANNs. This derived matrix, which I call the "Dependency Matrix", gives quick insight into whether the model parameters are relevant and what their impact is on the dependent variables. It can also be used for dimensionality reduction.
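As a sketch of the partial-derivative idea: for a one-hidden-layer tanh network, the Jacobian of the output with respect to the inputs can be written in closed form. The weights below are random placeholders standing in for a trained network, and the architecture (5 inputs, 8 hidden units, 1 output) is just an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights of a hypothetical trained network: 5 inputs, 8 tanh hidden
# units, 1 output. Random values stand in for the trained ones.
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def jacobian(x):
    # d(out)/d(x) = W2 @ diag(1 - h^2) @ W1 for a tanh hidden layer.
    h = np.tanh(W1 @ x + b1)
    return W2 @ (np.diag(1.0 - h**2) @ W1)

x0 = np.zeros(5)          # evaluation point (e.g. the mean input)
sens = jacobian(x0).ravel()
print("first-order sensitivities at x0:", sens)
```

Evaluating this Jacobian at several representative inputs (rather than a single point) gives a more robust picture of which of f1-f5 the prediction of f6 actually depends on.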
If any of you would like to join me in publishing a paper by doing the sensitivity analysis for the data, that would be great. I really don't know how to do this part of the work. So please send me an email if you would like to participate, and I can send you the data. Regards.
You can use a Kalman-type filter algorithm or software for data elimination. Once this is done, unnecessary data or inputs/outputs can be eliminated. Papers on neural networks that include filtering will be useful for you.
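For readers unfamiliar with the suggestion, a minimal scalar Kalman filter for smoothing noisy repeated measurements might look as follows. The signal, noise levels, and the constant-state model are all assumptions for illustration, not derived from the concrete data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy repeated measurements of a constant true value (placeholder data).
true_value = 35.0
z = true_value + rng.normal(0.0, 2.0, size=50)

# Scalar Kalman filter for a constant state: x_k = x_{k-1} + w_k.
x_est, p = z[0], 1.0      # initial state estimate and its variance
q, r = 1e-4, 4.0          # assumed process and measurement noise variances

for zk in z[1:]:
    p = p + q                          # predict: variance grows by process noise
    k = p / (p + r)                    # Kalman gain
    x_est = x_est + k * (zk - x_est)   # update with the new measurement
    p = (1.0 - k) * p                  # updated estimate variance

print(f"filtered estimate: {x_est:.2f} (true value {true_value})")
```

The filtered estimate converges toward the underlying value, which is the sense in which such a filter can clean measurement noise out of the data before training.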