Generally, an ANN adjusts to such errors/noise during the training phase, so if the error is not approaching a critical level, you may not need a separate error/noise removal step. If the error is approaching a critical level, then you may want to rethink the physical operators.
Use the same amount of noise in the testing phase as you used in the learning phase. If the learning went well, the test error and the learning error should be similar.
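A minimal sketch of this check, assuming a scikit-learn MLPRegressor stands in for your network and Gaussian input noise at a made-up level; the target function, noise level, and network size are all illustrative choices:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
noise_std = 0.1  # use the SAME level for training and testing

X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X).ravel()
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

# corrupt both phases with identical-strength noise
X_train_noisy = X_train + rng.normal(0, noise_std, X_train.shape)
X_test_noisy = X_test + rng.normal(0, noise_std, X_test.shape)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train_noisy, y_train)

train_err = mean_squared_error(y_train, model.predict(X_train_noisy))
test_err = mean_squared_error(y_test, model.predict(X_test_noisy))
print(f"train MSE: {train_err:.4f}, test MSE: {test_err:.4f}")
# if learning went well, the two values should be of similar magnitude
```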
Any data set from a real application contains errors of some kind: noise, missing values, and so on. Before applying any machine learning or data mining technique, you should first decide what you want and what you expect to get from the data; based on that decision, prepare your data set, and only then apply the algorithms. In reality, at least 70% of the time, cost, and effort of any machine learning or massive data mining project belongs to this preparation phase :)
There will always be some amount of noise in your (realistic) data. The problem is that an ANN may easily 'overtrain', that is, fit the model to the noise in the training set. Prediction on 'unknown' data (data not used for training) will then be poor. It all depends on the amount of noise relative to the valid data (signal) and on the number of patterns (subjects) you have.
Although there have been many approaches to building generalising and noise-tolerant ANNs, the best approach in my experience (especially with relatively small or highly noisy data sets) is to rely on the leave-one-out method and to use only the leave-one-out results to evaluate and compare ANN models.
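A minimal leave-one-out sketch along these lines, using scikit-learn's LeaveOneOut splitter with an MLPRegressor as a placeholder model; the data set and network size are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))              # small, noisy data set
y = X[:, 0] - X[:, 1] + rng.normal(0, 0.2, 30)    # signal + noise

# train on all-but-one sample, score on the single held-out sample,
# and use only these held-out predictions to compare candidate models
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append((pred[0] - y[test_idx][0]) ** 2)

print(f"leave-one-out MSE: {np.mean(errors):.4f}")
```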
I assume that your question relates to the following setting: the input data is the result of some noisy measurement while the output data is noiseless; the noise level associated with each measurement is known at training and deployment time; and differences in noise levels across samples may come from external, uncontrolled factors.
Including information about the noise level in the training phase is quite reasonable, as one or more additional input variables. After all, you would expect very noisy samples to play a less important role in training than "clean" samples; by feeding this information to the NN, you let it decide whether to use it or not (see the sketch below).
Obviously, this is only useful if you have access to this "noise level" information at deployment time!
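A minimal sketch of that idea, assuming the per-sample noise level is simply appended as an extra input column; the data, noise model, and network size are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
x_true = rng.uniform(-1, 1, n)
noise_level = rng.choice([0.01, 0.1, 0.5], size=n)   # known per-sample noise
x_measured = x_true + rng.normal(0, noise_level)      # noisy measurement
y = np.sin(3 * x_true)                                # noiseless output

# stack the measurement and its known noise level as two input features
X = np.column_stack([x_measured, noise_level])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# at deployment, the noise level of each new measurement must be supplied too
x_new = np.column_stack([[0.3], [0.01]])
print(model.predict(x_new))
```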
ANNs are supposed to be able to deal with noisy input data. That is, if the training set is large enough and the direction of the errors is reasonably random, the errors cancel out and the ANN learns the real input-output relationship. If, instead, there is some kind of systematic bias in the input data error, I am afraid no learning procedure can recover the correct relationship. Therefore, if your training set is reasonably large, I don't think you should worry about errors. Alternatively, you might want to use some kind of fuzzy neural network.
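A quick numeric illustration of this point, using a plain linear regression as a stand-in for any learner; all numbers here are invented. With zero-mean error, the estimates approach the true relationship as the data set grows; with a systematically biased error, they never do:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

for n in (100, 100000):
    x = rng.uniform(-1, 1, n)
    y_clean = 2.0 * x + 1.0                    # true relationship
    for name, err in [("random", rng.normal(0.0, 0.5, n)),   # zero-mean
                      ("biased", rng.normal(0.5, 0.5, n))]:   # systematic bias
        fit = LinearRegression().fit(x.reshape(-1, 1), y_clean + err)
        print(f"n={n:6d} {name} error -> slope={fit.coef_[0]:.3f} "
              f"intercept={fit.intercept_:.3f}")
# random error: estimates approach (2.0, 1.0) as n grows;
# biased error: the intercept stays off by the bias no matter how large n is
```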
Whatever method you use to identify a relationship, you should remove bias from your data to identify it reliably; and if your inputs contain errors, you need a large data set for the best identification.