Imagine I have an ideal signal, V, and noise, W, so that my observed signal is S = V + W. Within my measurement range I observe a peak that is associated with a peak in the ideal signal, but not located at exactly the same position. So there is an error in the detected peak position, as depicted in the figure.
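For concreteness, here is a minimal sketch of the setup I have in mind (the Gaussian peak shape, noise level, and correlation length are my own illustrative choices, not properties of my actual measurement):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(-5.0, 5.0, 2001)

# Ideal signal V: a single smooth peak centred at t0 = 0 (illustrative choice).
V = np.exp(-t**2 / 2.0)

# Noise W: white Gaussian noise smoothed a little so it has some auto-correlation.
w = rng.normal(0.0, 0.05, size=t.size)
kernel = np.ones(15) / 15.0
W = np.convolve(w, kernel, mode="same")

# Observed signal and naive peak detection by argmax.
S = V + W
t_peak_ideal = t[np.argmax(V)]   # true peak location (0 by construction)
t_peak_obs = t[np.argmax(S)]     # detected peak location in the noisy data

print("peak-location error:", t_peak_obs - t_peak_ideal)
```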
My question is: how can I calculate the peak-detection error from the analytical features of the ideal signal and the statistical characteristics of the noise? Say I have a candidate for the ideal signal obtained from a calculation, so I know, for instance, the second derivative of V at the peak. I can also extract the distribution and auto-correlation of W from another region of the observed signal where I expect V to be constant. How can I put these pieces of information together to quantify the error in the peak position?
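To show the kind of relation I am hoping for, here is my rough first-order guess (not a derivation, and possibly wrong): if the ideal peak is at $t_0$ with $V'(t_0) = 0$, and the observed peak sits at $t_0 + \delta$ where $S'(t_0 + \delta) = 0$, then expanding to first order in $\delta$ would suggest

$$
V'(t_0 + \delta) + W'(t_0 + \delta) \approx \delta\, V''(t_0) + W'(t_0) = 0
\quad\Longrightarrow\quad
\delta \approx -\frac{W'(t_0)}{V''(t_0)},
$$

so that the spread of the peak-location error would be something like

$$
\operatorname{Var}(\delta) \approx \frac{\operatorname{Var}\!\big(W'(t_0)\big)}{\big(V''(t_0)\big)^{2}},
$$

with $\operatorname{Var}(W')$ presumably obtainable from the auto-correlation of W. Whether this is the right way to combine the pieces, and how to do it rigorously, is exactly what I am asking.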
I would expect plenty of literature on such basic concepts in signal analysis and noise characterization, but I have not been able to find any relevant resources. I would appreciate any help, whether a pointer to the literature or an outline of a solution.