My question is in the context of testing and choosing the best weighted linear regression for a calibration model (calibration curves using an internal standard). How should the use of a weighting factor such as 1/y^2 be justified when working on bioanalytical/analytical method validation? What practical factors would confirm that a response-based weight (1/y or 1/y^2), rather than a concentration-based weight (1/x or 1/x^2), better describes the data from calibration curves acquired on an LC-MS/MS system? Signal saturation? Contamination? Accumulation of biological matrix debris? A slight trend in the data that favors 1/y^2?
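To make the comparison concrete, here is a minimal sketch of how I am fitting the candidate weightings (the concentration/response arrays are made up, and I am assuming a simple straight-line model y = a + b·x; I use statsmodels' WLS, where the weights multiply the squared residuals, so a weight of 1/y^2 is the "1/y^2" scheme):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical calibration data: nominal concentrations (x) and
# instrument responses (y, e.g., peak-area ratios to the internal standard)
x = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
y = np.array([0.021, 0.043, 0.080, 0.21, 0.40, 1.05, 1.98, 4.10])

X = sm.add_constant(x)  # design matrix for the line y = a + b*x

# statsmodels WLS minimizes sum(w_i * residual_i^2), so each entry
# below is the weighting factor exactly as written in the question
fits = {
    "1/x":   sm.WLS(y, X, weights=1 / x).fit(),
    "1/x^2": sm.WLS(y, X, weights=1 / x**2).fit(),
    "1/y":   sm.WLS(y, X, weights=1 / y).fit(),
    "1/y^2": sm.WLS(y, X, weights=1 / y**2).fit(),
}

for name, res in fits.items():
    print(name, res.params)  # intercept a and slope b for each weighting
```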
In addition to the above, if incorporating the weighting factor 1/x^2 (where x is the nominal analyte concentration) or 1/y^2 (where y is the instrument response) into the model yields similar performance in terms of the sum of the absolute percentage relative errors (%RE), which one would be more appropriate to adopt in the calibration model? What conclusions can be drawn from the choice of one or the other regarding the behavior of the data (and the nature of that behavior)? (The sketch below shows how I am computing the sum of |%RE| for each weighting.)
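Continuing the sketch above, this is how I am evaluating each fit: back-calculate the concentrations from the fitted line and sum the absolute %RE over the calibration standards (again assuming the straight-line model and the hypothetical arrays defined earlier):

```python
def sum_abs_re(res, x, y):
    """Back-calculate concentrations from the fitted line and return
    the sum of absolute percentage relative errors (%RE)."""
    a, b = res.params          # intercept, slope
    x_back = (y - a) / b       # back-calculated concentrations
    re_pct = 100 * (x_back - x) / x
    return np.sum(np.abs(re_pct))

for name, res in fits.items():
    print(f"{name}: sum |%RE| = {sum_abs_re(res, x, y):.1f}")
```

In my real data, the 1/x^2 and 1/y^2 rows of this comparison come out nearly identical, which is what prompts the question.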
I must say that I don't have a strong background in statistics, so I apologize if I've expressed anything incorrectly.
Thank you