23 March 2020

For a while, I have been experimenting with improving the linearity of my method. I have noticed that increasing the concentration of my deuterium-labeled (3x) internal standard to 2.5 times the ULOQ concentration improved linearity considerably. However, I do not understand why. Does anyone have an explanation for this? (I am using LC-MS/MS.)

The range I am working with is 10 - 20,000 ng/mL, which is quite a wide range to cover. Adding 10 µL of 50,000 ng/mL internal standard to 100 µL of my calibration standards gives near-perfect linearity.
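For concreteness, this is roughly what those volumes work out to (a quick Python sketch; the volumes and concentrations are simply the ones quoted above, so the in-vial numbers are only as good as those figures):

# In-vial concentrations for the spiking scheme described above:
# 10 µL of 50,000 ng/mL SIL-IS added to 100 µL of calibration standard.
is_spike_vol_uL = 10.0      # volume of IS working solution added
is_spike_conc = 50_000.0    # ng/mL, i.e. 2.5 x the 20,000 ng/mL ULOQ
sample_vol_uL = 100.0       # volume of calibration standard

total_vol_uL = sample_vol_uL + is_spike_vol_uL

# Final internal-standard concentration after dilution into the sample
is_final = is_spike_conc * is_spike_vol_uL / total_vol_uL
print(f"In-vial IS concentration: {is_final:.0f} ng/mL")        # ~4545 ng/mL

# The calibration standards are diluted by the same spike
for nominal in (10.0, 20_000.0):                                # LLOQ and ULOQ
    analyte_final = nominal * sample_vol_uL / total_vol_uL
    print(f"{nominal:>8.0f} ng/mL standard -> {analyte_final:>8.1f} ng/mL in vial, "
          f"IS/analyte ratio {is_final / analyte_final:.2f}")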

From the previous questions I have asked, I now assume that the most likely reason for saturation (in my case) is the formation of dimers/trimers etc. in the higher-concentration standards. That could also explain why, when I use isotopologues, the linearity does not improve at all.

How is it possible that increasing the SIL-IS, which has the same retention time as the analyte, improved linearity? I would have thought that it would actually induce more multimer formation, but this is obviously not the case...
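(For anyone wondering what I mean by "linearity": a weighted fit of the analyte/IS peak-area ratio against nominal concentration, followed by back-calculating each standard, roughly like the sketch below. The peak areas in it are made-up placeholders, not data from my method, and 1/x² weighting is just a common choice for a range this wide.)

import numpy as np

# Made-up peak areas purely to illustrate the linearity check
nominal = np.array([10, 50, 250, 1_000, 5_000, 10_000, 20_000], dtype=float)  # ng/mL
analyte_area = np.array([1.2e3, 6.0e3, 3.0e4, 1.2e5, 6.0e5, 1.2e6, 2.4e6])
is_area = np.full_like(analyte_area, 5.0e5)      # SIL-IS area, roughly constant

ratio = analyte_area / is_area

# Weighted linear fit of ratio vs concentration (polyfit squares the weights,
# so passing 1/x gives an effective 1/x^2 weighting)
w = 1.0 / nominal
slope, intercept = np.polyfit(nominal, ratio, deg=1, w=w)

# Back-calculate each standard and report its accuracy
back_calc = (ratio - intercept) / slope
for c, bc in zip(nominal, back_calc):
    print(f"{c:>8.0f} ng/mL  back-calculated: {bc:>9.1f} ng/mL  ({100 * bc / c:5.1f} %)")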
