
I am working on developing and validating an impurity HPLC method. It's a pretty straightforward method, with a main peak (monomer) and an impurity peak (dimer).

To evaluate linearity/accuracy/range per ICH guidelines, the classic approach is to take an ultrapure sample and spike it with known amounts of dimer, or with known amounts of a sample with high dimer content. The sample concentration should remain roughly constant (i.e., the monomer peak area stays constant). The spike recoveries can then be used to evaluate the method's linearity and accuracy.
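For what it's worth, here's roughly how I'd work up the spike data; the spike levels and measured values below are just placeholders to show the calculation, not real data:

import numpy as np

# Hypothetical spike levels (% dimer added) and measured results (% dimer found)
spiked_pct = np.array([0.05, 0.10, 0.25, 0.50, 1.0, 2.0])
measured_pct = np.array([0.052, 0.098, 0.251, 0.495, 1.01, 1.98])

# Accuracy: % recovery at each spike level
recovery = 100 * measured_pct / spiked_pct

# Linearity: fit measured vs. spiked and check slope, intercept, correlation
slope, intercept = np.polyfit(spiked_pct, measured_pct, 1)
r = np.corrcoef(spiked_pct, measured_pct)[0, 1]

print(recovery)             # want recoveries within your acceptance window at each level
print(slope, intercept, r)  # want slope ~1, intercept ~0, r close to 1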

Another approach I’ve heard used in method development is to take a sample with high dimer content (e.g. 2%) and serially dilute it until you no longer get accurate and precise dimer quantification. For example, a 1:20 dilution may be the most you can dilute the sample while still getting accurate and precise dimer results. You could then divide 2% by that dilution factor to get the lower end of your assay range (i.e., the LLOQ: 2%/20 = 0.1%).
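Just to make sure I understand the arithmetic, here's a rough sketch of that dilution-based LLOQ estimate; the dilution factors, %RSD values, and the 10% acceptance criterion are all assumptions I made up for illustration:

# Dimer level in the undiluted, high-dimer sample (%)
high_dimer_pct = 2.0

# Hypothetical serial dilutions tested and the precision (%RSD) observed at each
dilution_factors = [2, 5, 10, 20, 50]
rsd_pct = [1.2, 1.8, 3.5, 8.0, 25.0]

# Assumed precision acceptance criterion
max_acceptable_rsd = 10.0

# Largest dilution that still meets the criterion sets the lower end of the range
passing = [d for d, r in zip(dilution_factors, rsd_pct) if r <= max_acceptable_rsd]
lloq_pct = high_dimer_pct / max(passing)   # e.g. 2% / 20 = 0.1%
print(lloq_pct)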

I’m not used to the second approach (the serial dilution). Can anyone tell me what I might be overlooking with it? I know it doesn't follow ICH guidelines, but from a scientific point of view, why is it or isn't it appropriate?
