The specific steps and acceptance criteria are determined by the test's purpose, but general parameters are evaluated in every validation. Both the International Council for Harmonisation (ICH) and the United States Pharmacopeia (USP) provide guidelines for validating analytical methods, which are crucial for ensuring data reliability in the pharmaceutical industry.
Method validation steps (ICH Q2(R1) and USP)
The following steps outline the key parameters that must be validated to confirm a method's suitability for its intended purpose.
1. Specificity
Definition: The ability to measure the analyte accurately and without interference from other components that might be present in the sample, such as impurities, degradants, and excipients.
How to test: Challenge the method by analyzing samples containing known impurities or potential interfering substances and verify that they do not affect the analyte's measurement. For stability-indicating methods, forced degradation studies are performed to ensure the method can separate the active ingredient from its degradation products.
Acceptance criteria: The analyte peak must be well separated from all other peaks. For identification tests, the method must confirm the identity of the analyte. For impurity tests, the method must separate the analyte from all potential impurities.
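Peak separation in chromatographic specificity testing is commonly quantified with the USP resolution formula, Rs = 2(t2 − t1)/(w1 + w2), where t1 and t2 are retention times and w1 and w2 are baseline peak widths. The sketch below uses hypothetical retention times and widths for an analyte and a neighboring impurity peak; an Rs of at least 1.5 is generally taken to indicate baseline separation.

```python
def resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """USP peak resolution from retention times and baseline peak widths (same time units)."""
    return 2 * (t2 - t1) / (w1 + w2)

# Hypothetical values: analyte at 5.6 min, impurity at 4.2 min, widths in min
rs = resolution(t1=4.2, t2=5.6, w1=0.5, w2=0.6)
print(round(rs, 2))  # 2.55 -> baseline separation (Rs >= 1.5)
```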
2. Accuracy
Definition: The closeness of the measured value to the true value. It is often expressed as the percentage of recovery.
How to test: Use a fully validated reference method or spike a placebo sample with a known amount of analyte to demonstrate consistent recovery across the method's range.
Acceptance criteria: For assays, the typical acceptance range for recovery is 98–102%, though the exact criteria should be justified based on the analyte's concentration and complexity.
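The recovery check described above is simple arithmetic: measured amount divided by spiked amount, times 100. A minimal sketch, using hypothetical spike-recovery data at 80%, 100%, and 120% of the target level:

```python
def percent_recovery(measured: float, spiked: float) -> float:
    """Accuracy expressed as % recovery of a known spiked amount."""
    return 100.0 * measured / spiked

# Hypothetical (measured, spiked) amounts in mg at three spike levels
pairs = [(39.6, 40.0), (50.3, 50.0), (59.1, 60.0)]
recoveries = [percent_recovery(m, s) for m, s in pairs]
print([round(r, 1) for r in recoveries])  # [99.0, 100.6, 98.5]

# All three levels fall inside the typical 98-102% assay acceptance range
assert all(98.0 <= r <= 102.0 for r in recoveries)
```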
3. Precision
Definition: The degree of agreement among individual test results when the method is repeatedly applied to multiple samples. It is evaluated at three levels:
Repeatability: Precision under the same operating conditions over a short time interval.
Intermediate precision: Precision within a laboratory over different days, with different analysts, or using different equipment.
Reproducibility: Precision between different laboratories, typically relevant for standardization.
How to test: Analyze multiple aliquots of a homogeneous sample. This is often done by preparing and testing at least six replicates at 100% concentration for repeatability testing.
Acceptance criteria: Typically expressed as the relative standard deviation (RSD). For assays, a repeatability RSD of ≤ 1% to 2% is common, while for impurity determinations, a higher RSD (e.g., up to 10% to 15%) may be acceptable, particularly near the quantitation limit.
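The %RSD used as the precision acceptance criterion is the sample standard deviation expressed as a percentage of the mean. A sketch with hypothetical assay results for six replicates at 100% concentration:

```python
from statistics import mean, stdev

def relative_std_dev(values) -> float:
    """%RSD: sample standard deviation as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical assay results (% of label claim), six replicates
replicates = [99.8, 100.4, 100.1, 99.6, 100.3, 99.9]
rsd = relative_std_dev(replicates)
print(round(rsd, 2))

# Meets the common repeatability criterion of <= 2% for assays
assert rsd <= 2.0
```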
4. Linearity and Range
Definition:
Linearity: The method's ability to produce results that are directly proportional to the concentration of the analyte within a given range.
Range: The interval over which the method consistently demonstrates acceptable accuracy, precision, and linearity.
How to test: Prepare and analyze a series of at least five standard solutions across the concentration range to be validated. Plot the analyte response versus concentration and perform a statistical analysis, such as linear regression.
Acceptance criteria: The coefficient of determination (r²) for the regression line must be high, typically ≥ 0.995 for most chromatographic methods. The range is confirmed to cover the required operating concentrations. For example, a range of 80–120% of the target concentration is standard for assays.
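The regression step above can be sketched with a plain least-squares fit. The five-level curve below uses hypothetical concentration and peak-area data; the r² criterion is checked against the ≥ 0.995 guideline mentioned above.

```python
from statistics import mean

def r_squared(x, y) -> float:
    """Coefficient of determination for a simple least-squares fit of y on x."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical five-level calibration: concentration (% of target) vs. peak area
conc = [50, 75, 100, 125, 150]
area = [1010, 1522, 2015, 2490, 3020]
r2 = r_squared(conc, area)
print(round(r2, 4))  # 0.9997 -> passes the >= 0.995 criterion
```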
5. Detection Limit (LOD) and Quantitation Limit (LOQ)
Definition: LOD is the lowest detectable analyte amount, while LOQ is the lowest quantifiable amount with acceptable accuracy and precision.
How to test: These can be determined using signal-to-noise ratios or calculations based on the standard deviation of the response and calibration curve slope.
Acceptance criteria: A signal-to-noise ratio of at least 3:1 is typical for LOD, and at least 10:1 for LOQ. These are particularly important for impurity and limit tests.
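When the standard-deviation approach is used, ICH Q2 gives LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the response (e.g., of the blank) and S is the calibration-curve slope. A sketch with hypothetical σ and slope values:

```python
def lod_loq(sigma: float, slope: float) -> tuple:
    """ICH Q2 estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical values: sigma = std. dev. of blank response, S = calibration slope
lod, loq = lod_loq(sigma=0.12, slope=4.8)
print(round(lod, 4), round(loq, 3))  # 0.0825 0.25 (in concentration units)
```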
6. Robustness
Definition: The method's capacity to remain unaffected by small, deliberate variations in method parameters, indicating its reliability during normal use.
How to test: Introduce small changes to parameters (e.g., pH, flow rate) and check the impact on results.
Acceptance criteria: Results should not be significantly affected by these variations; an RSD of ≤ 3% across the varied conditions is often expected.
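One way to summarize a robustness study is to pool the assay results obtained under each deliberate variation and compute the %RSD across conditions. The sketch below uses hypothetical results for a nominal run plus small pH and flow-rate changes:

```python
from statistics import mean, stdev

# Hypothetical assay results (% of label claim) under deliberate small changes:
# nominal, pH -0.2, pH +0.2, flow -10%, flow +10%
results = [100.1, 99.7, 100.4, 99.5, 100.6]

rsd = 100.0 * stdev(results) / mean(results)
print(round(rsd, 2))

# Passes the common <= 3% robustness criterion cited above
assert rsd <= 3.0
```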