A positive predictive value (PPV) of 50% would mean that 50% of people with a positive test have the disease.
A PPV of 90% would mean that 90% of people with positive tests have the disease, so little money is wasted on following up false positives.
A PPV of 20% would mean that a large proportion of money is wasted on false positives, since only 20% of people with positive tests have the disease.
Hi James, thanks for your answer. I'm about to publish a paper, and one of its points is that for a screening test, a PPV lower than 50% should be a warning against using the test, but one of the editors wants to see the calculations to support this. I'm not sure how to prove it.
Probably my sheer ignorance in the matter. I thought a PPV lower than 50% might not be good for a test. I found papers saying a PPV of 20% is poor but one of 90% is good. I think my question is really about how to define an acceptance criterion based on PPV; otherwise it's just a number in my paper with no actual meaning.
I guess part of the difficulty is that, unlike Sensitivity (Sn) and Specificity (Sp), PPV is not a "stable" characteristic of a diagnostic test. It changes depending on the prevalence of disease in a given population. Given a test with Sn=80%, Sp=95%, PPV at different prevalence levels would be as follows:
Prevalence=1%, PPV=14%
Prevalence=10%, PPV=64%
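The figures above follow from Bayes' rule: PPV = Sn·p / (Sn·p + (1−Sp)·(1−p)), where p is the prevalence. A minimal sketch in Python, using the Sn=80%, Sp=95% test from the example (the function name `ppv` is just my own shorthand):

```python
# PPV as a function of sensitivity, specificity, and prevalence (Bayes' rule).
def ppv(sn: float, sp: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test)."""
    true_pos = sn * prevalence                # diseased people who test positive
    false_pos = (1 - sp) * (1 - prevalence)   # healthy people who test positive
    return true_pos / (true_pos + false_pos)

# Reproduce the numbers for a test with Sn=80%, Sp=95%:
for p in (0.01, 0.10):
    print(f"Prevalence={p:.0%}: PPV={ppv(0.80, 0.95, p):.0%}")
# → Prevalence=1%: PPV=14%
# → Prevalence=10%: PPV=64%
```

Plugging in your own assumed prevalence this way is exactly the kind of calculation an editor could check, and it makes clear why a single PPV number without a stated prevalence has no fixed meaning.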
Perhaps a better approach would be to specify the threshold Sn and Sp levels of an acceptable diagnostic test, given one or several assumed levels of disease prevalence, similar to what the CDC did for COVID antibody tests (see Table 1): https://www.cdc.gov/coronavirus/2019-ncov/lab/resources/antibody-tests-guidelines.html#table1