We are testing a new diagnostic tool and comparing it against the established gold standard for this diagnosis.

Briefly, we examined 25 patients with both the new diagnostic tool (test A) and the gold-standard diagnostic tool (test B). Test A gives a binary result, simply "positive" or "negative" (no numeric value or range). We then performed test B, which also gives a "positive" or "negative" result and which we treat as the true result, since it is the gold standard.

All 18 patients with a positive result on test A also had a positive result on test B.

Of the 7 patients with a negative result on test A, 5 were also negative on test B, but 2 were positive on test B.
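Taking test B as the reference standard, these counts can be arranged as a 2×2 contingency table. A minimal sketch in Python, reconstructing the paired outcomes from the counts above (the "pos"/"neg" labels are my own naming, not from the study):

```python
from collections import Counter

# Paired (test A, test B) outcomes per patient, reconstructed from the text:
# 18 patients A+/B+, 2 patients A-/B+, 5 patients A-/B-
pairs = [("pos", "pos")] * 18 + [("neg", "pos")] * 2 + [("neg", "neg")] * 5

table = Counter(pairs)
# With test B as the reference:
#   table[("pos", "pos")] -> true positives  (18)
#   table[("neg", "pos")] -> false negatives (2)
#   table[("neg", "neg")] -> true negatives  (5)
#   table[("pos", "neg")] -> false positives (0)
print(table)
print("total patients:", sum(table.values()))
```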

Overall, 23 of the 25 patients had the same outcome on test A and test B (92% agreement, i.e. accuracy), and 2 differed. Taking test B as the truth (100% sensitivity and specificity), test A has a sensitivity of 18/20 = 90% and a specificity of 5/5 = 100%.
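The standard accuracy metrics for this 2×2 table can be computed directly from the four counts. A short sketch, again assuming test B is the truth:

```python
# Counts with test B (gold standard) as the reference
TP, FP, FN, TN = 18, 0, 2, 5

sensitivity = TP / (TP + FN)                 # 18/20 = 0.90
specificity = TN / (TN + FP)                 # 5/5   = 1.00
ppv = TP / (TP + FP)                         # positive predictive value, 18/18 = 1.00
npv = TN / (TN + FN)                         # negative predictive value, 5/7 ≈ 0.71
accuracy = (TP + TN) / (TP + FP + FN + TN)   # overall agreement, 23/25 = 0.92

print(f"sensitivity = {sensitivity:.2f}")
print(f"specificity = {specificity:.2f}")
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
print(f"accuracy = {accuracy:.2f}")
```

Note that with n = 25 the confidence intervals around these proportions will be wide, so reporting an exact (e.g. Clopper-Pearson) interval alongside each estimate would strengthen the conclusions.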

Can you recommend any other statistics for this data to help draw conclusions? Any ideas for looking at the data from another perspective? Any help or insight is appreciated.

Thank you
