Please share this quick survey on the fundamental concepts and practices that drive the effectiveness of medical laboratory quality/risk management with as many laboratory professionals as possible: https://www.surveymonkey.com/r/QC_Baseline1.

The purpose of the survey is to determine whether there is widespread inconsistency in QC theory and practice. Such a gap would expose patients to significant, avoidable risk of missed, delayed, or incorrect diagnoses due to lab errors, and would add the cost of unwarranted repeat or follow-up testing from false positives for patients, insurers, and healthcare systems.

Participant scores are revealed immediately upon completing the survey, and you are welcome to take part in reviewing the results and creating one or more ADLM posters. Abstracts are due February 15th!

Here’s one example of an interesting question on which participants disagree:

Assume an analytical process has a stable error rate, with 5% of results reported erroneously high.

If the analytical process 'fails', how high would the error rate need to be to detect, with REASONABLE certainty, a QC result that is outside of 2 SD (i.e., violates the 1-2s rule) by analyzing only one QC sample in a single QC run? Is the answer just slightly over 5% ... or would 100% of patient samples need to fail before the lab would know?
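One way to reason about this question is with the standard normal model that underlies the 1-2s rule. A minimal Python sketch (function names are my own, and it assumes the failure manifests as a shift of the process mean, expressed in SD units):

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_12s_violation(shift_sd):
    """Probability that a single QC result falls outside +/-2 SD
    of the original mean when the process mean has shifted by
    `shift_sd` standard deviations."""
    return (1.0 - norm_cdf(2.0 - shift_sd)) + norm_cdf(-2.0 - shift_sd)

# With no shift, about 4.6% of in-control QC results still flag (false rejection);
# even a 2 SD shift is caught by one QC sample only about half the time.
for shift in (0, 1, 2, 3, 4):
    print(f"shift = {shift} SD: P(1-2s flag) = {p_12s_violation(shift):.3f}")
```

Under these assumptions, a single QC result gives surprisingly little detection power for small and moderate shifts, which is exactly the kind of intuition gap the survey probes.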

LabVine and AWEsome Numbers Inc. have partnered to create Lablogic Innovators to explore new concepts in risk management and solve the problem of this QC gap. A limited number of spots are available in this interactive group: https://awesome-numbers.com/news-and-events/

Questions? Comments? Want to join me to review and publish results?

[email protected]
