Hello everyone,
I ran a series of experiments comparing the efficacy of an anticancer agent using different cell viability assays, with five replicate wells for each drug dose in every assay. Although all experiments were performed under the same conditions, the standard deviations of my MTT assay were quite low, while those of my SRB and DNA fluorescence assays were relatively high.
What could be causing this? For example, it occurred to me that the SRB assay involves many washing steps. Or could it be related to the sensitivity of the plate reader I am using?
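In case it is relevant: since the three assays report on very different scales (absorbance versus fluorescence counts), I understand that raw standard deviations are not directly comparable, and the coefficient of variation (SD divided by the mean) may be a fairer measure of well-to-well scatter. Below is a minimal Python sketch of that comparison; the numbers in `readings` are made up for illustration, not my actual data.

```python
import numpy as np

# Hypothetical per-well readings (5 replicate wells per dose) for each assay.
# These values are illustrative only; substitute your own plate-reader data.
readings = {
    "MTT": [[0.82, 0.80, 0.81, 0.83, 0.79],    # dose 1
            [0.55, 0.54, 0.56, 0.53, 0.55]],   # dose 2
    "SRB": [[1.45, 1.62, 1.38, 1.70, 1.51],
            [0.95, 1.10, 0.88, 1.02, 0.91]],
    "DNA fluorescence": [[9200, 10100, 8800, 10400, 9500],
                         [6100, 7000, 5800, 6600, 6300]],
}

for assay, doses in readings.items():
    for i, wells in enumerate(doses, start=1):
        wells = np.asarray(wells, dtype=float)
        sd = wells.std(ddof=1)            # sample standard deviation across the 5 wells
        cv = 100 * sd / wells.mean()      # coefficient of variation, in percent
        print(f"{assay}, dose {i}: SD = {sd:.3g}, CV = {cv:.1f}%")
```

If the CVs of the SRB and DNA fluorescence assays are still clearly higher than those of MTT, the difference is real variability rather than just a scale effect.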
I have attached the upper part of the graph so you can see the results.
Thank you so much.