I would like to study rates of ICU mortality in patients initially seen at our freestanding ED compared to our main tertiary care ED. We are trying to determine an appropriate sample size.
That is a very deep question. Taken to the extreme, any life saved is significant, but that is not a scientifically measurable change except in situations where mortality is rare.
A primary issue would be what the baseline mortality rate is and what change would satisfy most of the interested parties. The higher the baseline rate, the bigger the improvement that will be demanded before it is viewed as significant. If the baseline rate is 1 per 1,000,000, then saving one life in a population of 10,000,000 will generally be seen as useful. If the baseline rate is 1 per 100, saving one life is not such an achievement.
What would be clinically significant will depend on a wide range of factors, few of which are scientific. Legislation, government policy, public opinion and institutional policy are but a few of the things that contribute to the conclusion. I am afraid there is no simple answer.
To make it simple: the larger the difference in mortality, the smaller the calculated sample size. Then you need to consider both the alpha and beta values. The smaller the alpha, or the smaller the beta (i.e., the higher the power), the larger the sample size. It all depends on your intention.
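Those relationships can be sketched with the usual normal-approximation formula for comparing two proportions (the mortality rates below are illustrative only, not taken from the question):

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect p1 vs p2
    (two-sided test, normal approximation, no continuity correction)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    zb = z.inv_cdf(power)          # critical value for power (1 - beta)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (za + zb) ** 2 * variance / (p1 - p2) ** 2

# Larger mortality difference -> smaller required sample size
assert n_per_group(0.5, 0.4) < n_per_group(0.5, 0.45)
# Smaller alpha -> larger required sample size
assert n_per_group(0.5, 0.4, alpha=0.01) > n_per_group(0.5, 0.4, alpha=0.05)
# Smaller beta (higher power) -> larger required sample size
assert n_per_group(0.5, 0.4, power=0.90) > n_per_group(0.5, 0.4, power=0.80)
```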
The issue of clinical significance is often a pragmatic one. As stated, any life saved is significant, BUT that doesn't help you determine the sample size for a prospective study. Your starting point has to be your current mortality rate. If your ICU is doing its job of caring for the sickest patients in the hospital, then your mortality rate might be around 50%, i.e. 0.5.
Strictly speaking, the important clinical difference should be the smallest difference that would make you change your practice. This remains a difficult estimate to make. The most commonly used calculable estimate is known as the standardised mean difference, d = (X̄1 − X̄2)/SD, where the difference from your control mean (X̄1 − X̄2, the desired effect size) is varied until the difference divided by the control standard deviation represents an appropriate value for the primary outcome measure of the study. A general (statistician's) view of the values might be:
• ≤ 0.2 - a very small effect, of negligible importance
• 0.5 - of moderate importance
• 0.8 - a large difference of considerable importance
• ≥ 1 - can’t be ignored
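For a binary outcome such as mortality, d can be sketched as the risk difference divided by the control-group SD of a binary outcome, √(p(1−p)) (a minimal illustration, not the only way to standardise proportions):

```python
import math

def cohens_d(p_control, p_treated):
    """Standardised difference between two mortality proportions,
    using the control-group SD of a binary outcome, sqrt(p*(1-p))."""
    sd = math.sqrt(p_control * (1 - p_control))
    return abs(p_control - p_treated) / sd

print(cohens_d(0.5, 0.45))  # ~0.1: tiny on the scale above, yet 5 lives per 100
print(cohens_d(0.5, 0.40))  # ~0.2: still "very small" by the general scale
```

Note how the scale undervalues mortality outcomes: a d of 0.1 here is an absolute reduction of 5 deaths per 100 patients.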
However, as you increase the difference, you may reach the world of unachievable results, and while the number required in a study to detect a true difference of 20% will be less than that needed to detect a true difference of 10%, you are increasing your beta error for smaller real differences. I think you should be aiming at the small end of the above scale, around 0.1 to 0.2. If I remember correctly, the ICU world went into a spasm of ecstasy when it was believed that activated protein C was associated with a reduction in sepsis mortality of about 14%, so a reduction of 10 to 20% might be an achievable goal with a really effective treatment. For a proportion, the SD is √(p(1 − p)), which at your control rate of 0.5 is 0.5. Reducing mortality from 0.5 to 0.45 therefore gives a d of 0.1, which for mortality is important; however, to have an 80% chance of detecting this, you would need about 1,600 patients in each group. If you think your treatment effect on mortality is twice this, reducing from 0.5 to 0.4 gives a d of 0.2 and a sample size of about 408 per group.
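A sketch of that calculation, using the standard normal-approximation formula for two proportions plus the Fleiss continuity correction (the exact formula behind the quoted 1,600 and 408 isn't stated, so the reproduced figures are close rather than identical):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Uncorrected per-group n to detect p1 vs p2 (two-sided test)."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def n_corrected(p1, p2, **kw):
    """Fleiss continuity-corrected per-group n."""
    n = n_per_group(p1, p2, **kw)
    return n / 4 * (1 + math.sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2

print(math.ceil(n_corrected(0.50, 0.45)))  # 1602 -- "about 1,600" per group
print(math.ceil(n_corrected(0.50, 0.40)))  # 405  -- close to the quoted 408
```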
My approach to the problem would be quite simple. First, I would decide what difference I would find significant, either subjectively, based on what makes a difference to policy, or based on the published literature. I would then use this to calculate the sample size, which is easy enough. The sample size that can actually be recruited will either give sufficient power to answer the question or it won't. If it won't, the initial question may need to be either abandoned or rephrased so as to make the primary outcome descriptive rather than hypothesis-testing; hypothesis testing could then be a secondary outcome. Since the expected difference is not known, the study may still have the power to detect a statistically significant difference if the effect is large enough, although it should still be borne in mind that if no statistical difference is found, this may be due to insufficient power for the observed difference.

I also think it is important to keep in mind that sample size is often calculated on the assumption that random error is all we are concerned about. That is only the case in randomised studies. To me this sounds like an observational study, and as such confounding should be the most important consideration. Understandably, most sample size calculations hardly take this into account, as confounders are usually considered for inclusion in a model at the analysis stage, yet confounder adjustment can make a huge dent in the initially estimated power. It is also quite important not to undervalue good descriptive studies where no data exist, as these make it possible to design future hypothesis-testing studies with better-informed sample sizes.
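The confounding point can be illustrated with a small simulation. All the numbers here (severity prevalence, site preferences, mortality rates) are invented for illustration: severity drives both which ED a patient attends and mortality, while the ED site has no true effect, so the crude comparison is spurious.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical confounder: illness severity (1 = high)
severity = rng.binomial(1, 0.5, n)
# Sicker patients are more often seen at the main tertiary-care ED
main_ed = rng.binomial(1, np.where(severity == 1, 0.8, 0.3))
# Mortality depends on severity only; ED site has NO true effect
death = rng.binomial(1, np.where(severity == 1, 0.6, 0.2))

# Crude comparison suggests the main ED "causes" deaths...
crude = death[main_ed == 1].mean() - death[main_ed == 0].mean()
# ...but stratifying on severity removes the apparent effect
adjusted = np.mean([
    death[(main_ed == 1) & (severity == s)].mean()
    - death[(main_ed == 0) & (severity == s)].mean()
    for s in (0, 1)
])
print(f"crude difference:    {crude:+.3f}")     # large and spurious
print(f"adjusted difference: {adjusted:+.3f}")  # near zero
```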
1. Definition: death in the ED, or within 1 hour of hospital admission through the ED; do not count patients dead on arrival, even if resuscitation was attempted.
2. Types: evaluate deaths in relation to triage priority.
3. There is no benchmark for ED mortality that I am aware of.
4. ED mortality rate: this is in any case a key performance indicator (KPI) that should be related to each institution and the community it serves. Review your ED mortality (monthly and yearly) for comparison; it is an important indicator.