I assume your question is about a health-related intervention. The cost-effectiveness ratio is (cost of intervention - change in medical costs - other monetized benefits) / change in non-monetized outcome. Typically the non-monetized outcome is expressed as deaths averted, disease cases prevented or cured, DALYs averted, or QALYs gained.

Powering a study with a prevalence-based method requires data on the change in medical costs and in DALYs/QALYs over a time period. Often, however: (a) unless you have conducted a randomized controlled trial, those data are not available, or the change in medical costs/DALYs between intervention and comparator is quite uncertain; (b) even with an RCT, because the standard error of mean monthly medical costs typically exceeds the mean, the trial is likely to be powered to detect changes in endpoints with much lower variance (e.g., DALYs, depression days, mean survival time) and will be unable to accurately assess the change in medical costs using the prevalence-based data on the study cohort; and (c) most importantly, medical costs from a health condition often extend over a person's lifetime. If the intervention prevents cases or reduces their lifetime severity, the medical costs and DALYs should be assessed over the person's remaining lifespan; that is, they should be incidence-based, not prevalence-based. But no one could, or would want to, track a cohort of patients until they die before conducting a CEA.

The alternative is to model the costs and outcomes from the prevalence-based and incidence-based data that are available. For example, to build a model from a prevalence-based survey, you might estimate the average treatment costs and DALY levels for Stage 2 breast cancer by year after first detection. Combining that information with a life table tailored to breast cancer would let you model the mean and standard error of lifetime DALY gains and medical cost changes absent the intervention you are evaluating.
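To make the two pieces concrete, here is a minimal sketch in Python of (1) the cost-effectiveness ratio and (2) an incidence-based lifetime model that combines per-year, prevalence-based cost and DALY estimates with annual survival probabilities from a condition-specific life table. Every number is an illustrative placeholder, not real breast-cancer data, and the 5-year horizon and 3% discount rate are my assumptions for the example:

```python
# Illustrative sketch only; all figures are invented placeholders.

def cea_ratio(intervention_cost, medical_cost_change,
              other_monetized_benefits, outcome_change):
    """Cost per unit of non-monetized outcome (e.g., cost per DALY averted).
    The medical-cost change and other monetized benefits are savings,
    so they are subtracted from the intervention cost."""
    return (intervention_cost - medical_cost_change
            - other_monetized_benefits) / outcome_change

def lifetime_totals(annual_costs, annual_dalys, annual_survival, rate=0.03):
    """Incidence-based expected lifetime costs and DALYs: weight each
    year's prevalence-based estimates by the probability of still being
    alive (from the life table) and discount back to year 0."""
    alive = 1.0          # probability of being alive at start of year 0
    cost = daly = 0.0
    for year, (c, d, s) in enumerate(
            zip(annual_costs, annual_dalys, annual_survival)):
        discount = 1.0 / (1.0 + rate) ** year
        cost += alive * c * discount
        daly += alive * d * discount
        alive *= s       # survive into the next year with probability s
    return cost, daly

# Hypothetical per-year estimates by year since first detection
# (a real model would extend to the end of the life table).
costs = [20000, 8000, 5000, 4000, 4000]   # treatment cost per year
dalys = [0.35, 0.20, 0.15, 0.12, 0.12]    # DALY burden per year
surv  = [0.95, 0.96, 0.97, 0.97, 0.98]    # annual survival probability

lifetime_cost, lifetime_daly = lifetime_totals(costs, dalys, surv)
print(round(lifetime_cost), round(lifetime_daly, 3))
```

Running the same model under the with-intervention assumptions (lower annual DALY weights, different costs and survival) and differencing the two runs gives the denominator and the medical-cost offset for the ratio above.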
The obvious downsides of modelling include: (a) it takes skill to build a good model of the changes resulting from the intervention; (b) the data needed for the ideal model are often extensive; (c) a lack of high-quality data often forces the modeler to turn to less objective sources, either assumptions or flawed data, but at least those assumptions are explicit and the sensitivity of the results to alternative choices is readily calculable; and (d) interventionists like to see cost-savings figures specific to their own cohort, and they have to be educated about why they lack the statistical power to use that approach. Hope this helps.
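On the point that the sensitivity of the results to alternative assumptions is readily calculable: a one-way sensitivity analysis simply recomputes the ratio while swapping one uncertain parameter at a time for its low and high plausible values. A sketch, with invented placeholder figures and parameter ranges:

```python
# Hypothetical one-way sensitivity analysis; all numbers are placeholders.

def cost_per_daly(intervention_cost, cost_offset, dalys_averted):
    """Net cost per DALY averted."""
    return (intervention_cost - cost_offset) / dalys_averted

# Base-case assumptions for the modeled intervention.
base = dict(intervention_cost=50000.0,  # program cost
            cost_offset=10000.0,        # modeled medical-cost savings
            dalys_averted=2.0)          # modeled DALY gain

# Low/high plausible alternatives for each uncertain parameter.
ranges = {
    "intervention_cost": (40000.0, 60000.0),
    "cost_offset":       (5000.0, 20000.0),
    "dalys_averted":     (1.5, 3.0),
}

print("base case:", cost_per_daly(**base))
for name, (lo, hi) in ranges.items():
    # Vary one parameter at a time, holding the others at base case.
    results = [cost_per_daly(**dict(base, **{name: v})) for v in (lo, hi)]
    print(f"{name}: {min(results):.0f} to {max(results):.0f} per DALY")
```

The parameter whose range moves the ratio the most is the one where better data, or a more defensible assumption, matters most.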