Recently I came across an article titled "A retrospective randomized study of asthma control in the US: results of the CHARIOT study." I couldn't get the full article, but I have some doubts that an RCT can be retrospective?
I can see that there is still some confusion in the above discussion over the proper meaning and application of the term "retrospective RCT", so, given that I teach Evidence-Based Medicine (EBM), let me try to clarify, first, the proper interpretation of the term as used in dozens of studies in the peer-reviewed literature, and second, what the cited study (by Philip Marcus at St. Francis Hospital and colleagues, in Current Medical Research and Opinion, 2008 [1]) intends by that designation.
It must be understood that the term "retrospective RCT" (retrospective randomized controlled trial) is not a misnomer, but rather a widely established shorthand for a "retrospective analysis" of one or more RCTs. A recent example of such a "retrospective RCT" is the "Lilly Humalog" trial [2], which retrospectively analyzed the efficacy and safety of insulin lispro (Humalog) in geriatric diabetic patients, reviewing and appraising the findings of seven RCTs of Humalog in non-age-stratified diabetic populations, to determine from re-analysis of the individual patient data (including patient age) whether benefit accrued comparably to patients over 65 as it did to younger patients.
Not sure of the context of this - but I can't see how an RCT could ever be retrospective or 'ex-post facto'. An RCT might have been conducted 'in the past' and reported later on - but it is still an RCT - and that does not make it retrospective. Retrospective designs are most appropriate for observational and/or case-study approaches.
I think you are misled by the title. Control here means control of the asthma. I checked the abstract of this study, as I was intrigued by your question. The study took a random sample of asthma patients and assessed, for example, the prevalence of uncontrolled and controlled asthma. They also looked at the history of the disease in each patient. Thus it was never a randomized controlled trial. Hope this helps!
This study, "A retrospective randomized study of asthma control in the US: results of the CHARIOT study", is not a trial; it is an observational study. I do agree with Gabrielle that you were misled by the title.
This is a clear example of a situation where the editors and reviewers have failed the readership. Such a misleading title should never have got to publication.
I would add one thing more: people who aren't research-oriented tend to confuse a "randomized" study with a study in which participants were "randomly selected".
@Constantine - thanks for posting the paper. This seems to be a purely retrospective observational study, as patients were never randomised to any intervention. Therefore, to me, this does not seem to be a re-review of previous RCT data, either via meta-analysis (multiple RCTs' data or results combined) or via re-analysis of raw RCT data including extra variables or outcomes (i.e. testing new hypotheses). I don't think it should be described as a retrospective RCT, and I don't think the word "randomised" should be included in the title at all. To me this is an observational study intended to elicit the prevalence of controlled asthma management in a large population, where the authors have described the selection technique used to obtain a random sample.
RCT means a trial (a test of an intervention to look for some effect) in which patients are prospectively randomised (allocated in a systematically unordered, random manner) to either the intervention or the control (placebo or standard care).
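The defining step can be sketched in a few lines of Python (a purely hypothetical illustration, not drawn from any of the papers discussed): allocation happens before any treatment is given or outcome exists, which is exactly what cannot be done retrospectively.

```python
import random

def randomise(patients, seed=None):
    """Prospectively allocate patients to intervention or control arms.
    Allocation happens BEFORE treatment or outcomes exist -- the defining
    step of an RCT."""
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)           # systematically unordered allocation
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (intervention arm, control arm)

intervention, control = randomise([f"patient-{i}" for i in range(10)], seed=42)
```

Only after this allocation would the intervention be applied and outcomes collected.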
There still appears to be some residual - and understandable - confusion here.
All study design and protocols, including randomization, were reviewed and fully approved by a central institutional review board (IRB), specifically the eminent New England IRB (NEIRB), which has handled some of the most respected, indeed seminal, clinical trials conducted; which is FDA-audited and AAHRPP-accredited (the "gold standard" for IRBs), and would not misjudge acceptable randomization and study design; and which is also registered with the OHRP (Office for Human Research Protections).
[DISCLAIMER: although I have reviewed, and taught EBM methodologies concerning, thousands of studies for evidence-based medicine (EBM) authorities over decades of EBM research and training, and although I have participated in innumerable institutional review boards (IRBs), I was NOT in any way involved with either the central NEIRB review or the site IRB that approved this retrospective analysis of the CHARIOT protocol and study design.]
A novel randomization approach was deployed: CHARIOT used a new and innovative retrospective, randomized, multidimensional, online data-entry technique that fully met the strictures of randomization under authoritative Cochrane, Briggs, and DARE standards, among others, as well as the trusted definitions of randomization in the seminal EBM texts (by David Sackett et al., Dan Mayer's text, and dozens of others). The study is categorized as an RCT by PUBMED, UNBOUND MED, and other medical search databases, its PUBMED MeSH publication type being "Multicenter Study, Randomized Controlled Trial, Research Support, Non-U.S. Gov't". The approach is fully described in the "Site Selection and Patient Chart Randomization" section of the paper (both site selection and patient chart selection were randomized). The study discriminated cohorts of asthma-controlled versus asthma-uncontrolled patients as to treatment patterns (as well as resource utilization, pulmonary function, medications, and patient demographics), in accordance with the criteria of the NAEPP (National Asthma Education and Prevention Program) Expert Panel Report (EPR-3) Guidelines on the Diagnosis and Management of Asthma, for patients with moderate to severe persistent asthma, with asthma control as the primary outcome.
The confusion likely - and understandably - stems from:
(1) the novel randomization approach (never undertaken before this study),
(2) the somewhat condensed and less than sterling clarity of the paper's description of a randomization protocol not heretofore seen, and
(3) the fact that we must distinguish between the article ("the container"), which is a retrospective analysis and hence a species of single-study meta-analysis, and the randomized CHARIOT study being retrospectively reviewed. We are used to a retrospective review or meta-analysis of independent, pre-existing, published studies; but in this case the retrospective analysis and the randomized study being analyzed are co-published. The CHARIOT trial results were not published independently prior to this article (which is why the CHARIOT trial does not appear in the article's references); rather, they are being published concurrently with the retrospective analysis OF the CHARIOT randomized study, a distinctly uncommon, though not unheard-of, event in publication.
I do, however, agree with everyone above, especially William Thompson's shrewd point concerning the failure of both editors and reviewers toward the readership in the highly opaque and misleading title: they should have insisted on "a retrospective analysis of a randomized . . ." rather than a "retrospective RCT". The designation "retrospective RCT" would have been better reserved for studies like the Lilly Humalog trial I cited above.
The CONSORT Statement is the leading epidemiologically based consensus guidance on reporting RCT methodology in the peer-reviewed literature. According to the definition in its website glossary, a randomised controlled trial is:
"An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals....."
The CHARIOT study is not experimental or a trial in any way: no specific therapeutic or other study intervention is, or has been, applied to the patients in a prospective experimental manner. Standard practice by the routine care providers was not altered in any way. Furthermore, individual patient consent is not described; rather, the IRB review refers to the centres that agreed to participate being permitted subsequently to collect data from patient charts for the study. Thus this is a purely records-based study. Also, the paper refers to the work only as "the study" and nowhere, as far as I can see, suggests it is a "trial" or an RCT. I also cannot see that this is a re-analysis of any sort, or a meta-analysis of more than one study, or that the CHARIOT study itself was performed earlier than, or separately from, this retrospective review. The paper clearly states this in the introduction:
"This report describes the findings from the Characterization of Allergic Asthma: A Chart Review In Moderate-to-Severe Disease to Assess Asthma Control, Allergies, Patient Outcomes and Treatment (CHARIOT) study. The primary objectives of the study were to determine the proportion of patients who have uncontrolled asthma among community asthma specialty clinics in the US and to describe the prevalence and characteristics of atopy associated with uncontrolled versus controlled asthma."
I think some of the confusion has come from the use of the term "controlled" as part of the outcome of interest, i.e. whether asthma in the US population is or isn't medically controlled according to the latest clinical definitions and guidelines (NAEPP and EPR-3). This is distinct from the term "controlled" as used in RCTs, which indicates that the trial includes subjects who are not exposed to the experimental intervention, for comparison, to determine the size of the effect. Although the algorithms used by PUBMED and other search and classification engines are undoubtedly powerful, they are not infallible, and the inclusion of the terms "control" and "randomised" in the title probably caused the study's inclusion in the RCT group (NB: the classification algorithm is more likely to be inclusive than exclusive, since most appropriately performed literature reviews need to source all potential papers and then exclude those not fitting specific review criteria). Bear in mind also that IRB application forms are primarily structured for research on patients rather than for medical-records reviews, so the terminology can become a little muddled, especially where the methodology as described in the application is reused in the write-up.
Also, use of the word "randomised" in the title is somewhat misleading.
Fortunately there is an increasing appreciation of the knowledge to be gained from observational studies and records reviews, and guidelines for the publication of such studies are now being developed: RECORD, the REporting of studies Conducted using Observational Routinely collected Data (see http://www.record-statement.org). No doubt this will assist editors and researchers in how best to structure reports of these studies, to the benefit of readers.
This is not the same thing, but if you want a little humour (and a good example to use when teaching type II error), here is a randomised trial of a retrospective intervention. Enjoy! Leibovici L. Beyond science? Effects of remote, retroactive intercessory prayer on outcomes in patients with bloodstream infection: randomised controlled trial. BMJ. 2001;323:1450-1451.
Thank you for the clarification of my doubt; I hope many researchers will relish your answer. I have one more doubt. Does the word "Controlled" in RCT mean:
A. a control group, or
B. a controlled environment in which the study is carried out, or
C. that because in an RCT we randomly allocate the subjects to two or more groups, we thereby control the known and unknown confounders?
Umesha - control is all about the environment. The groups may be randomized, but the environment stays the same, i.e. light, temperature, setting, etc., bar the intervention itself.
Umesh, I agree with Dean in that to control means to account for all factors, other than the intervention itself, which may influence the size of its effect. So in an RCT, patient selection and exclusion criteria may try to limit variables known to influence the experiment (e.g. by limiting age, co-morbidities, or stage of cancer), and unknown potential variables are then accounted for by using a control group, which allows subtraction of any other effects (such as placebo) from the actual intervention effect. RCT reports therefore normally show a table of demographics and other known confounders or biasing variables, to demonstrate that the control and intervention groups do not differ on these.
BTW, I think the closest a retrospective observational study can come to an RCT is through the use of "propensity scores". This statistical technique scores each patient on the variables which predict whether the participant will or will not receive the "intervention". Each intervention participant is then matched to the closest "control" participant according to their mix of relevant factors. Matching attempts to mimic randomization by creating a sample of participants who received the intervention that is comparable, on all the identified variables that affect allocation of the intervention in the real world, to a sample of participants who did not receive it. Some studies have suggested that propensity-score-matched analyses approximate RCT results quite well, but the technique can still never completely equalise all potential confounders the way randomised allocation of the intervention can. It is a good tool for situations where an RCT might be considered unethical and, very importantly, for measuring effects of interventions in the real world, where there are many variables influencing allocation and potentially affecting outcomes that would be excluded from an RCT.
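The matching step can be sketched in toy Python. Everything here is illustrative: the propensity model is an assumed fixed logistic function (in real analyses the coefficients would be estimated by logistic regression on the observed covariates), and the patient data are simulated.

```python
import math
import random

def propensity(age, severity):
    """Assumed toy model: older/sicker patients are more likely to be
    given the intervention. The coefficients are made up for illustration;
    real studies fit them with logistic regression."""
    z = 0.04 * (age - 50) + 1.2 * severity - 0.5
    return 1.0 / (1.0 + math.exp(-z))

def match_pairs(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on propensity score,
    without replacement (each control is used at most once)."""
    pool = list(controls)
    pairs = []
    for t in treated:
        best = min(pool, key=lambda c: abs(c["ps"] - t["ps"]))
        pool.remove(best)
        pairs.append((t, best))
    return pairs

rng = random.Random(0)
patients = []
for _ in range(300):
    age, sev = rng.randint(20, 80), rng.random()
    ps = propensity(age, sev)
    # allocation depends on covariates, i.e. it is NOT random
    patients.append({"age": age, "sev": sev, "ps": ps,
                     "treated": rng.random() < 0.5 * ps})

treated = [p for p in patients if p["treated"]]
controls = [p for p in patients if not p["treated"]]
pairs = match_pairs(treated, controls)   # one matched control per treated patient
```

The matched pairs are comparable on the scored variables, but any confounder not in the model remains unbalanced, which is why this only approximates randomisation.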
Random sampling of medical records (as in CHARIOT) is not a randomized trial. Consider two treatments recorded in a retrospective database. If you take a random sample of each treatment group, the data are still confounded, just as in the original database, because treatment allocation was not random. Such sampling serves only to achieve a representative sample.
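This point can be demonstrated with a small simulation (entirely made-up numbers, purely to illustrate): sicker patients are preferentially given drug B, both drugs actually have identical effects, and a representative random sample inherits exactly the same confounded comparison as the full database.

```python
import random

rng = random.Random(1)
records = []
for _ in range(10_000):
    severity = rng.random()                          # 0 = mild, 1 = severe
    drug = "B" if rng.random() < severity else "A"   # non-random allocation
    # both drugs are equally effective; outcome is driven by severity alone
    outcome = 1.0 - severity + rng.gauss(0, 0.1)
    records.append((drug, outcome))

def mean_outcome(rows, drug):
    vals = [o for d, o in rows if d == drug]
    return sum(vals) / len(vals)

full_gap = mean_outcome(records, "A") - mean_outcome(records, "B")

sample = rng.sample(records, 1000)                   # representative random sample
sample_gap = mean_outcome(sample, "A") - mean_outcome(sample, "B")
# Both gaps are large and positive: drug A only *looks* better because its
# patients were milder. Random sampling did not remove the confounding.
```

Only randomising the allocation itself (the `drug` assignment) would break the link between severity and treatment.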
The other example, systematic reviews of published RCTs, is also NOT a retrospective RCT. It is a summary of the individual studies, and one often needs to apply meta-regression to account for variables present in some studies but not others that could also influence outcomes.
In an RCT, one first randomly allocates patients to treatment arm A or treatment arm B, then begins treatment. By definition, randomization must precede treatment, so a retrospective RCT is impossible.
Thank you for the nice discussion. If either randomization or a control group is missing from a study, it may be a quasi-experimental design. Can a retrospective review of RCTs be considered quasi-experimental?