Comparisons of interventions can sometimes be made using a non-randomised design. Though imperfect, such a design allows the inevitable group differences in prognostic characteristics to be addressed, at least in part, by multivariable regression. However, studies comparing assessment methods are far more prone to bias: such studies involve treatments being given in response to assessment findings, and it is these treatments that mediate the outcomes.
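To make the first point concrete, here is a minimal, purely illustrative sketch using synthetic data and hypothetical variable names (`age`, `severity`, `intervention_b`). It is not a recipe for any particular study; it simply shows how, when the prognostic characteristics driving treatment choice are measured, multivariable regression can largely recover the intervention effect in a non-randomised comparison.

```python
# Illustrative only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Prognostic characteristics that also influence which intervention is chosen.
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)

# Sicker, older patients are more likely to receive intervention B (confounding by indication).
p_b = 1 / (1 + np.exp(-(0.03 * (age - 65) + 0.8 * severity)))
intervention_b = rng.binomial(1, p_b)

# Outcome depends on prognosis plus a true intervention effect of -0.5.
outcome = 2.0 + 0.05 * age + 1.0 * severity - 0.5 * intervention_b + rng.normal(0, 1, n)

df = pd.DataFrame(dict(age=age, severity=severity,
                       intervention_b=intervention_b, outcome=outcome))

# The crude comparison is confounded; adjusting for the measured baseline
# characteristics brings the estimate close to the true effect of -0.5.
crude = smf.ols("outcome ~ intervention_b", data=df).fit()
adjusted = smf.ols("outcome ~ intervention_b + age + severity", data=df).fit()
print("crude estimate:   ", round(crude.params["intervention_b"], 2))
print("adjusted estimate:", round(adjusted.params["intervention_b"], 2))
```

The caveat, of course, is that this only works to the extent that the prognostic characteristics are actually measured and included in the model.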
In an RCT approach to 'test and treat' studies, it is easy to ensure that the same treatments are given for the same assessment indications in each group. For example, in both groups, a positive test will lead to treatment X and a negative test will lead to no treatment.
However, in a non-randomised design this is far less likely to happen, because the observational approach leaves treatment decisions to routine clinical practice rather than to a protocol.
Furthermore, it is difficult to see how the complexity of a 'test and treat' study would be amenable to multivariable adjustment: the treatments lie on the causal pathway between assessment and outcome, so adjusting for them removes part of the very effect under study, while leaving them out of the model leaves the comparison confounded by whatever drove the choice of assessment (see the sketch below). The non-randomised approach is thus likely to create far greater bias in a 'test and treat' study than you would observe in an interventional design.
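The following hedged simulation (again synthetic data and hypothetical names such as `new_test` and `treated`) illustrates the problem. The entire benefit of the new assessment strategy flows through the treatments it triggers: adjusting only for the prognostic characteristic recovers that benefit, but adding the downstream treatment to the model erases it, and in practice the prognostic drivers of who gets which assessment are rarely fully measured anyway.

```python
# Illustrative only: why adjustment struggles in a non-randomised 'test and treat' comparison.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20000

severity = rng.normal(0, 1, n)

# Non-randomised: sicker patients are more likely to receive the new test.
new_test = rng.binomial(1, 1 / (1 + np.exp(-1.0 * severity)))

# The new test detects disease more often, so it triggers treatment more often.
treated = rng.binomial(1, np.where(new_test == 1, 0.7, 0.3))

# Outcome (higher = worse) depends on severity and treatment; the test itself
# has no direct effect - its whole benefit flows through the treatment given.
outcome = severity - 1.0 * treated + rng.normal(0, 1, n)

df = pd.DataFrame(dict(severity=severity, new_test=new_test,
                       treated=treated, outcome=outcome))

# True benefit of the new strategy = (0.7 - 0.3) * (-1.0) = -0.4
crude = smf.ols("outcome ~ new_test", data=df).fit()                          # confounded by severity
adj_sev = smf.ols("outcome ~ new_test + severity", data=df).fit()             # ~ -0.4, but only because severity is fully measured
adj_all = smf.ols("outcome ~ new_test + severity + treated", data=df).fit()   # adjusting for the mediator erases the benefit
for name, m in [("crude", crude),
                ("adjust severity", adj_sev),
                ("adjust severity + treatment", adj_all)]:
    print(f"{name:28s} estimate for new_test: {m.params['new_test']:+.2f}")
```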
So should 'test and treat' studies always, without exception, be RCTs?