Can't it be the other way? For an intervention provided in an educational study, it seems better to assign the low scorers to the treatment group so that they benefit more. But why is it done the other way in this design?
No; either way can work. In fact, within educational research, the first major applications of RD were in Title I evaluation efforts, wherein the lowest-scoring students were the ones selected for intervention, and their "growth" was compared to that of their (higher-scoring, but untreated) peers.
Thistlethwaite, D., & Campbell, D. (1960). Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51, 309-317. doi:10.1037/h0044319
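Since the question is about how that cutoff-based assignment gets analyzed, here is a minimal sketch of a sharp regression-discontinuity estimate in Python. Everything here (the pretest distribution, the cutoff of 40, the 5-point effect) is a simulated assumption for illustration, not a figure from the Title I work or from Thistlethwaite & Campbell.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
cutoff = 40.0  # hypothetical eligibility threshold

# Pretest scores determine assignment: students below the cutoff receive
# the compensatory program (treated = 1), mirroring the Title I rule.
pretest = rng.normal(50, 10, n)
treated = (pretest < cutoff).astype(float)

# Posttest depends smoothly on pretest, plus a hypothetical 5-point
# treatment effect at the cutoff, plus noise.
posttest = 20 + 0.8 * pretest + 5.0 * treated + rng.normal(0, 5, n)

# Center the running variable at the cutoff and allow separate slopes on
# each side; the coefficient on `treated` is then the estimated jump
# (treatment effect) at the cutoff.
centered = pretest - cutoff
X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
fit = sm.OLS(posttest, X).fit()
print(fit.params[1])  # estimated effect at the cutoff, ~5
```

Centering the pretest at the cutoff makes the coefficient on the treatment indicator the estimated discontinuity at the threshold, which is the quantity the design identifies regardless of whether the low scorers or the high scorers are the ones treated.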
Dr. Morse, I know I have done this before, but I again cite Box, Fisher's student and a distinguished professor of statistics at the University of Wisconsin-Madison:
Given what Box says and Fisher's famous dictum, "there is no experimentation without randomization": would you really want to assign humans in this manner, knowing that you are developing a treatment for humans? That is, would you take a drug whose clinical trial was performed in this manner? I look forward to your response. Best wishes, D. Booth
Nobody calls these designs experimental, so I don't think we're in disagreement.
The famous chapter (later a stand-alone text) by Donald Campbell and Julian Stanley, Experimental and quasi-experimental designs for research (1966), established two classes of designs that didn't meet the gold standard (randomized assignment to treatment) of "true experimental" designs: Quasi-experimental and Pre-experimental.
Campbell & Stanley's contribution to the conversation was to compare designs by their ability to mitigate nuisance/extraneous influences, that is, alternate explanations (other than treatment) for the observed results; these later came to be called rival hypotheses. They also discussed each design's limitations with respect to generalizability. With this presentation, one could understand the limitations associated with any specific design choice.
In situ, there are often restrictions on one's ability to invoke randomization. The first major application of the R-D design that I recall was as one of several options for handling quantitative evaluations of the impact of (then) USOE Title I (compensatory education) projects. The fact was, USOE would not permit schools to fail to serve students whose scores fell below a given threshold; thus, randomization was never going to be possible. Alternate schemes had to be identified, and the regression-discontinuity model was one.
As a final note, the Tobacco Research Institute (a "research" organ funded by tobacco companies) often complained that there was no "scientific evidence" that smoking caused lung cancer, because nobody had ever done a randomized trial with humans. Decades later, I can't think of a scientist who would dispute the causal link, despite the absence of "experimentation." In point of fact, in 1957, the U.S. Surgeon General declared that the U.S. Public Health Service's position was that the link had been adequately established; the more forceful conclusions of the subsequent 1964 report Smoking and Health were based on a systematic review of some 7,000 published studies (none of which were randomized with humans).
Somewhat similar conversations crop up when people discuss SEMs as "causal models." In response to this mistaken view, one wise person (Norman Cliff) advised: if you want to know whether X causes Y, you have to wiggle X [and control everything else].
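A minimal simulation (my own illustration, not Cliff's example) makes the point concrete: when a confounder Z drives both X and Y, fitting observational data shows a relationship even though X has no causal effect, while "wiggling" X ourselves recovers the true (zero) effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Observational world: Z confounds X and Y; X itself has zero causal effect.
z = rng.normal(size=n)
x_obs = z + rng.normal(size=n)
y_obs = 2 * z + rng.normal(size=n)
slope_obs = np.polyfit(x_obs, y_obs, 1)[0]  # ~1.0, a spurious "effect"

# Interventional world: we wiggle X (randomize it), severing the Z -> X
# link; the regression now recovers the true causal effect of X, namely zero.
x_do = rng.normal(size=n)
y_do = 2 * z + rng.normal(size=n)           # Y still depends only on Z
slope_do = np.polyfit(x_do, y_do, 1)[0]     # ~0.0

print(slope_obs, slope_do)
```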