In my research, I propose a software tool that helps researchers conduct randomized controlled trials to evaluate interventions in educational contexts, similar in spirit to ASSISTments / E-TRIALS, Terracotta, and UpGrade.

I need to run an experiment in which researchers use both this software and a manual alternative, where the activities the software performs (sending emails, randomization, and so on) are carried out by hand. I would then use the Technology Acceptance Model (TAM) to evaluate the software in comparison to the manual alternative.

The manual condition is becoming rather complex, because the researcher has to send e-mails, create a pre-test, a post-test, and the interventions, randomize participants into research groups, and perform other activities. If these tasks are too long and laborious, I am afraid this could harm the experiment.
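To make the contrast concrete, the randomization step that the software automates (and that the researcher in the manual condition would do by hand, e.g., with a spreadsheet) can be as simple as this minimal sketch in Python; the participant IDs, seed, and group names are hypothetical:

    import random

    # Hypothetical participant IDs; in practice these come from the study roster.
    participants = [f"P{i:02d}" for i in range(1, 21)]

    rng = random.Random(42)  # fixed seed so the allocation is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)

    # Split the shuffled list evenly into control and treatment groups.
    half = len(shuffled) // 2
    groups = {"control": shuffled[:half], "treatment": shuffled[half:]}

    for name, members in groups.items():
        print(name, members)

Doing this once is trivial; the burden in the manual condition comes from repeating steps like this alongside e-mails, test creation, and scheduling.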

Is it possible to run this experiment another way, with a different design? Is there a design commonly used for this kind of experiment, i.e., evaluating software against a manual alternative?

(Apologies for any mistakes in the writing; I am using a translator.)
