I am planning a research project around the following research question. What is the ideal way to conduct the peer feedback and the subsequent revision?
Main Research Question: Does the use of AI-assisted editing tools lead to a significantly greater long-term improvement in students' editing skills compared to non-AI methods?
CONTEXT: In a business writing course with approximately 80 students, all in-class writing must be completed in Blackboard’s Respondus LockDown Browser to preserve originality. Since the widespread adoption of AI tools, ensuring that students produce authentic work has become increasingly challenging.
Two approaches are currently possible for conducting the peer feedback:
Option 1: Students compose their essays in Blackboard’s Test tool with Peer Review enabled, allowing each draft to be sent to another student for feedback before being returned to the original author. However, Blackboard does not support direct revisions within the same peer review workflow. To revise based on feedback, students must open a separate assignment in a second browser window—making it impossible to enforce LockDown Browser restrictions and creating opportunities for AI misuse. The question is how to require authentic, in-browser revisions while preventing AI-assisted rewriting.
Option 2: Students write essays by hand in class. Drafts are collected, redistributed the next session for peer feedback, and then returned for revision. This eliminates the AI threat but risks discomfort among students, both about peers recognizing their handwriting and about the nature of the feedback they receive.
Is there a viable third option that maintains AI safeguards, enables authentic revision within the same secure environment, and preserves the benefits of peer feedback?
ABSTRACT: The increasing integration of AI-assisted feedback tools such as ChatGPT and Grammarly in academic writing instruction raises critical questions about these tools’ long-term impact on students’ autonomous editing skills. Existing research predominantly focuses on the immediate benefits of AI feedback for writing quality; however, little is known about how sustained use of such tools influences students’ ability to self-edit over time. This study employs a quasi-experimental design to examine the longitudinal effects of AI-assisted feedback within a 16-week undergraduate academic writing course. Two gender-balanced classes are assigned to an Experimental Group (AI-assisted feedback intervention) and two gender-balanced classes to a Control Group (structured peer feedback intervention), with each intervention embedded in iterative writing tasks aligned with the course’s assessment framework.
A novel aspect of this research is the incorporation of a feedback “fade-out” phase in later writing assignments, designed to assess the extent to which students internalize editing strategies and develop independent revision skills after repeated AI or peer feedback interventions. Additionally, the study captures students’ perceptions of feedback effectiveness through reflective logs, analyzing how these perceptions correlate with actual improvements in writing performance. Gender will be examined as a moderating variable to explore potential differential effects of feedback type on male and female students. The study aims to provide empirical insights into the pedagogical implications of AI-assisted feedback for fostering long-term editing autonomy in academic writing education.
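The fade-out comparison could be operationalized as a simple retention metric: the change in mean rubric score from the last scaffolded (feedback-supported) task to the fade-out task, computed per group. A minimal Python sketch follows; all scores are invented placeholders for illustration, not real study data, and the variable names are assumptions.

```python
# Hypothetical sketch of the fade-out analysis: does each group's editing
# performance hold up once feedback is withdrawn? Scores are invented
# placeholders (0-100 rubric scale), not real study data.
from statistics import mean

scores = {
    "AI":   {"scaffolded": [72, 68, 75, 70], "fade_out": [65, 61, 70, 63]},
    "Peer": {"scaffolded": [70, 66, 73, 69], "fade_out": [69, 64, 72, 68]},
}

def retention(group: str) -> float:
    """Mean change from the scaffolded phase to the fade-out phase.

    A value near zero suggests internalized editing skills; a large
    negative value suggests dependence on the feedback source.
    """
    g = scores[group]
    return mean(g["fade_out"]) - mean(g["scaffolded"])

for group in scores:
    print(f"{group}: retention = {retention(group):+.2f}")
```

In a real analysis this descriptive metric would be supplemented with an inferential test (e.g., a mixed-design comparison of group-by-phase effects, with gender as a moderator), but the retention difference is the quantity the fade-out phase is designed to expose.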