If you allow generative AI in your classroom, what kind of assignment or exam still gives you authentic evidence of individual student learning—and how do you grade it?
When AI is allowed, assignments that still show authentic learning include reflections, process-based work, case studies, oral defenses, creative projects, AI critiques, and collaborative tasks, because these reveal personal reasoning and application beyond the AI's output.
When AI tools are allowed, the assignments that still demonstrate authentic student learning are those that prioritize critical thinking, creativity, and personal engagement over fact retrieval. Project-based work, where students design, experiment, or solve real-world problems, requires active involvement that AI cannot replace. Personal reflections, journals, and essays that connect concepts to individual experience let students express their own insights. Case studies, scenario analyses, and open-ended problem-solving exercises demand judgment, reasoning, and justification, skills that AI can support but not replicate. Original research, fieldwork, and creative outputs such as art, designs, or multimedia projects likewise demand originality and interpretation. Ultimately, authentic learning persists in assignments that challenge students to apply knowledge, innovate, and demonstrate personal understanding, with AI serving as a tool rather than a substitute.
Excellent suggestions, Mohammad Asif Chouhan. Thank you. Have you used these yourself? I'm curious how it went. Any noticeable increase in engagement or comprehension?
Rafael Dean Brown Thank you! Yes, I have tried integrating reflective journals and AI critique tasks. I have found they really highlight students’ own thinking while allowing them to use AI responsibly. It’s definitely been an adjustment, but the results feel more authentic.
Atif Shahzad How did you grade the AI critique tasks? Was it based on the number of mistakes identified, the quality of the comments on the AI's work, or some other factor?
Rafael Dean Brown I graded the AI critique tasks mainly on the quality of analysis: how well students explained errors, suggested improvements, and linked their critique to course concepts, rather than just counting mistakes.