In recent years, the academic landscape has undergone a profound transformation in how knowledge is produced and how scholars engage with texts, driven largely by the rise of generative artificial intelligence, particularly large language models (LLMs) such as ChatGPT. These models go beyond mere linguistic processing: they produce complex outputs ranging from literature reviews and hypothesis generation to the simulation of analytical reasoning patterns characteristic of specialized disciplines. This new reality calls for a reassessment of the epistemic roles such models can occupy, and of whether they should be regarded merely as assistive tools or as genuine participants in the production of knowledge.
Within this context, a philosophical and methodological question arises, one increasingly pressing at the intersection of technology and critical thought: it demands reflection on the boundaries of agency, subjectivity, and academic accountability.