With the increasing use of AI-assisted tools such as large language models (e.g., ChatGPT) to write, revise, or edit articles, significant discrepancies remain between subjective assessments of creativity (originality, fluency, flexibility, and refinement) made by human reviewers and objective evaluations made by machines, and between public assessments and expert evaluations. These discrepancies introduce substantial biases into creativity assessment that may hinder the advancement of scientific research.

To address this issue, our team aims to develop an AI evaluation model that aligns closely with human expert reviews. Guided by specific structures and prompts, the model will be trained on large amounts of original data (e.g., participant-generated texts) together with the corresponding expert ratings, so that its evaluations correlate highly with expert judgments. Such a model could replace expert evaluation in future research. Establishing it would not only reduce the biases caused by subjectivity and limited expertise in human assessments but also lower the cost of recruiting expert reviewers.

Therefore, if your research involves participant-generated textual materials with corresponding authoritative ratings, we would greatly appreciate your contributing the original data to help advance this work. As a token of gratitude, we will grant you permission to use the AI-based creativity evaluation model in your future work. Thank you very much.
