With the increasing use of AI-assisted tools, such as large language models like ChatGPT, to write, revise, and edit articles, significant discrepancies remain between the subjective creativity assessments (originality, fluency, flexibility, and refinement) made by human reviewers and the objective evaluations produced by machines. Similar discrepancies also arise between lay public assessments and expert evaluations, introducing substantial biases into creativity assessment that may hinder the advancement of scientific research. To address this issue, our team aims to develop an AI evaluation model that aligns closely with human subjective reviews. Establishing such a model would not only reduce the biases caused by subjectivity and limited expertise in human assessments, but also lower the cost of recruiting expert reviewers.
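As a minimal sketch of what "aligning closely with human subjective reviews" could mean in practice, the agreement between a candidate model's creativity scores and human ratings can be quantified with a rank correlation per dimension. The example below is illustrative only; the four dimension names follow the paragraph above, while the sample ratings and scoring setup are assumptions, not part of the proposed model.

```python
# Illustrative sketch (assumed setup): quantify how well a model's creativity
# scores align with human reviewer ratings using Spearman rank correlation.
# The sample data below are hypothetical placeholders.
from scipy.stats import spearmanr

DIMENSIONS = ["originality", "fluency", "flexibility", "refinement"]

# Hypothetical ratings for five articles on a 1-10 scale.
human_scores = {
    "originality": [7, 4, 9, 5, 6],
    "fluency":     [8, 6, 7, 5, 9],
    "flexibility": [6, 5, 8, 4, 7],
    "refinement":  [7, 5, 8, 6, 6],
}
model_scores = {
    "originality": [6, 5, 9, 4, 7],
    "fluency":     [8, 5, 8, 6, 9],
    "flexibility": [5, 5, 7, 4, 8],
    "refinement":  [6, 6, 8, 5, 7],
}

# Per-dimension alignment: a correlation near 1.0 means the model ranks
# articles in nearly the same order as the human reviewers do.
for dim in DIMENSIONS:
    rho, p_value = spearmanr(human_scores[dim], model_scores[dim])
    print(f"{dim:>12}: rho = {rho:.2f} (p = {p_value:.3f})")
```

Under this framing, "closely aligned" can be stated as a concrete target (for instance, a Spearman correlation above some threshold on each dimension), which also makes the reduction in reliance on expert reviewers measurable.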