AI detection tools analyze features that are commonly present in AI-generated content but less likely to be found in human-created content. Some tools rely on statistical analysis; others monitor behavioral patterns or other signals.
If by "logic behind it" you mean the reasons for developing and using it, I think the reason has to do with detecting academic dishonesty (inappropriate authorial credit, plagiarism), and also detection of fake content.
It depends on what content and detection tool you are referring to. There are two main strategies that I know of for detection:
Watermarking of content
Content analysis
The techniques in each category depend on whether you are referring to text, images, video, audio, or multimodal content.
Watermarking is fairly easy to understand because it has long been done for traditional electronic content.
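To make the idea concrete, here is a minimal sketch of one published style of text watermarking (the "green list" approach, in which the generator biases sampling toward a pseudo-random subset of the vocabulary keyed on the previous token). The function names and the 50% green fraction are illustrative choices, not part of any specific implementation:

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    # Seed a RNG from a hash of the previous token, then mark a fixed
    # fraction of the vocabulary as "green" (preferred) tokens.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens, vocab):
    # A watermarked generator preferentially samples green tokens, so
    # watermarked text shows a green fraction well above the ~50% that
    # unwatermarked (human) text would hit by chance.
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab)
               for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)
```

A detector that knows the hashing scheme can then run a statistical test on the green fraction of a suspect text; human text has no reason to favor the green subset.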
With respect to content analysis, I will focus mainly on text, which is one of the most frequently asked-about topics (for example, arbitrarily choosing this particular split of the topic is not something LLMs do easily). I would suggest reading the last paragraph of page 4 and the beginning of page 5 in [1], which explain differences in part-of-speech selection as well as stylistic differences between humans and LLMs. Approaches for detecting these features are discussed in section 5 (jump to 5.2 for statistical methods on content features, 5.3 for neural-network methods, and 5.4 for human-assisted methods).
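As a toy illustration of the statistical side of content analysis, the sketch below extracts two stylistic features often cited in this literature: variation in sentence length ("burstiness") and vocabulary diversity (type-token ratio). These are simplified stand-ins for the feature sets the survey actually discusses, and the function name is my own:

```python
import re
import statistics

def stylistic_features(text):
    # Human writing tends to vary more in sentence length and vocabulary
    # than typical LLM output; a detector can feed such features into a
    # statistical test or a downstream classifier.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.pstdev(lengths),  # burstiness proxy
        "type_token_ratio": len(set(words)) / len(words),  # vocab diversity
    }
```

Real detectors use far richer features (part-of-speech distributions, perplexity under a reference model, and so on), but the pipeline shape is the same: text in, feature vector out, decision rule on top.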
References
[1] Wu, J., Yang, S., Zhan, R., Yuan, Y., Wong, D. F., & Chao, L. S. (2023). A survey on LLM-generated text detection: Necessity, methods, and future directions. arXiv preprint arXiv:2310.14724.
One additional thing I forgot in my post: progress in this field resembles adversarial machine learning. Once a difference is identified and highlighted as a deficiency, developers will try to close the gap, making recognition more subtle and difficult with every iteration.
AI detectors (also called AI writing detectors or AI content detectors) are tools designed to detect when a text was partially or entirely generated by artificial intelligence (AI) tools such as ChatGPT. These detectors use a blend of machine learning algorithms and pattern recognition to differentiate between content created by humans and that produced by AI.
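The "blend of machine learning and pattern recognition" can be sketched as a simple scoring stage on top of extracted features. The weights and feature names below are purely hypothetical for illustration; a real detector would learn them from labeled human and AI text:

```python
import math

def ai_likelihood_score(features, weights=None):
    # Hypothetical linear score: lower sentence-length variance and lower
    # vocabulary diversity push the score toward "AI-generated".
    # These weights are illustrative, not trained values.
    w = weights or {"sentence_len_stdev": -0.3, "type_token_ratio": -2.0}
    z = 1.0 + sum(w[k] * features.get(k, 0.0) for k in w)
    return 1.0 / (1.0 + math.exp(-z))  # squash to a 0..1 pseudo-probability
```

In practice the decision rule is a trained classifier (logistic regression, a fine-tuned neural network, etc.) rather than hand-picked weights, but the input/output contract is the same.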