You can use a software package designed for coding qualitative data (QDAS) to double-check your results. AI should be able to do that too.
https://guides.library.jhu.edu/QDAS
It would be an interesting study to see how closely a researcher's coding compares with the computer's results. Also, some people have an innate ability to see the relationships within data. My hunch is that it's tied to strong ability in algebra.
After trying the "automatic coding" via ChatGPT feature in ATLAS.ti, I don't think AI is much use for coding. But double-checking results this way implies that you want inter-rater reliability, which is only of value when you have done the kind of content analysis where you count codes. Otherwise, more interpretive versions of qualitative analysis accept the subjectivity of the researcher.
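If you do want to quantify how closely a researcher's coding matches the software's, Cohen's kappa is the usual agreement statistic, since it corrects raw percent agreement for the agreement expected by chance. Here is a minimal sketch in Python, assuming each text segment received exactly one code; the code labels and the scikit-learn dependency are just illustrative, not part of any QDAS export format:

```python
# Hedged sketch: comparing a researcher's codes against software-generated
# codes for the same text segments, using Cohen's kappa.
# Assumes one code per segment; the labels below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

researcher_codes = ["stress", "coping", "support", "stress", "coping", "support"]
software_codes   = ["stress", "coping", "stress",  "stress", "coping", "support"]

kappa = cohen_kappa_score(researcher_codes, software_codes)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```

Note that kappa only makes sense for the count-the-codes style of content analysis described above; it has no role in more interpretive approaches.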
With regard to trustworthiness, one possibility would be member checks (Lincoln and Guba, 1985).
Member checking during the transcription stage can help: let some of your participants read the transcripts and give you feedback. Sharing your preliminary codes and initial themes with participants can also enhance reliability and validity.