I am currently reviewing methods for hallucination detection in generative AI models, with a specific focus on large language models. If anyone has come across suitable methods that are not yet published, or has interesting thoughts on the topic, I would be very happy to hear them here.
