Description
Detect hallucinations in LLM outputs
Details
- What is the problem you're trying to solve?
  - Detect hallucinations in LLM outputs.
- What tasks or workflows would be enabled by having support for your proposed feature in cleanlab?
  - Selective classification, classification with abstention, and more generally a way to "trust" your LLM output.
- Can you share code snippets or pseudocode describing uses of your feature?
  - I don't have code, but there are a few GitHub repositories implementing hallucination detection models, such as https://github.com/oneal2000/MIND. A rough sketch of the intended workflow is included after this list.
- Can you share any datasets that can help us assess the usefulness of the proposed feature?
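
To illustrate the selective-classification / abstention use case, here is a minimal sketch. It is not an existing cleanlab API: the names `answer_with_abstention`, `llm_generate`, `hallucination_score`, and the threshold value are all hypothetical placeholders for whatever interface cleanlab might expose, and the scoring function would in practice be backed by a detector such as MIND.

```python
# Minimal sketch (hypothetical API, not cleanlab's): wrap any hallucination
# detector behind a scoring function and use it to abstain on untrusted outputs.
from typing import Callable, Optional


def answer_with_abstention(
    prompt: str,
    llm_generate: Callable[[str], str],
    hallucination_score: Callable[[str, str], float],
    threshold: float = 0.5,
) -> Optional[str]:
    """Return the LLM response only if its hallucination score is below
    `threshold`; otherwise abstain by returning None."""
    response = llm_generate(prompt)
    # Higher score = more likely hallucinated (convention assumed here).
    score = hallucination_score(prompt, response)
    return response if score < threshold else None


if __name__ == "__main__":
    # Dummy components purely for illustration; a real detector (e.g. MIND)
    # and a real LLM call would be plugged in instead.
    dummy_llm = lambda prompt: "Paris is the capital of France."
    dummy_scorer = lambda prompt, response: 0.1
    print(answer_with_abstention("What is the capital of France?", dummy_llm, dummy_scorer))
```

The point of the sketch is only the shape of the workflow: a score per (prompt, response) pair plus a threshold gives selective classification "for free", since abstaining is just refusing to return low-trust outputs.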