Hallucination detection #1202

Open
@AndreaPi

Description

Detect hallucinations in LLM outputs

Details

  • What is the problem you're trying to solve?
    • detect hallucinations in LLM outputs
  • What tasks or workflows would be enabled by having support for your
    proposed feature in cleanlab?
    • selective classification, classification with abstention, and, more generally, a way to "trust" your LLM's output
  • Can you share code snippets or pseudocode describing uses of your feature?
  • Can you share any datasets that can help us assess the usefulness of the
    proposed feature?
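A minimal sketch of the selective-classification use case described above. Everything here is hypothetical: `answer_with_abstention` and the per-output `scores` are stand-ins for whatever hallucination-scoring API cleanlab might expose, where a higher score means the output is less trustworthy.

```python
# Hypothetical sketch: selective classification with abstention, gated on a
# per-output hallucination score (higher score = less trustworthy output).
# The scoring function itself is assumed to come from cleanlab; here the
# scores are just toy inputs.

def answer_with_abstention(outputs, scores, threshold=0.5):
    """Return each LLM output as-is, or None (abstain) when its
    hallucination score exceeds the threshold."""
    return [out if score <= threshold else None
            for out, score in zip(outputs, scores)]

outputs = ["Paris is the capital of France.",
           "The moon is made of green cheese."]
scores = [0.05, 0.92]  # toy scores: the second output is likely hallucinated

print(answer_with_abstention(outputs, scores, threshold=0.5))
```

With a tunable threshold, downstream code can trade coverage for reliability: lowering the threshold abstains more often but passes through fewer hallucinated answers.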
