Evaluation patterns, release gates, and anti-hallucination techniques for developer-focused AI workflows.
Updated Mar 27, 2026 · Python
RAG evaluation workbench for retrieval recall, citation coverage, groundedness checks, and failure analysis
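Retrieval recall is the simplest of the workbench's metrics to state precisely: the fraction of known-relevant documents that appear in the top-k retrieved results. A minimal sketch follows; the function name `recall_at_k` and its arguments are illustrative, not part of the workbench's actual API.

```python
def recall_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int = 5) -> float:
    """Fraction of relevant documents that appear in the top-k retrieved results.

    Returns 0.0 when there are no relevant documents to find.
    """
    if not relevant_ids:
        return 0.0
    top_k = set(retrieved_ids[:k])
    return len(top_k & relevant_ids) / len(relevant_ids)
```

For failure analysis, queries with low recall@k are the natural starting point: if the relevant passages were never retrieved, no downstream groundedness check can save the answer.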
Retrieval-Augmented Generation (RAG) research application with groundedness evaluation, i.e. post-hoc hallucination detection, to increase the reliability of and confidence in LLM-generated output.
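Post-hoc groundedness checking means verifying, after generation, that each answer sentence is supported by the retrieved context. Production systems typically use an NLI model or an LLM judge for this; as a hedged illustration of the idea only, the sketch below uses simple lexical overlap (the function name, threshold, and scoring rule are assumptions, not the application's actual method).

```python
import re


def flag_ungrounded(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose words are poorly covered by the retrieved context.

    Toy lexical-overlap heuristic: a sentence is flagged as potentially
    hallucinated when fewer than `threshold` of its words occur in the context.
    Real systems would use entailment models instead of word overlap.
    """
    context_words = set(re.findall(r"\w+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"\w+", sentence.lower()))
        if not words:
            continue
        support = len(words & context_words) / len(words)
        if support < threshold:
            flagged.append(sentence)  # weak lexical support: candidate hallucination
    return flagged
```

A sentence-level verdict like this is what makes the check useful for failure analysis: instead of a single pass/fail score per answer, it points at the specific claims that lack support in the retrieved passages.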