verifywise-ai/verifywise-eval-action
GitHub Action & Python SDK to evaluate LLMs in CI/CD — gate PRs on correctness, faithfulness, hallucination, and more. Powered by VerifyWise.
Stars: 2
Forks: —
Language: Python
License: Apache-2.0
Category:
Last pushed: Apr 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/verifywise-ai/verifywise-eval-action"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
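The curl command above can also be issued from Python. A minimal sketch, assuming the endpoint returns a JSON body; the helper names `quality_url` and `fetch_quality` are illustrative, not part of any published SDK, and the response schema is not documented here, so the sketch only decodes and prints the raw JSON:

```python
# Sketch: fetch per-repo quality data from the pt-edge endpoint shown above.
# The URL pattern comes from the curl example; the response schema is an
# assumption (JSON), so we decode without interpreting specific fields.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL (hypothetical helper)."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("verifywise-ai", "verifywise-eval-action")
    print(json.dumps(data, indent=2))
```

The anonymous tier (100 requests/day) needs no authentication header, so a plain `urlopen` suffices; a keyed request would presumably add a header, but the header name is not documented here.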
Higher-rated alternatives
- FastBuilderAI/memory: FastMemory is a topological representation of text data using concepts as the primary input. It...
- syncreus/syncreus-eval: Evaluate your LLM apps with one function call. Hallucination detection, RAG scoring, and agent...
- bevinkatti/rag-harness: CLI to evaluate and compare RAG systems with RAGAS-style scoring
- masaakisakamoto/memory-os: Deterministic continuity for AI systems. Detect and repair inconsistencies across sessions — not...
- CjTruHeart/abundance-codex: Evidence-anchored narrative dataset that shifts AI reasoning from scarcity-default to...