guardrails-ai/guardrails
Adding guardrails to large language models.
Guardrails helps developers build reliable AI applications by validating the output of large language models (LLMs) so it is safe, compliant, and correctly formatted. It takes an LLM's raw output and applies predefined 'guards' (validation rules), flagging or correcting issues such as toxic language, competitor mentions, or malformed data. The intended user is an AI developer or engineer responsible for integrating LLMs into applications and maintaining their quality and safety.
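As a rough illustration of that flow, here is a minimal sketch using the library's Guard API, assuming a recent guardrails release with the ToxicLanguage and CompetitorCheck validators installed from the Guardrails Hub (exact validator names, signatures, and return fields may vary between versions):

# Assumes the hub validators have been installed first, e.g.:
#   guardrails hub install hub://guardrails/toxic_language
#   guardrails hub install hub://guardrails/competitor_check
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

# Chain two guards: block competitor mentions and toxic sentences.
guard = Guard().use_many(
    CompetitorCheck(["Acme Corp"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION),
)

# Validate raw LLM output; raises an exception if a guard fails.
result = guard.validate("Our product ships next week and works great.")
print(result.validation_passed)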
6,534 stars. Actively maintained with 62 commits in the last 30 days.
Use this if you are building an application with a large language model and need to ensure its outputs are structured correctly and free from specific risks like toxicity or unwanted information.
Not ideal if you need a general-purpose data validation tool unrelated to LLM outputs, or if you want to fine-tune an LLM itself.
Stars: 6,534
Forks: 543
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 62
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/guardrails-ai/guardrails"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
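For programmatic access, a minimal Python sketch using the requests library, assuming the endpoint returns JSON (the response schema is not documented on this page):

import requests

# Keyless access is rate-limited to 100 requests/day (see above).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/guardrails-ai/guardrails"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

data = resp.json()  # assumed JSON payload; exact fields not confirmed
print(data)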
Related tools
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
AmenRa/GuardBench
A Python library for guardrail models evaluation.
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.