guardrails and circle-guard-bench

guardrails provides a framework for adding guardrails to LLMs, while circle-guard-bench offers a benchmark for evaluating the effectiveness of such guard systems; the two projects are complementary.

Metric         guardrails        circle-guard-bench
Overall score  70 (Verified)     43 (Emerging)
Maintenance    25/25             13/25
Adoption       10/25             8/25
Maturity       16/25             15/25
Community      19/25             7/25
Stars          6,534             51
Forks          543               3
Downloads      n/a (no package)  n/a (no package)
Commits (30d)  62                0
Language       Python            Python
License        Apache-2.0        Apache-2.0
Dependents     none              none

About guardrails

guardrails-ai/guardrails

Adding guardrails to large language models.

This tool helps developers build reliable AI applications by ensuring that the output of large language models (LLMs) is safe, compliant, and correctly formatted. It takes an LLM's raw output and applies predefined 'guards', validation rules that flag or correct issues such as toxic language, competitor mentions, or malformed data. Its intended users are AI developers and engineers who integrate LLMs into applications and are responsible for their quality and safety.

Tags: AI application development, LLM output validation, AI safety, data structuring, AI engineering
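
For illustration, here is a minimal sketch of that validate-the-output flow, following the Guard/validator pattern shown in the project's public examples. The ToxicLanguage validator, its parameters (threshold, validation_method, on_fail), and the hub install command are taken from those examples and may differ across versions.

```python
# Minimal sketch of post-hoc output validation with Guardrails.
# Assumes the guardrails-ai package is installed and the ToxicLanguage
# validator has been pulled from the Guardrails Hub:
#   guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Build a guard that checks each sentence of the output for toxicity.
guard = Guard().use(
    ToxicLanguage,
    threshold=0.5,                 # toxicity score above which a sentence fails
    validation_method="sentence",  # validate sentence by sentence
    on_fail="exception",           # raise instead of silently fixing/filtering
)

# Apply the guard to raw LLM output; a failing check raises an exception here.
outcome = guard.validate("Raw LLM output to check goes here.")
print(outcome.validation_passed)   # True when all guards pass
```

The project's documentation also shows wrapping the LLM call itself in a Guard so that validation (and re-asking the model on failure) happens inline, but the post-hoc validate call above is the simplest entry point.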

About circle-guard-bench

whitecircle-ai/circle-guard-bench

First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards).

Scores updated daily from GitHub, PyPI, and npm data.