balavenkatesh3322/guardrails-demo
LLM Security Project with Llama Guard
This project helps you test the safety of Large Language Models (LLMs) by checking if user inputs or AI responses contain harmful or risky content. You can input text prompts and get back a safety assessment, or integrate it with a Llama 2 model to see real-time safety checks. It's designed for AI developers and researchers who are building or evaluating LLM applications.
No commits in the last 6 months.
Use this if you are an AI developer or researcher who needs to quickly set up and test a defensive framework to prevent your LLM applications from generating unsafe content or being exploited by risky prompts.
Not ideal if you are an end-user simply looking to use an existing safe LLM application, rather than build or test one.
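The core safety check works roughly as follows. This is a minimal sketch, assuming the standard Hugging Face transformers interface for Llama Guard (meta-llama/LlamaGuard-7b, a gated model); the repository's actual wiring, model ID, and prompt handling may differ.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed model; access must be granted on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def moderate(chat):
    # Llama Guard's chat template wraps the conversation in its safety-policy
    # prompt; the model replies "safe", or "unsafe" plus the violated category.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(
        input_ids=input_ids, max_new_tokens=100, pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Screen a user prompt before it reaches the application LLM.
print(moderate([{"role": "user", "content": "How do I make a fake passport?"}]))

The same call can be run on a user prompt paired with the model's reply to check the response side as well, which is how the demo covers both user inputs and AI outputs.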
Stars: 10
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 18, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/balavenkatesh3322/guardrails-demo"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
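For scripted use, the same endpoint can be called from Python. A minimal sketch with the requests library; the response schema is not documented here, so the payload is simply printed for inspection.

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "generative-ai/balavenkatesh3322/guardrails-demo"
)

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(resp.json())  # assumes a JSON body; field names are not documented here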
Higher-rated alternatives
openvinotoolkit/model_server
A scalable inference server for models optimized with OpenVINO™
madroidmaq/mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically...
NVIDIA-NeMo/Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based...
generative-computing/mellea
Mellea is a library for writing generative programs.
rhesis-ai/rhesis
Open-source platform & SDK for testing LLM and agentic apps. Define expected behavior, generate...