Ampli-xD/SHIELD
This is "SHIELD", a System for Harmful explicit-content Identification and Evaluation through an LLM-Driven approach. Its primary objective is to score explicit content across 15 categories of explicitness on a scale from 0 to 100, leveraging LLMs as the primary scoring tool.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jun 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/Ampli-xD/SHIELD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
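The same endpoint can be called programmatically. A minimal Python sketch using only the standard library, assuming the endpoint path shown in the curl example above; the `X-API-Key` header name and the shape of the JSON response are assumptions, since only the URL and rate limits are documented here:

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def repo_quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_repo_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record as a dict.

    No key is needed for the free tier (100 requests/day); the
    'X-API-Key' header name used here for keyed access is an assumption.
    """
    req = Request(repo_quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(repo_quality_url("Ampli-xD", "SHIELD"))
```

Calling `fetch_repo_quality("Ampli-xD", "SHIELD")` would retrieve the same data the curl command returns.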
Higher-rated alternatives
Azure-Samples/azure-ai-document-processing-samples
A collection of samples demonstrating techniques for processing documents with Azure AI...
artitw/text2text
Text2Text Language Modeling Toolkit
aiplanethub/beyondllm
Build, evaluate and observe LLM apps
build-on-aws/langchain-embeddings
This repository demonstrates the construction of a state-of-the-art multimodal search engine,...
qianniuspace/llm_notebooks
A collection of AI application examples