rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information (PII) in logs — designed to support data privacy and GDPR compliance
Leverages the `gemma:3b` model via Ollama to detect PII through semantic understanding rather than pattern matching, handling obfuscated and multilingual data across 30+ PII categories including GDPR Article 9 sensitive data. The full stack—PostgreSQL, Elasticsearch, RabbitMQ, and a REST API—processes logs asynchronously with a web dashboard for visualization, deployable locally via Docker Compose.
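The detection approach described above (prompting a local model via Ollama rather than regex matching) can be sketched roughly as follows. This is an illustrative assumption, not the project's actual code: the prompt wording, the `buildPrompt`/`detectPII` function names, and the output format are invented here; only Ollama's default `/api/generate` endpoint and request shape are standard.

```typescript
// Hypothetical sketch of LLM-based PII detection against a local Ollama
// instance. Requires Node 18+ (global fetch) and a running Ollama server.

const OLLAMA_URL = "http://localhost:11434/api/generate"; // Ollama's default local endpoint

// Build a prompt asking the model to list PII found in one log line.
// The wording is illustrative, not the prompt PII Guard actually uses.
function buildPrompt(logLine: string): string {
  return [
    "Identify any personally identifiable information (PII) in the log line below.",
    "Respond with a JSON array of {type, value} objects; use [] if none is found.",
    `Log line: ${logLine}`,
  ].join("\n");
}

// Send the prompt to Ollama and return the raw model response text.
async function detectPII(logLine: string, model = "gemma:3b"): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: buildPrompt(logLine), stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Usage (needs a local Ollama with the model pulled):
// detectPII("User jane@example.com logged in from 10.0.0.5").then(console.log);
```

Because the model reads the whole line semantically, this style of check can catch obfuscated values (e.g. `jane [at] example [dot] com`) that a fixed regex would miss.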
Stars: 97
Forks: 9
Language: TypeScript
License: —
Category: —
Last pushed: Nov 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rpgeeganage/pII-guard"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
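The same endpoint can be called from code. A minimal Node 18+ sketch (global `fetch`); the response schema is not documented on this page, so the result is returned as `unknown` rather than typed:

```typescript
// Build the catalog API URL for a given GitHub owner/repo pair.
function buildUrl(owner: string, repo: string): string {
  return `https://pt-edge.onrender.com/api/v1/quality/llm-tools/${owner}/${repo}`;
}

// Fetch the quality data for one repository. The JSON shape is not
// specified here, so callers must inspect the payload themselves.
async function fetchToolData(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(buildUrl(owner, repo));
  if (!res.ok) throw new Error(`API returned ${res.status}`);
  return res.json();
}

// Usage:
// fetchToolData("rpgeeganage", "pII-guard").then(console.log);
```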
Higher-rated alternatives
- cxumol/promptmask: Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
- AgenticA5/A5-PII-Anonymizer: Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
- sgasser/pasteguard: AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
- QWED-AI/qwed-verification: Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...
- subodhkc/llmverify-npm: AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency,...