itsliamdowd/Redact
Leverage the benefits of large language models without leaking sensitive information.
This tool helps businesses safely use AI language models with their confidential documents. You upload a PDF, and it automatically removes sensitive details like names and addresses, replacing them with generic placeholders. The redacted content can then be analyzed by an AI, and the original sensitive details are re-inserted into the AI's response before you see it. It is aimed at anyone who must process documents containing private information through external AI services, such as HR professionals or legal teams.
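The redact-then-restore round trip described above can be sketched in a few lines. This is a hedged, minimal illustration, not Redact's actual implementation: the regex patterns, placeholder format, and the stand-in for the LLM call are all assumptions made for the example.

```python
import re

def redact(text, patterns):
    """Replace each sensitive match with a numbered placeholder.

    Returns the redacted text and a mapping from placeholder back to
    the original value, so the response can be restored later.
    """
    mapping = {}
    counter = 0

    def substitute(match):
        nonlocal counter
        counter += 1
        token = f"[REDACTED_{counter}]"  # illustrative placeholder format
        mapping[token] = match.group(0)
        return token

    redacted = text
    for pattern in patterns:
        redacted = re.sub(pattern, substitute, redacted)
    return redacted, mapping

def restore(text, mapping):
    """Re-insert the original sensitive values into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Example: the patterns here are hand-picked for the demo; a real tool
# would detect names and addresses automatically.
doc = "Contact Jane Doe at 123 Main St."
safe, mapping = redact(doc, [r"Jane Doe", r"123 Main St"])
reply = f"Summary: {safe}"          # stand-in for the external LLM call
print(restore(reply, mapping))      # Summary: Contact Jane Doe at 123 Main St.
```

Only the placeholder-bearing text ever leaves the machine; the mapping stays local, which is the core of the privacy guarantee the tool claims.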
No commits in the last 6 months.
Use this if you need to extract insights or answer questions about sensitive PDF documents using large language models without risking data leaks.
Not ideal if you need to redact information from file types other than PDFs, or if you need to manually control which specific data points are redacted.
Stars: 26
Forks: —
Language: HTML
License: —
Category:
Last pushed: Jan 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/itsliamdowd/Redact"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
QWED-AI/qwed-verification
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...