rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt injection through innovative attack methods and research insights.
Stars: 1
Forks: 1
Language: —
License: —
Category: —
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rv427447/Cognitive-Hijacking-in-Long-Context-LLMs"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
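The curl call above can be wrapped in a small Python helper. The response schema and the mechanism for passing an API key are not documented here, so this sketch (using only the standard library) just builds the endpoint URL and issues the same anonymous GET; `API_BASE` is taken from the curl example.

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

# Base path from the curl example above; anonymous tier allows 100 requests/day.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given owner/repo."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> bytes:
    """Issue an anonymous GET and return the raw response body."""
    req = Request(quality_api_url(owner, repo),
                  headers={"Accept": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return resp.read()

url = quality_api_url("rv427447", "Cognitive-Hijacking-in-Long-Context-LLMs")
print(url)
```

Calling `fetch_quality` performs the network request itself; the URL builder is kept separate so the endpoint can be inspected without spending a request against the daily quota.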
Higher-rated alternatives
TalEliyahu/Awesome-AI-Security
Curated resources, research, and tools for securing AI systems
The-Art-of-Hacking/h4cker
This repository is maintained by Omar Santos (@santosomar) and includes thousands of resources...
aw-junaid/Hacking-Tools
This repository is a collection of ethical hacking tools and malware samples for penetration...
sigstore/model-transparency
Supply chain security for ML
jiep/offensive-ai-compilation
A curated list of useful resources that cover Offensive AI.