Awesome-LLM4Security and Awesome-LLM-Red-Teaming

Awesome-LLM4Security
50
Established
Maintenance 6/25
Adoption 10/25
Maturity 16/25
Community 18/25
Maintenance 2/25
Adoption 9/25
Maturity 16/25
Community 15/25
Stars: 301
Forks: 41
Downloads:
Commits (30d): 0
Language:
License: MIT
Stars: 83
Forks: 12
Downloads:
Commits (30d): 0
Language:
License: MIT
No Package No Dependents
Stale 6m No Package No Dependents

About Awesome-LLM4Security

liu673/Awesome-LLM4Security

This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain.

This is a curated collection of resources for cybersecurity professionals, researchers, and enthusiasts interested in using large language models (LLMs) for security tasks. It brings together information on projects, academic papers, datasets, and related products. The goal is to provide a comprehensive reference for understanding and applying the latest advancements in AI-driven cybersecurity.

cybersecurity-research threat-intelligence vulnerability-analysis penetration-testing security-operations

About Awesome-LLM-Red-Teaming

user1342/Awesome-LLM-Red-Teaming

A curated list of awesome LLM Red Teaming training, resources, and tools.

This resource helps security researchers, AI developers, and auditors identify and exploit vulnerabilities in large language models (LLMs). It provides a curated collection of tools, guides, and research for conducting red-teaming exercises. You'll find resources ranging from practice environments for prompt injection to advanced frameworks for automated adversarial testing, allowing you to expose weaknesses in LLM security and alignment.

AI security red teaming vulnerability research LLM auditing prompt engineering

Scores updated daily from GitHub, PyPI, and npm data. How scores work