InternScience/GraphGen
GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation
Constructs fine-grained knowledge graphs from source documents, then uses expected calibration error metrics to identify knowledge gaps in LLMs and prioritize high-value QA generation with multi-hop neighborhood sampling and style-controlled diversity. Integrates with Ray for distributed pipeline execution, supports multiple LLM backends (vLLM, HuggingFace Transformers, SGLang, Ollama), and provides generated data compatible with LLaMA-Factory and xtuner for downstream fine-tuning workflows.
978 stars. Actively maintained with 9 commits in the last 30 days.
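The description above says GraphGen uses expected calibration error (ECE) to find topics where a model is confidently wrong. A minimal self-contained sketch of the ECE metric itself, as a hint of how such gaps are scored; the binning scheme and example data here are illustrative assumptions, not GraphGen's actual implementation:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: sample-weighted gap between accuracy and mean confidence per bin.

    confidences: per-answer model confidence in [0, 1]
    correct:     1 if the answer was right, else 0
    """
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi]; put 0.0 in the first bin
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / total) * abs(acc - conf)
    return ece

# A model that answers with 0.9 confidence but is right only 1 time in 4
# scores a large calibration gap on that topic -- a candidate knowledge
# gap to target with synthetic QA pairs.
confs = [0.9, 0.9, 0.9, 0.9, 0.6, 0.6]
hits = [1, 0, 0, 0, 1, 0]
print(round(expected_calibration_error(confs, hits), 3))
```

Higher ECE on a topic means confidence and accuracy diverge there, which is the signal the description says drives QA prioritization.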
Stars: 978
Forks: 79
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 9
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/GraphGen"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
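The same request can be made from Python. A minimal sketch using only the standard library, assuming the endpoint pattern shown in the curl example and a JSON response body (the response schema is not documented here):

```python
import json
import urllib.request
from urllib.parse import quote

# Base URL taken from the curl example above
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def repo_url(owner: str, name: str) -> str:
    """Build the per-repo endpoint URL, escaping path segments."""
    return f"{BASE}/{quote(owner)}/{quote(name)}"

def fetch_stats(owner: str, name: str) -> dict:
    """Fetch stats for a repo (requires network access; assumes JSON)."""
    with urllib.request.urlopen(repo_url(owner, name)) as resp:
        return json.load(resp)

# URL construction only; no network call
print(repo_url("InternScience", "GraphGen"))
```

Without a key this counts against the 100 requests/day anonymous quota.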
Related tools
timothepearce/synda
A CLI for generating synthetic data
rasinmuhammed/misata
High-performance open-source synthetic data engine. Uses LLMs for schema design and vectorized...
ziegler-ingo/CRAFT
[TACL, EMNLP 2025 Oral] Code, datasets, and checkpoints for the paper "CRAFT Your Dataset:...
ZhuLinsen/FastDatasets
A powerful tool for creating high-quality training datasets for Large Language Models...
BatsResearch/bonito
A lightweight library for generating synthetic instruction tuning datasets for your data without GPT.