zjunlp/Knowledge2Data
[TASLP 2025] Spatial Knowledge Graph-Guided Synthesis for Multimodal LLMs
This project helps researchers and developers working with multimodal AI by generating synthetic datasets. It takes descriptions of objects and their spatial relationships (a 'spatial knowledge graph') and produces corresponding textual data and images. The output is a new dataset of multimodal examples, useful for training or evaluating AI models that understand both language and vision.
Use this if you need to create diverse, custom multimodal datasets where the spatial relationships between objects are important, and existing datasets don't meet your specific needs.
Not ideal if you are looking for a pre-trained, production-ready AI model, or for a tool to annotate existing images rather than generate new synthetic data.
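As an illustration of the kind of input the description mentions, a spatial knowledge graph can be thought of as a set of (subject, spatial relation, object) triples that are rendered into text for downstream image synthesis. The triple format and the `render_caption` helper below are illustrative assumptions for this sketch, not the repository's actual data schema.

```python
# Hypothetical sketch: representing a spatial knowledge graph as triples
# and rendering it into a plain-language scene description. The schema
# here is an assumption, not the project's real input format.

from typing import List, Tuple

# One spatial fact: (subject, spatial relation, object).
SpatialTriple = Tuple[str, str, str]

def render_caption(triples: List[SpatialTriple]) -> str:
    """Join spatial triples into a single descriptive sentence."""
    clauses = [f"a {s} is {r} a {o}" for s, r, o in triples]
    return "; ".join(clauses).capitalize() + "."

scene: List[SpatialTriple] = [
    ("cat", "on top of", "table"),
    ("lamp", "to the left of", "table"),
]

print(render_caption(scene))
# → "A cat is on top of a table; a lamp is to the left of a table."
```

A text-to-image model could then consume such a caption to produce the paired image, giving one multimodal example per graph.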
Stars
8
Forks
—
Language
Python
License
MIT
Category
Last pushed
Nov 01, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/Knowledge2Data"
Open to everyone: 100 requests/day with no API key. Register for a free key to get 1,000 requests/day.
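The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL pattern is taken from the curl example above, but the JSON response schema is not documented here, so the payload is returned as a plain dict rather than parsed into specific fields.

```python
# Sketch of calling the quality endpoint from Python (stdlib only).
# The response schema is an unknown here; we decode JSON generically.

import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Matches the curl example for this repository.
    print(quality_url("zjunlp", "Knowledge2Data"))
```

Unauthenticated calls count against the 100-requests/day limit noted above.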
Higher-rated alternatives
RManLuo/graph-constrained-reasoning
Official Implementation of ICML 2025 Paper: "Graph-constrained Reasoning: Faithful Reasoning on...
zjukg/KoPA
[Paper][ACM MM 2024] Making Large Language Models Perform Better in Knowledge Graph Completion
damianoduranti/LLMknowextra
LLM-Driven Knowledge Extraction: Results in Temporal and Description Logics (EKAW 2024)
hhy-huang/GraphJudge
[EMNLP'25 main] This is the official repo for the paper, Can LLMs be Good Graph Judge for...
pat-jj/KG-FIT
[NeurIPS'24] Knowledge Graph Fine-Tuning using LLMs