zjunlp/Knowledge2Data

[TASLP 2025] Spatial Knowledge Graph-Guided Synthesis for Multimodal LLMs

Score: 26 / 100 (Experimental)

This project helps researchers and developers working with multimodal AI by generating synthetic datasets. It takes descriptions of objects and their spatial relationships (a 'spatial knowledge graph') and produces corresponding textual data and images. The output is a new dataset of multimodal examples, useful for training or evaluating AI models that understand both language and vision.

Use this if you need to create diverse, custom multimodal datasets where the spatial relationships between objects are important, and existing datasets don't meet your specific needs.

Not ideal if you need a pre-trained, production-ready model, or a tool for annotating existing images rather than generating new synthetic data.

multimodal-AI-development · synthetic-data-generation · knowledge-graph-engineering · computer-vision-research · natural-language-processing
No package · No dependents
Maintenance: 6 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25

Stars: 8
Forks:
Language: Python
License: MIT
Last pushed: Nov 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/Knowledge2Data"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
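A minimal Python sketch of the same request, assuming the endpoint returns a JSON body (the response schema is not documented on this page, so inspect the output before relying on specific fields):

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/Knowledge2Data"
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # fail loudly on HTTP errors (e.g. rate limiting)
data = resp.json()        # assumed JSON response; schema not shown here
print(data)               # inspect the structure before using specific fields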