zai-org/VisualGLM-6B

Chinese and English multimodal conversational language model

Score: 46 / 100 (Emerging)

Built on ChatGLM-6B with a BLIP2-Qformer bridging visual and language representations, the model is pretrained on 30M Chinese and 300M English image-text pairs to align the two modalities. It supports parameter-efficient tuning (LoRA, QLoRA, P-tuning) via the SwissArmyTransformer framework, and INT4 quantization enables deployment on consumer GPUs with as little as 6.3GB of VRAM.
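A back-of-envelope calculation shows why INT4 quantization brings the footprint into consumer-GPU range. This sketch assumes roughly 6.2B parameters (a common figure for ChatGLM-6B-class models); the exact count, the visual encoder, and runtime activations are not modeled here, which is why the observed 6.3GB INT4 footprint is larger than the raw weight size computed below.

```python
def weight_gib(params: float, bits_per_param: float) -> float:
    """Raw weight storage in GiB for a given numeric precision."""
    return params * bits_per_param / 8 / 2**30

# Assumed parameter count for a ChatGLM-6B-class model.
PARAMS = 6.2e9

fp16 = weight_gib(PARAMS, 16)  # half precision, ~11.5 GiB
int4 = weight_gib(PARAMS, 4)   # 4-bit quantized, ~2.9 GiB
print(f"FP16 weights ~ {fp16:.1f} GiB, INT4 weights ~ {int4:.1f} GiB")
```

The gap between ~2.9 GiB of INT4 weights and the reported 6.3GB VRAM is plausibly accounted for by the visual encoder kept at higher precision, the KV cache, and activation memory.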

4,169 stars. No commits in the last 6 months.

Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 4,169
Forks: 425
Language: Python
License: Apache-2.0
Last pushed: Aug 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zai-org/VisualGLM-6B"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
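The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the URL pattern is taken from the example above, but the JSON schema of the response is not documented here, so no fields are assumed.

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (100 requests/day without a key)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("zai-org", "VisualGLM-6B")` requests the same record as the curl command shown above.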