ChatLLM and ChatGLM2-6B

ChatLLM is a unified wrapper/abstraction layer that exposes OpenAI-compatible interfaces for multiple LLMs, including ChatGLM2-6B. The two projects are complementary rather than competing: ChatLLM simplifies integration of underlying models such as ChatGLM2-6B.
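To make the abstraction-layer idea concrete, here is a minimal hypothetical sketch (this is not ChatLLM's actual API): heterogeneous model backends are registered behind one OpenAI-style chat-completions interface, so callers never touch model-specific code.

```python
# Hypothetical sketch of an OpenAI-compatible wrapper layer -- not
# ChatLLM's real API. Each backend is any callable from prompt -> reply.
from typing import Callable, Dict, List


class ChatCompatLayer:
    """Route OpenAI-style chat requests to any registered model backend."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model: str, backend: Callable[[str], str]) -> None:
        self._backends[model] = backend

    def chat_completion(self, model: str, messages: List[dict]) -> dict:
        # Collapse the message list into a single prompt string (a real
        # wrapper would apply each model's own chat template here).
        prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
        reply = self._backends[model](prompt)
        # Return an OpenAI-style response shape.
        return {
            "model": model,
            "choices": [{"message": {"role": "assistant", "content": reply}}],
        }


# Usage: a stand-in backend takes the place of a real ChatGLM2-6B model.
layer = ChatCompatLayer()
layer.register("chatglm2-6b", lambda prompt: f"echo: {prompt.splitlines()[-1]}")
resp = layer.chat_completion("chatglm2-6b", [{"role": "user", "content": "hi"}])
print(resp["choices"][0]["message"]["content"])  # echo: user: hi
```

The design choice mirrored here is the key one: clients depend only on the OpenAI request/response shape, so swapping ChatGLM2-6B for another model is a one-line backend registration.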

                   ChatLLM               ChatGLM2-6B
Overall score      60 (Established)      47 (Emerging)
Maintenance        0/25                  0/25
Adoption           17/25                 10/25
Maturity           25/25                 16/25
Community          18/25                 21/25
Stars              449                   15,645
Forks              58                    1,820
Downloads          950
Commits (30d)      0                     0
Language           Jupyter Notebook      Python
License            MIT
Flags              Stale 6m              Stale 6m, No Package, No Dependents

About ChatLLM

yuanjie-ai/ChatLLM

Work with LLMs with ease; compatible with OpenAI & LangChain. Supports 文心一言 (ERNIE Bot), 讯飞星火 (iFlytek Spark), 腾讯混元 (Tencent Hunyuan), Zhipu ChatGLM, and more.

About ChatGLM2-6B

zai-org/ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM (an open-source bilingual dialogue language model)

Based on the README, here's a technical summary: ChatGLM2-6B builds on the GLM architecture and is trained on 1.4T bilingual (Chinese-English) tokens with hybrid objectives and human preference alignment. FlashAttention extends the context length to 32K tokens (8K during dialogue training), and Multi-Query Attention enables 42% faster inference with a smaller memory footprint: under INT4 quantization, 6GB of GPU memory supports dialogues of up to 8K tokens. The model integrates with Hugging Face's transformers library and supports INT4/INT8 quantization for deployment on resource-constrained hardware.
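The memory benefit of Multi-Query Attention can be sketched with back-of-envelope arithmetic: MQA keeps a single shared key/value head instead of one per attention head, shrinking the KV cache that dominates memory at long context. The model shape below (28 layers, head dim 128, 32 query heads) is an illustrative assumption, not ChatGLM2-6B's exact configuration.

```python
# Back-of-envelope comparison of KV-cache size: standard multi-head
# attention (one KV head per query head) vs. Multi-Query Attention
# (a single shared KV head). Model shape is an assumed example.

def kv_cache_bytes(layers, seq_len, head_dim, kv_heads, bytes_per_elem=2):
    """Bytes for the K and V caches of one sequence (fp16 by default)."""
    return 2 * layers * seq_len * head_dim * kv_heads * bytes_per_elem

layers, head_dim, seq_len = 28, 128, 8192        # assumed shape, 8K context
mha = kv_cache_bytes(layers, seq_len, head_dim, kv_heads=32)  # standard MHA
mqa = kv_cache_bytes(layers, seq_len, head_dim, kv_heads=1)   # MQA

print(f"MHA KV cache: {mha / 2**30:.1f} GiB")    # 3.5 GiB
print(f"MQA KV cache: {mqa / 2**30:.2f} GiB")    # 0.11 GiB (32x smaller)
```

Under these assumptions the KV cache shrinks by the number of query heads (32x here), which is what makes long 8K-token dialogues feasible within a quantized model's tight memory budget.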

Scores updated daily from GitHub, PyPI, and npm data.