EXAONE-3.5 and EXAONE-Deep
These are ecosystem siblings from LG AI Research: EXAONE 3.5 is the instruction-tuned base model family, and EXAONE Deep is a reasoning-specialized line built on it, optimized for math, coding, and complex problem-solving.
About EXAONE-3.5
LG-AI-EXAONE/EXAONE-3.5
Official repository for EXAONE 3.5 built by LG AI Research
About EXAONE-Deep
LG-AI-EXAONE/EXAONE-Deep
Official repository for EXAONE Deep built by LG AI Research
Reasoning-enhanced language models spanning 2.4B to 32B parameters, optimized for math, coding, and complex problem-solving tasks through specialized training. Supports multiple deployment frameworks including TensorRT-LLM, vLLM, and SGLang, with quantized variants in AWQ and GGUF formats for efficient inference. Integrates seamlessly with Hugging Face Transformers and local inference tools like llama.cpp and Ollama.
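The Hugging Face Transformers integration mentioned above can be sketched as follows. This is a minimal sketch, not taken from the repository: the model id `LGAI-EXAONE/EXAONE-Deep-2.4B` and the `trust_remote_code` requirement are assumptions based on the usual Transformers workflow for custom model classes, and running it downloads the full model weights.

```python
# Minimal sketch, assuming the 2.4B checkpoint is published on the Hugging Face
# Hub under the id "LGAI-EXAONE/EXAONE-Deep-2.4B" (an assumption, not stated on
# this page). Requires downloading the model weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-Deep-2.4B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # pick bf16/fp16 automatically where supported
    device_map="auto",       # place layers on available GPU(s)/CPU
    trust_remote_code=True,  # assumption: EXAONE may ship a custom model class
)

# Build a chat prompt with the model's own chat template, then generate.
messages = [{"role": "user", "content": "What is 12 * 34? Show your reasoning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The quantized AWQ and GGUF variants mentioned above would instead be loaded through vLLM or llama.cpp/Ollama rather than this Transformers path.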