Devanik21/HAG-MoE
HAG-MoE combines Transformer attention mechanisms with a hierarchical Mixture of Experts (MoE) architecture.
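For orientation, here is a minimal, hypothetical sketch of what pairing self-attention with a two-level (hierarchical) MoE feed-forward can look like in PyTorch. The class names, layer sizes, and dense routing below are illustrative assumptions, not code from this repository:

# Hypothetical sketch: attention + two-level MoE. Not HAG-MoE's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelMoE(nn.Module):
    """Route each token first to a group of experts, then within the group."""
    def __init__(self, d_model, n_groups=2, experts_per_group=2, d_ff=256):
        super().__init__()
        self.group_gate = nn.Linear(d_model, n_groups)  # level-1 router
        self.expert_gates = nn.ModuleList(               # level-2 routers
            [nn.Linear(d_model, experts_per_group) for _ in range(n_groups)]
        )
        self.experts = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(experts_per_group)
            ]) for _ in range(n_groups)
        ])

    def forward(self, x):  # x: (batch, seq, d_model)
        g_probs = F.softmax(self.group_gate(x), dim=-1)
        out = torch.zeros_like(x)
        # Dense (soft) routing for clarity; real MoE layers use sparse top-k.
        for g, (gate, group) in enumerate(zip(self.expert_gates, self.experts)):
            e_probs = F.softmax(gate(x), dim=-1)
            for e, expert in enumerate(group):
                weight = g_probs[..., g:g + 1] * e_probs[..., e:e + 1]
                out = out + weight * expert(x)
        return out

class AttentionMoEBlock(nn.Module):
    """Self-attention followed by the hierarchical MoE feed-forward."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = TwoLevelMoE(d_model)
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.ln1(x + attn_out)
        return self.ln2(x + self.moe(x))

x = torch.randn(2, 8, 64)            # (batch, seq, d_model)
print(AttentionMoEBlock()(x).shape)  # torch.Size([2, 8, 64])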
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Sep 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Devanik21/HAG-MoE"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
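For reference, a minimal Python sketch of the same request, assuming the requests library is installed; the response schema isn't documented here, so treat any parsed fields as assumptions:

# Fetch the quality data for this repository from the documented endpoint.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Devanik21/HAG-MoE"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # field names are undocumented; inspect before relying on keys
print(data)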
Higher-rated alternatives
InternLM/xtuner
A Next-Generation Training Engine Built for Ultra-Large MoE Models
SuperBruceJia/Awesome-Mixture-of-Experts
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of...
AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting
ExpertFingerprinting: Behavioral Pattern Analysis and Specialization Mapping of Experts in...
arm-education/Advanced-AI-Mixture-of-Experts
Hands-on course materials for ML engineers to implement and optimize Mixture of Experts models:...
rioyokotalab/optimal-sparsity
[ICLR 2026 Oral] Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks