lyj20071013/Sparse-MoE-Language-Model-v1

This repository contains an implementation of a Sparse Mixture of Experts (MoE) Language Model using PyTorch. The model is designed to handle large-scale text generation tasks efficiently by leveraging multiple expert networks and a routing mechanism to dynamically select the most relevant experts for each input.
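The repository's own code is the source of truth; as a rough illustration of the routing mechanism described above, a minimal top-k sparse MoE layer in PyTorch might look like the sketch below. The expert count, top-2 gating, and layer sizes are illustrative assumptions, not this repository's actual hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    """One feed-forward expert network."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class SparseMoE(nn.Module):
    """Sparse MoE layer: a learned router scores all experts per token,
    and only the top-k experts are evaluated for each token."""
    def __init__(self, d_model: int, d_hidden: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [Expert(d_model, d_hidden) for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.size(-1))          # (num_tokens, d_model)
        logits = self.router(tokens)                # (num_tokens, num_experts)
        topk_logits, topk_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_logits, dim=-1)    # renormalize over chosen k
        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            idx = topk_idx[:, slot]                 # expert chosen in this slot
            w = weights[:, slot].unsqueeze(-1)      # its gating weight
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():                      # run expert e only on its tokens
                    out[mask] += w[mask] * expert(tokens[mask])
        return out.reshape_as(x)

# Quick shape check: two sequences of ten 64-dim token embeddings.
moe = SparseMoE(d_model=64, d_hidden=256)
print(moe(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

Because each token activates only `top_k` of the `num_experts` networks, per-token compute stays roughly constant as experts are added, which is the efficiency property the description above refers to.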

Quality score: 15 / 100 (Experimental). No commits in the last 6 months.

Flags: No License · Stale (6 months) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 2 / 25
Maturity: 1 / 25
Community: 12 / 25

Stars: 2
Forks: 1
Language: Python
License: None
Last pushed: Mar 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/lyj20071013/Sparse-MoE-Language-Model-v1"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
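For programmatic access, the same endpoint can be called from Python. This is a minimal sketch assuming the `requests` package is installed; the endpoint URL is taken from the curl example above, and since the response schema is not documented here, the sketch simply prints whatever JSON comes back.

```python
import requests

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "lyj20071013/Sparse-MoE-Language-Model-v1")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(resp.json())       # response schema is not documented here
```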