IAAR-Shanghai/FastMem
Fast Memorization of Prompt Improves Context Awareness of Large Language Models (Findings of EMNLP 2024)
Score: 15 / 100 (Experimental)
No commits in the last 6 months (stale for 6 months). Not published as a package; no known dependents.
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: 24
Forks: —
Language: Python
License: Apache-2.0
Category:
Last pushed: Oct 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/IAAR-Shanghai/FastMem"
Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
Higher-rated alternatives
jncraton/languagemodels (score 67): Explore large language models in 512MB of RAM
microsoft/unilm (score 57): Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict (score 55): Inference-time scaling for LLMs-as-a-judge
bytedance/Sa2VA (score 54): Official Repo For Pixel-LLM Codebase
albertan017/LLM4Decompile (score 54): Reverse Engineering: Decompiling Binary Code with Large Language Models