ContextualAI/gritlm
Generative Representational Instruction Tuning
Unifies text embedding and generation in a single model through instruction-based task routing, eliminating the need for separate retrieval and generation models. Built on instruction-tuned language models with pooling mechanisms for dense representations, it supports both zero-shot and instruction-conditioned inference across retrieval-augmented generation, semantic search, and language generation tasks. Available in multiple scales (7B to 8x7B), the framework integrates with Hugging Face's ecosystem and provides caching optimizations for efficient batch processing.
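The package exposes a single GritLM class that handles both modes: embedding requests are routed through an instruction template ending in a special <|embed|> token and pooled into dense vectors, while generation goes through the ordinary chat template. The sketch below follows the usage pattern from the project's README; the model name, template, and encode/generate calls are taken from there, but treat exact signatures as subject to change.

from gritlm import GritLM

# One checkpoint serves both modes; pass mode="embedding" to drop the LM head
# and save memory if only representations are needed.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

def gritlm_instruction(instruction):
    # Embedding prompt template: the task instruction is wrapped in the chat
    # format and terminated with the special <|embed|> token.
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

# Instruction-conditioned embeddings: queries carry a task instruction,
# retrieval documents are encoded without one.
queries = ["Generative Representational Instruction Tuning"]
docs = ["GRIT unifies text embedding and generation in a single model."]
q_rep = model.encode(queries, instruction=gritlm_instruction(
    "Given a paper title, retrieve the paper's abstract"))
d_rep = model.encode(docs, instruction=gritlm_instruction(""))

# Generation uses the same weights through the standard chat template.
messages = [{"role": "user", "content": "Summarize GRIT in one sentence."}]
inputs = model.tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(model.tokenizer.batch_decode(out)[0])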
688 stars and 12,353 monthly downloads. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Stars: 688
Forks: 49
Language: Jupyter Notebook
License: MIT
Category: Embeddings
Last pushed: Jun 25, 2025
Monthly downloads: 12,353
Commits (30d): 0
Dependencies: 5
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/ContextualAI/gritlm"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
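The endpoint can also be queried from Python. A minimal sketch of the keyless tier follows; the payload is assumed to be JSON mirroring the stats above, and no specific field names are assumed.

import requests

# Keyless tier: up to 100 requests/day, no authentication header required.
url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/ContextualAI/gritlm"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON payload with the repo/package stats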
Related tools
xlang-ai/instructor-embedding
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
liuqidong07/LLMEmb
[AAAI'25 Oral] The official implementation code of LLMEmb
ritesh-modi/embedding-hallucinations
This repo shows how foundation models hallucinate and how we can fix such hallucinations using...
hpcaitech/CachedEmbedding
A memory-efficient DLRM training solution using ColossalAI
ritesh-modi/fine-tuning-embeddings-template
This repo is a template to fine-tune embedding models using sentencetransformers based on...