Transformer Categories
Transformer Architecture Tutorials
Educational implementations and hands-on learning resources covering transformer fundamentals, attention mechanisms, and core architecture components. Does NOT include domain-specific applications (math solving, embeddings, RL), research papers on transformer theory, or production-grade models.
267 models
Local LLM Deployment
Tools and resources for running, hosting, and serving open-source LLMs locally or on private infrastructure without cloud dependencies. Includes deployment platforms, free API gateways, optimization guides, and access control for self-hosted models. Does NOT include model training, fine-tuning frameworks, or cloud-based LLM services.
245 models
LoRA/QLoRA Fine-tuning
Tools and frameworks for parameter-efficient fine-tuning of LLMs using LoRA, QLoRA, and related subspace tuning methods on consumer hardware. Does NOT include general fine-tuning without these specific techniques, model compression, or task-specific applications unless they primarily demonstrate these adaptation methods.
206 models
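Since every repository in this category revolves around the same low-rank update, a minimal sketch of it may help orient readers; shapes and constants are illustrative, written in plain NumPy rather than any listed framework:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16  # hidden size, LoRA rank, scaling factor

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init

def lora_forward(x):
    # base path plus low-rank update, scaled by alpha / r
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(1, d))
# with B zero-initialised, LoRA output equals the frozen model's output
assert np.allclose(lora_forward(x), x @ W.T)
```

QLoRA applies the identical update on top of a 4-bit-quantized frozen base; only the storage of W changes, not the adapter math.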
ML Foundations Curricula
Educational repositories, courses, and comprehensive learning materials covering foundational ML/DL concepts, theory, and hands-on implementations across multiple domains. Does NOT include specialized applications, production tools, or papers focused on a single narrow task.
183 models
Review Sentiment Classification
Fine-tuning transformers for sentiment analysis on customer reviews (product, airline, hotel, etc.) across different platforms and domains. Does NOT include aspect-based sentiment analysis, emotion detection, or sentiment analysis applied to non-review text (tweets, news, social media).
171 models
Interactive AI Chat UIs
Complete chat interfaces and applications for conversational AI interactions, with emphasis on user experience, multi-model comparison, and character/roleplay functionality. Does NOT include backend APIs without UI, inference engines, or specialized chatbots for specific domains (medical, legal, etc.).
170 models
LLM Inference Engines
Optimized inference engines and serving systems for deploying and running large language models efficiently. Focuses on throughput, latency, memory optimization, and production deployment. Does NOT include training frameworks, fine-tuning methods, quantization techniques, or model architecture implementations.
153 models
LLM Training Experimentation
Repositories for training, fine-tuning, and experimenting with large language models including tutorials, frameworks, and custom implementations. Does NOT include deployment tools, specific downstream applications (chatbots, summarization), or model evaluation/analysis.
151 models
GPT2 Pretraining Fine-tuning
Tools for pretraining, fine-tuning, and implementing GPT-2 models from scratch, including language-specific variants and inference optimization. Does NOT include downstream applications like question-answering or summarization, nor other model architectures beyond GPT-2 variants.
128 models
RLHF Alignment Training
Tools and frameworks for training language models using reinforcement learning from human feedback (RLHF), direct preference optimization (DPO), and related alignment techniques. Includes implementations of RLHF pipelines, preference learning methods, and safety-focused training approaches. Does NOT include general safety evaluation, jailbreak detection, or post-hoc alignment analysis without training components.
106 models
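As a concrete anchor for the preference-optimization methods named above, here is a minimal sketch of the per-example DPO loss; the log-probabilities below are made-up numbers, not outputs of any real model:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # DPO: push the policy's margin on (chosen - rejected) above the
    # reference model's margin, through a logistic loss
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))

# a policy that prefers the chosen response more than the reference does
# gets a loss below log(2); an indifferent policy sits exactly at log(2)
assert dpo_loss(-1.0, -3.0, -2.0, -3.0) < math.log(2)
assert abs(dpo_loss(-2.0, -2.0, -2.0, -2.0) - math.log(2)) < 1e-9
```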
Text Summarization Transformers
Tools for condensing text documents into shorter summaries using transformer models (both extractive and abstractive methods). Does NOT include general document processing, translation, or tools primarily focused on other NLP tasks like question-answering or content detection.
100 models
Conversational Chatbot Applications
End-to-end chatbot systems for specific domains or use cases (customer service, education, scheduling, open-domain conversation). Does NOT include chatbot frameworks/platforms, individual model implementations, or general conversational AI components like intent detection libraries.
96 models
Multilingual LLM Adaptation
Tools for adapting and fine-tuning large language models for non-English languages and specific domains/dialects. Includes instruction-tuning, domain-specific pretraining, and language-specific model development. Does NOT include general LLM frameworks, English-only model implementations, or application-specific fine-tuning for tasks like sentiment analysis.
91 models
Multimodal Vision Language
89 models
Mathematical Reasoning Transformers
Tools for training transformers to solve mathematical and symbolic reasoning problems through techniques like pretraining, reinforcement learning, and neuro-symbolic methods. Does NOT include general question-answering, commonsense reasoning without mathematical focus, or pure symbolic solvers without neural components.
84 models
3D Vision Transformers
Tools for 3D computer vision tasks using transformers, including depth estimation, multi-view geometry, structure-from-motion, point cloud processing, 3D pose estimation, and novel view synthesis. Does NOT include general 2D vision tasks, 2D pose estimation, or 3D shape generation without vision inputs.
83 models
Transformer Frameworks Wrappers
High-level libraries and frameworks that simplify transformer model usage through abstraction layers, simplified APIs, and domain-specific implementations. Includes wrapper libraries, unified interfaces for multiple tasks, and framework integrations (Unity, Java, Go, MindSpore). Does NOT include task-specific applications (summarization, classification, and QA have their own categories), deployment tools, or Android/mobile-specific implementations.
80 models
AI-Powered Business Analytics
Tools that apply LLMs to analyze structured data (CSV, databases, sales records) and generate actionable business insights, visualizations, and intelligence reports. Does NOT include general data visualization, code analysis tools, or document/research paper explanation systems.
80 models
LLM Fine-tuning
77 models
Messaging Platform Chatbots
AI chatbots deployed on messaging platforms (Telegram, Discord, Twitch, WhatsApp, QQ) and chat interfaces. Does NOT include general-purpose chatbot frameworks, retrieval systems, or chatbots without platform-specific integration.
75 models
LLM Quantization Methods
Tools and implementations for quantizing large language models using techniques like GPTQ, AWQ, and KV cache compression to reduce model size and inference costs. Does NOT include general model compression via pruning, distillation, or training optimization.
71 models
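The GPTQ and AWQ methods above are considerably more sophisticated, but the baseline they improve on, round-to-nearest symmetric int8 quantization, fits in a few lines (a sketch, not any specific library's API):

```python
import numpy as np

def quantize_int8(w):
    # symmetric per-tensor quantization: map [-max|w|, max|w|] to [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# round-to-nearest keeps reconstruction error within half a quantization step
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Production methods differ mainly in how they pick scales (per-channel, activation-aware) and in correcting the error this rounding introduces.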
Text Classification Transformers
Fine-tuning and deploying transformer models (BERT, DistilBERT, RoBERTa, etc.) for document and text categorization tasks. Includes multi-class, multi-label, and domain-specific classification. Does NOT include sequence labeling (NER, POS tagging), semantic similarity, or ticket triage systems with additional components like sentiment analysis or routing logic.
71 models
BERT Model Implementations
PyTorch and framework-specific implementations of BERT and BERT-variant architectures (RoBERTa, DistilBERT, etc.), including pretraining, finetuning libraries, and language-specific BERT models. Does NOT include task-specific applications (NER, classification, QA), downstream finetuning notebooks, or non-BERT transformer implementations.
68 models
Multi-agent Orchestration
Frameworks and platforms for building, coordinating, and managing multiple AI agents working together with tool integration, task delegation, and inter-agent communication. Does NOT include single-agent chatbots, prompt engineering frameworks, or agent evaluation benchmarks.
66 models
HuggingFace Learning Resources
Educational materials, tutorials, courses, and guided explorations for learning Hugging Face libraries and Transformers fundamentals. Does NOT include domain-specific applications, model implementations, or deployment tools—only beginner-friendly learning content and crash courses.
65 models
Question Answering Systems
Tools for extractive and generative question answering using transformer models on text corpora. Does NOT include knowledge distillation, model optimization, or QA frameworks without transformer-based inference components.
64 models
Time Series Forecasting Transformers
Tools applying transformer architectures to time series forecasting tasks, including LSTF, energy/weather prediction, and temporal modeling. Does NOT include general time series analysis, non-transformer forecasting methods, or transformers applied to other domains.
61 models
LLM Terminal Automation
CLI tools and shell interfaces that leverage LLMs to automate command execution, shell scripting, and terminal workflows. Does NOT include chatbots, general LLM APIs, or non-terminal-focused applications.
61 models
NLP Learning Coursework
Academic repositories, course materials, and learning projects focused on foundational NLP concepts and assignments. Includes university courses, study guides, and educational implementations. Does NOT include production tools, pre-built model libraries, or specialized NLP applications (those belong in task-specific categories).
60 models
Transformer Interpretability Mechanistic
Tools for understanding transformer internals through visualization, attribution analysis, and mechanistic reverse-engineering of learned circuits and representations. Does NOT include general explainability frameworks, dataset analysis tools, or applications built on transformers.
57 models
LLM Scaling Architecture
56 models
Vision Language Models
Tools and implementations for multimodal AI models that combine vision and language processing for tasks like VQA, image captioning, and visual reasoning. Does NOT include general multimodal fusion, text-to-image generation, or single-modality models.
56 models
Medical Image Segmentation Transformers
Tools for segmenting anatomical structures and regions in medical images using transformer architectures (including hybrid CNN-transformer models). Does NOT include general medical image diagnosis, classification, or non-transformer segmentation methods.
53 models
Hate Speech Detection
Tools and models for identifying, classifying, and mitigating hate speech, offensive language, and toxic content in text. Does NOT include general sentiment analysis, stance detection, or content moderation for non-hateful policy violations.
52 models
Text-to-Image Generation
Tools for generating, manipulating, and editing images from text prompts using diffusion models and related generative techniques. Does NOT include general image classification, detection, or non-generative image processing tasks.
50 models
LLM Frameworks Libraries
50 models
Emotion Detection Transformers
Tools for detecting, classifying, and analyzing emotions in text using transformer models. Includes APIs, datasets, and models for multi-emotion recognition. Does NOT include general sentiment analysis, mental health applications, or conversational AI systems without emotion-specific focus.
50 models
Neural Machine Translation
Tools for translating text between languages using transformer-based neural models. Includes multilingual translators, code-switched translation, and language pair-specific translation systems. Does NOT include speech translation, document OCR, or general localization platforms.
49 models
Prompt Engineering Security
Tools for engineering, testing, executing, and securing prompts in LLM workflows—including batching systems, injection detection, jailbreak research, and multi-model prompt distribution. Does NOT include general prompt templates, LLM APIs, or non-security-focused prompt optimization.
49 models
Power Transformer Design
Tools for simulating, optimizing, and analyzing electrical transformers (magnetic cores, efficiency, fault detection, scaling). Does NOT include transformer neural network architectures, ML model optimization, or inference deployment.
48 models
Model Evaluation Diagnostics
Tools for systematically evaluating, diagnosing, and benchmarking transformer models across NLI, WSD, and other NLP tasks using standard test sets and evaluation frameworks. Does NOT include general model training, fine-tuning without evaluation focus, or language-specific model overviews.
48 models
LLM Implementation From Scratch
Educational repositories focused on building Large Language Models from first principles using PyTorch, emphasizing step-by-step understanding of transformer architecture, tokenization, and training mechanics. Does NOT include fine-tuning existing models, inference optimization, or production deployment frameworks.
44 models
OCR Document Extraction
Tools for extracting text and structured data from images, PDFs, and documents using transformer-based OCR models. Does NOT include general document analysis, LLM-based summarization, or post-extraction processing (summarization/Q&A).
44 models
Named Entity Recognition
Tools and datasets for identifying and classifying named entities (persons, organizations, locations, etc.) in text using transformer models. Does NOT include coreference resolution, general text classification, or entity linking/disambiguation.
44 models
Streamlit LLM Interfaces
Web applications built with Streamlit for interactive interfaces to LLMs and document querying. Includes chat frontends, PDF Q&A tools, and vision model interactions. Does NOT include backend LLM infrastructure, training frameworks, or non-Streamlit UI implementations.
44 models
LLM Implementation Tutorials
43 models
Browser-Based ML Inference
Tools and frameworks for running pretrained transformer models directly in browsers and client-side environments (JavaScript/TypeScript/WebAssembly) without server backends. Does NOT include server-side model deployment, model training, or non-transformer ML frameworks.
43 models
Transformer Training Optimization
Tools, frameworks, and techniques for accelerating transformer model training and inference through hardware-specific optimizations, parallelism strategies, and performance tuning. Does NOT include model compression/pruning, application-specific fine-tuning, or inference deployment platforms.
42 models
Vision Transformer Implementations
Reference implementations and educational repositories of Vision Transformer architectures across frameworks (TensorFlow, PyTorch, Keras). Includes core ViT models and variants for standard vision tasks. Does NOT include specialized vision-language models, 3D vision, medical imaging, or hybrid architectures that significantly depart from standard ViT design.
41 models
Fake News Detection
Tools for detecting and classifying false, misleading, or unverified news articles and claims using transformers and NLP models. Does NOT include general text classification, stance detection, or misinformation prevention frameworks without a news/claim verification focus.
40 models
LLM Reasoning Research
39 models
LLM Benchmark Leaderboards
Comprehensive evaluation frameworks, benchmarks, and leaderboards for comparing LLM performance across diverse tasks and domains. Includes standardized metrics, multi-model comparisons, and scoring systems. Does NOT include performance profiling tools, inference optimization, or model training frameworks.
39 models
LLM Learning Resources
38 models
Therapeutic Chatbot Applications
AI chatbots specifically designed for mental health support, emotional counseling, and therapeutic conversations with emotion detection capabilities. Does NOT include general-purpose chatbots, mental health text classification without conversation, or non-therapeutic conversational systems.
38 models
Math Reasoning Datasets
37 models
Multimodal Fusion Transformers
Tools for combining multiple input modalities (text, image, audio, video, tabular data) using transformer architectures to perform unified tasks. Does NOT include single-modality models, recommendation systems, or domain-specific applications like robotics/translation unless multimodal fusion is the primary focus.
37 models
Protein Transformers ML
Tools for applying transformer models to protein-related tasks including structure prediction, function prediction, binding/interaction analysis, and protein design. Does NOT include general protein analysis tools, genomic/DNA sequence analysis, or non-transformer-based bioinformatics methods.
36 models
Text-to-Speech (TTS)
Tools for converting written text into spoken audio using transformer models and neural vocoding. Includes TTS engines, voice synthesis systems, and voice cloning capabilities. Does NOT include speech recognition, speech-to-text, audio classification, or general audio processing without text input.
35 models
Llama Model Implementations
Educational and production implementations of the Llama model architecture from scratch in various frameworks (PyTorch, JAX, NumPy). Includes simplified variants, language-specific adaptations, and training/inference code. Does NOT include fine-tuning frameworks, detection tools, or applications built on top of Llama.
35 models
Vision Language Instruction Tuning
Tools for training and fine-tuning multimodal models that combine vision and language through instruction-based learning. Includes efficient architectures, video understanding, and grounded vision-language models. Does NOT include general vision transformers, image captioning without instruction tuning, or non-multimodal LLM fine-tuning.
34 models
ViT Image Classification
Tools and implementations for training Vision Transformers on image classification tasks across various datasets (MNIST, CIFAR-10, custom domains). Includes from-scratch implementations, fine-tuning tutorials, and comparative studies. Does NOT include vision-language models, object detection, medical imaging, 3D vision, or other downstream vision tasks beyond classification.
34 models
Financial Return Prediction
Tools for predicting stock returns, asset prices, and portfolio performance using transformer and deep learning models. Includes factor modeling, cross-sectional forecasting, and portfolio optimization. Does NOT include general time-series forecasting, sentiment analysis, or trading execution systems.
34 models
AI-Powered SaaS Startups
Early-stage SaaS applications and hackathon projects that use AI/LLMs as core features to solve specific problems (learning, growth, finance, productivity). Does NOT include foundational ML models, infrastructure tools, or general-purpose AI frameworks.
34 models
ML API Deployment
Tools and frameworks for deploying transformer models as production-ready APIs using FastAPI, Flask, or similar web services with containerization and inference optimization. Does NOT include model training, fine-tuning frameworks, or non-API deployment methods like static model serving.
34 models
Music Generation Transformers
Tools for generating musical audio, MIDI, or symbolic music using transformer models and LLMs from text prompts or conditional inputs. Does NOT include lyrics-only generation, music translation, or rhythm game mapping without audio synthesis.
33 models
Korean Language Models
Pretrained transformer models specifically designed for Korean language processing, including BERT, ELECTRA, and specialized variants. Does NOT include general multilingual models, non-Korean language models, or downstream task-specific applications (unless they primarily showcase the Korean model architecture itself).
33 models
Medical Image Diagnosis Transformers
Tools for diagnosing medical conditions from medical imaging data using transformer-based models (ViT, Swin, DINOv2, etc.). Includes skin lesions, tumors, retinopathy, and cancer detection. Does NOT include general medical text analysis, non-transformer medical imaging, or non-diagnostic medical applications.
32 models
Transformer Architecture Education
31 models
Resume Job Matching
Tools for matching resumes against job descriptions using AI/transformers to score compatibility, extract skills, identify gaps, and recommend relevant opportunities. Does NOT include general resume writing, interview preparation systems, or fraud detection in job postings.
31 models
Semantic Textual Similarity
Tools for measuring and comparing semantic similarity between text passages using transformer embeddings and contextual analysis. Does NOT include general embedding extraction, text classification, or semantic communication without explicit similarity scoring.
31 models
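The scoring step these tools share is cosine similarity over embeddings. A toy sketch with hand-made 3-d vectors; a real system would obtain the embeddings from a transformer encoder:

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two embedding vectors, in [-1, 1]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy "embeddings"; semantically close texts get nearby vectors
cat = np.array([0.9, 0.1, 0.0])
kitten = np.array([0.8, 0.2, 0.1])
car = np.array([0.0, 0.1, 0.9])

assert cosine_similarity(cat, kitten) > cosine_similarity(cat, car)
```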
LLM Fine-tuning Frameworks
30 models
Academic Thesis Repositories
Collections of code, datasets, and implementations from academic theses and dissertations using transformers. Includes master's theses, PhD theses, and bachelor theses across various domains. Does NOT include published research papers, standalone production tools, or tutorials.
30 models
Multi-provider LLM Interfaces
CLI, web, and desktop applications that provide unified interfaces to interact with multiple LLM providers (OpenAI, Claude, local models, etc.) through a single tool. Does NOT include specialized agent frameworks, RAG systems, or single-provider wrappers.
29 models
LLM Interpretability Explainability
29 models
Retrieval Augmented Generation
Tools for building RAG systems that combine vector search, document retrieval, and LLMs to answer questions over custom data sources. Does NOT include general question-answering systems, semantic search libraries, or document extraction without the generation component.
29 models
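The retrieve-then-generate loop these tools implement can be caricatured in pure Python; the corpus, word-overlap scorer, and string-formatting "generator" below are stand-ins for a vector store and an LLM:

```python
# Toy retrieve-then-read pipeline; everything here is illustrative.
corpus = [
    "LoRA adds low-rank adapters to frozen weights.",
    "RAG retrieves documents and feeds them to a generator.",
    "Whisper transcribes speech to text.",
]

def retrieve(query, k=1):
    # rank documents by word overlap with the query; a real system
    # would rank by embedding similarity from a vector index
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def answer(query):
    context = " ".join(retrieve(query))
    # placeholder for an LLM call conditioned on the retrieved context
    return f"Based on: {context}"

print(answer("how does RAG feed retrieved documents to a generator?"))
```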
LLM Compression Optimization
28 models
LLM Knowledge Distillation
28 models
Image Captioning Transformers
Tools for generating textual descriptions from images and videos using transformer-based encoder-decoder architectures. Includes image-to-text, video captioning, and dense captioning systems. Does NOT include general vision-language models for other tasks (VQA, retrieval), text-to-image generation, or vision-only feature extraction.
28 models
Domain Specific Benchmarks
27 models
Diffusion Language Models
27 models
Text Clustering Topic Modeling
Tools for unsupervised discovery and organization of text documents through clustering, dimensionality reduction, and topic extraction using transformer embeddings. Does NOT include supervised text classification, document retrieval/search, or general semantic similarity tasks.
26 models
BLIP Image Captioning
End-to-end image captioning systems using BLIP models, including web interfaces, fine-tuning, batch processing, and caption generation. Does NOT include general vision-language models, CLIP embeddings, or non-captioning vision tasks like classification or object detection.
25 models
T5/mT5 Fine-tuning
Tools and frameworks for training, fine-tuning, and adapting T5 and mT5 transformer models for specific tasks (paraphrasing, simplification, language identification, domain-specific applications). Does NOT include general question-answering systems, content detection, or applications that use pre-trained models without fine-tuning code.
24 models
Graph Transformers
Tools and architectures that combine transformer mechanisms with graph neural networks for structured data. Includes graph attention, graph tokenization, and transformer-based models for molecular, biological, and relational graph data. Does NOT include general transformers, standard GNNs without transformer components, or domain-specific applications using transformers on non-graph data.
24 models
Creative Text Generation
Tools for generating creative written content (poetry, fiction, stylized text) using transformers. Does NOT include general-purpose text generation, data-to-text, or structured decoding applications.
24 models
CLIP Image Embeddings
Tools for generating and working with CLIP image-text embeddings, including implementations, fine-tuning, and lightweight variants. Does NOT include general vision-language models, text-to-image generation, or multimodal fusion frameworks.
23 models
Instruction Tuning Datasets
23 models
LLM Knowledge Editing
21 models
Whisper Speech Transcription
Tools and applications for automatic speech recognition (ASR) and audio transcription using Whisper models. Includes implementations with various interfaces (API, GUI, web), fine-tuning for specific languages/accents, and integration with other AI systems. Does NOT include text-to-speech, voice cloning, audio classification without transcription, or general speech processing unrelated to transcription.
21 models
Vision Transformer Classification
Tools and models for image classification using transformer architectures (Vision Transformers, SigLIP, BEiT, etc.). Does NOT include general image captioning, vision-language retrieval, or multi-label classification frameworks without transformer-based implementations.
21 models
Audio Classification Transformers
Tools for classifying, detecting, and identifying audio events, speech, and sound types using transformer models. Includes speaker identification, sound event detection, environmental sound classification, and biomedical audio analysis. Does NOT include music generation, speech synthesis, or general audio processing without classification objectives.
21 models
Semantic Search Retrieval
Tools for building search and retrieval systems using transformer-based semantic matching, dense embeddings, and ranking models. Includes hybrid retrieval (dense + keyword), reranking, and domain-specific search applications. Does NOT include general question-answering systems, recommendation engines, or search UI/autocomplete without semantic matching focus.
21 models
Tokenizer Libraries
Libraries and implementations for tokenization across programming languages and frameworks. Includes tokenizer training, conversion, alignment, and optimization. Does NOT include higher-level NLP tasks, token classification, or downstream language model applications.
20 models
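Most tokenizers in this category are BPE-family. A single BPE merge step, the core operation their trainers repeat, can be sketched on a toy corpus (no real library involved):

```python
from collections import Counter

def most_frequent_pair(tokens):
    # count adjacent symbol pairs across the token sequence
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge(tokens, pair):
    # replace every occurrence of the pair with one merged symbol
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("hug pug hugs")
pair = most_frequent_pair(tokens)  # ('u', 'g') occurs three times
tokens = merge(tokens, pair)
assert tokens.count("ug") == 3
```

A real trainer repeats this until a target vocabulary size is reached, recording the merge order so it can be replayed at tokenization time.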
Sparse Attention Optimization
20 models
Evaluation Frameworks Metrics
19 models
Molecular Generation Transformers
Tools for generating novel molecules and chemical compounds using transformer-based language models (SMILES, SELFIES, molecular representations). Includes fine-tuning, property prediction, and reinforcement learning for drug discovery. Does NOT include general protein folding, retrosynthesis planning, or non-transformer molecular modeling approaches.
19 models
Mixture of Experts LLMs
19 models
Study Aid Generators
AI tools that transform educational content (notes, documents, study materials) into structured learning aids like flashcards, quizzes, and summaries. Does NOT include full learning management systems, course platforms, or general document processing tools.
19 models
Object Detection Transformers
Tools and implementations of transformer-based object detection models (DETR variants and extensions) for localizing and classifying objects in images. Does NOT include pose estimation, human-object interaction detection, or general vision transformers for classification tasks.
19 models
Financial Sentiment Analysis
Tools for analyzing sentiment in financial texts (news, earnings calls, disclosures) to extract market insights and investment signals. Does NOT include general sentiment analysis, stock price prediction models without sentiment components, or trading bots that don't focus on sentiment extraction.
19 models
Essay Scoring Grading
Tools for automatically scoring, grading, and evaluating student essays and written assignments using transformers and NLP. Includes rubric generation and proficiency assessment. Does NOT include general text classification, data augmentation libraries without essay-grading context, or answer evaluation systems for non-essay formats.
19 models
Parameter Efficient Adapters
Tools and libraries for implementing adapter modules, LoRA, and other parameter-efficient transfer learning methods for transformers. Includes adapter frameworks, modular fine-tuning approaches, and techniques to reduce trainable parameters. Does NOT include full model fine-tuning, general compression/pruning methods, or domain-specific applications without adapter focus.
18 models
LLM Inference Serving
18 models
Multimodal Vision Language Models
18 models
AI Content Detection
Tools for detecting AI-generated content (text, images, video) and identifying synthetic/manipulated media. Does NOT include content generation, deepfake creation, or code analysis unrelated to AI-generation detection.
18 models
Recommendation Systems Transformers
Tools for building personalized recommendation engines using transformer models, embeddings, and neural architectures across domains (e-commerce, content, food, books, etc.). Does NOT include general ranking/search systems, ranking algorithms without personalization, or information extraction tools.
18 models
LLM Pruning Compression
Tools and methods for reducing the size and computational cost of large language models through structural pruning, layer removal, and parameter elimination. Does NOT include quantization, distillation-only approaches, or general model optimization techniques.
17 models
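As a point of reference for this category, the simplest baseline, unstructured magnitude pruning, looks like this; the structural and layer-removal methods listed above are more involved:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # zero out the smallest-magnitude weights; a crude but common baseline
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

w = np.random.default_rng(0).normal(size=(8, 8))
pruned = magnitude_prune(w, sparsity=0.5)
assert (pruned == 0).mean() >= 0.5  # at least half the weights removed
```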
LLM Research Curation
17 models
Bias Detection Transformers
Tools for detecting, measuring, and mitigating biases (social, textual, visual, linguistic) in transformer models and NLP systems. Does NOT include general model interpretability, fairness frameworks without bias focus, or robustness testing unrelated to bias.
17 models
LLM Domain Datasets
16 models
Machine Translation Transformers
16 models
LLM CUDA Optimization
15 models
Disaster Tweet Classification
Tools for classifying tweets and social media posts to identify disaster-related content, emergency situations, and crisis events using transformer models. Does NOT include general text classification, fake news detection, or non-emergency sentiment analysis.
15 models
Cybersecurity Threat Detection
Tools and models for detecting security threats including malware, vulnerabilities, intrusions, and network attacks using transformer-based NLP and ML approaches. Does NOT include general cybersecurity frameworks, vulnerability databases, or non-ML security tools.
15 models
Graph Language Models
14 models
LLM Quantization Techniques
13 models
Attention Mechanism Implementations
13 models
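The one operation every repository in this category implements is scaled dot-product attention; a minimal NumPy sketch (shapes illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilised
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (4, 8)
assert np.allclose(w.sum(axis=-1), 1.0)  # each query's weights sum to 1
```

Multi-head attention runs this in parallel over several learned projections and concatenates the results.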
Wav2Vec2 Speech Recognition
Fine-tuning and deployment of Wav2Vec2 models for automatic speech recognition (ASR) tasks, including multilingual and language-specific implementations. Does NOT include general speech-to-text pipelines, voice translation systems, or audio classification without ASR components.
13 models
Speculative Decoding Algorithms
12 models
LLM Hallucination Mitigation
12 models
PHP AI SDKs
PHP libraries and frameworks for integrating AI models and APIs (LLMs, transformers, hosted services) into PHP applications. Includes SDKs for specific AI providers, general AI platforms, and PHP-native ML toolkits. Does NOT include language-agnostic ML research, non-PHP frameworks, or task-specific transformers (e.g., music generation, PII redaction).
12 models
Direct Preference Optimization
12 models
PII Redaction Anonymization
Tools for detecting, masking, and removing personally identifiable information (PII) from text, images, audio, and structured data to ensure privacy compliance. Does NOT include general data encryption, access control, or non-PII data sanitization.
11 models
LLM Framework Abstractions
11 models
Code Completion Copilots
IDE extensions and local LLM-powered tools that provide real-time code suggestions, inline completions, and AI-assisted coding within editors. Does NOT include general code generation, debugging tools, or non-editor-based development assistants.
11 models
LLM Docker Deployments
11 models
Indic Language Translation
Tools for translating between Indian languages (Hindi, Telugu, Angika, etc.) and English, or between Indic languages themselves, with emphasis on low-resource and code-mixed language support. Does NOT include general machine translation, non-Indic language pairs, or transliteration-only solutions.
11 models
LLM Recommendation Systems
11 models
Clinical Text Classification
Tools for classifying clinical text, medical records, and healthcare documents using transformer models to predict diseases, conditions, adverse events, and medical codes. Does NOT include general medical Q&A, drug interaction databases, or non-classification clinical NLP tasks.
11 models
GPT Model Fine-tuning
10 models
Spam Detection Transformers
Fine-tuned transformer models for detecting spam in text messages, emails, and comments across various platforms. Does NOT include general text classification, content moderation beyond spam, or non-transformer-based spam filtering approaches.
10 models
LLM Robot Planning
9 models
YouTube Video Summarization
Tools for extracting, transcribing, and summarizing YouTube video content using NLP and LLMs. Includes transcript extraction, multi-language support, and Q&A over video content. Does NOT include general text summarization, music generation, or sponsorship detection unrelated to summarization.
9 models
Apple Silicon LLM Inference
8 models
Code Model Training
8 models
KV Cache Optimization
7 models
LLM Bias Evaluation
7 models
Mistral AI Tools
6 models
Safety Robustness Evaluation
6 models
GPT Multilingual Training
6 models
Clinical LLM Tools
6 models
Chain-of-Thought Reasoning
6 models
LLM Evaluation Benchmarking
6 models
Vision Transformer Optimization
6 models
LLM Knowledge Graph Generation
6 models
Jailbreak Attacks Analysis
6 models
LLM Orchestration Platforms
5 models
Mixup Augmentation Frameworks
5 models
BERT Model Frameworks
5 models
Compositional Reasoning Embeddings
5 models
GPT Implementation Tutorials
4 models
Rust LLM Infrastructure
4 models
LLM Function Calling
4 models
Text Classification
4 models
AI Music Generation
4 models
LLM Data Labeling
3 models
Multimodal RAG Systems
3 models
Prompt Engineering Techniques
3 models
LLM Serialization Formats
3 models
ML Inference Benchmarking
3 models
State Space Model Architectures
3 models
NLP Learning Resources
3 models
LLM Translation Tools
3 models
Competitive Agent Games
3 models
Julia ML Frameworks
3 models
LLM Agent Training Gyms
3 models
ChatGPT API Tutorials
3 models
Protein Design LLMs
3 models
CLIP Vision-Language
3 models
LLM Fine-tuning Optimization
3 models
Multimodal Visual Grounding
3 models
Synthetic Data Generation
3 models
Structured Output Enforcement
3 models
AI-Generated Text Detection
3 models
Explainability Interpretability Frameworks
2 models
LLM Fine-tuning Frameworks
2 models
NLP Fundamentals Tutorials
2 models
Distributed Training Frameworks
2 models
Text Tokenization Libraries
2 models
Graph Neural Networks
2 models
GPT-2 Language Models
2 models
Text Summarization Tools
2 models
End-to-End ASR Frameworks
2 models
Semantic Segmentation Techniques
2 models
Protein Language Models
2 models
LLM Chat Interfaces
2 models
Agent Memory Systems
2 models
Transformer Implementation Education
2 models
Image Caption Generation
2 models
Neural Data Compression
2 models
Rust Agent Frameworks
2 models
Langchain Integration Patterns
2 models
Ollama Chat Interfaces
2 models
AI Stock Analysis
2 models
Defect Detection Quality Forensics
2 models
Generative AI Learning
2 models
Variational Autoencoders NLP
2 models
LLM Chatbot Interfaces
2 models
Vulnerability Detection LLM
2 models
LLM Thesis Research
2 models
JAX ML Frameworks
2 models
Trajectory Prediction ML
2 models
ML Benchmarking Frameworks
2 models
Peptide Property Prediction
2 models
Knowledge Distillation Compression
2 models
Model Fine-tuning Methods
2 models
Task-Oriented Dialogue Systems
2 models
LLM Request Routing
2 models
LLM Pentest Automation
2 models
Image Captioning Tools
2 models
Hybrid Retrieval Optimization
2 models
Video Editing Diffusion
1 model
Uncategorized
1 model
Financial AI Agents
1 model
Content-Based Recommendation
1 model
AI Image Generation Platforms
1 model
Computer Vision Learning
1 model
Lightweight Training Utilities
1 model
LLM Orchestration Routing
1 model
Loss Function Implementations
1 model
ChatGLM Fine-tuning
1 model
Machine Translation Systems
1 model
Agent Memory Infrastructure
1 model
Speech AI Coursework
1 model
Chatbot NLP Frameworks
1 model
Time Series Forecasting
1 model
Energy Sector Forecasting
1 model
Local Voice Assistants
1 model
Character Motion Animation
1 model
Session Context Memory
1 model
AI-Powered Search Engines
1 model
AI Presentation Generation
1 model
Feature Selection Frameworks
1 model
iOS NLP Frameworks
1 model
Sign Language Recognition
1 model
Compositional T2I Generation
1 model
Legal Document Analysis
1 model
Lottery Number Prediction
1 model
Kaggle Competition Solutions
1 model
Speaker Diarization Embedding
1 model
Text Translation Tools
1 model
Generative AI Learning Projects
1 model
RNA Structure Learning
1 model
NLP Education Courses
1 model
LoRA Training Tools
1 model
MCP Demo Examples
1 model
Causal Inference NLP
1 model
Self-Supervised Learning
1 model
Black-Box Optimization
1 model
Healthcare AI Diagnostics
1 model
AI Video Generation
1 model
Multi-Agent Debate Systems
1 model
Generative AI Platforms
1 model
Qwen LLM Ecosystem
1 model
LLM Provider SDKs
1 model
Ollama Go Clients
1 model
Go ML Bindings
1 model
Medical Image Segmentation
1 model
Game-Playing Agents
1 model
LLM Evaluation Frameworks
1 model
Adversarial NLP Robustness
1 model
World Models Frameworks
1 model
Paper Implementation Collections
1 model
Fact-Checking Systems
1 model
Memory-Augmented Architectures
1 model
PDF QA Systems
1 model
Spiking Neural Networks
1 model
Speech Synthesis Diffusion
1 model
Image Generation MCP
1 model
Hugging Face Tutorials
1 model
Music Similarity Embeddings
1 model
Domain Adaptation Frameworks
1 model
NanoGPT Variants
1 model
Model Compression Optimization
1 model
Text-to-Speech Frameworks
1 model
Text-to-SQL RAG
1 model
Kubernetes LLM Serving
1 model
Multimodal Search Engines
1 model
Prompt Engineering Optimization
1 model
Local RAG Frameworks
1 model
RAG QA Systems
1 model
Hate Speech Content Moderation
1 model
ML Project Portfolios
1 model
Membership Inference Attacks
1 model
ML Project Collections
1 model
Rust ONNX Runtime
1 model
Variational Autoencoder Implementations
1 model
LLM Experimentation Labs
1 model
JavaScript ML Libraries
1 model
Diffusion Web Interfaces
1 model
Edge Device ML Frameworks
1 model
Reading Comprehension QA
1 model
Keyword Speech Recognition
1 model
Stable Diffusion Tools
1 model
Advanced Summarization Methods
1 model
Mental Health Chatbots
1 model
YouTube Transcript Summarization
1 model