Transformer-Implementations and vision-transformer-from-scratch

Transformer-Implementations
Maintenance 0/25
Adoption 8/25
Maturity 25/25
Community 19/25
Stars: 69
Forks: 18
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: MIT
Status: Stale 6m, No Dependents

vision-transformer-from-scratch
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 20/25
Stars: 241
Forks: 41
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: MIT
Status: Stale 6m, No Package, No Dependents

About Transformer-Implementations

UdbhavPrasad072300/Transformer-Implementations

Library - Vanilla, ViT, DeiT, BERT, GPT

This project provides pre-built transformer models (vanilla Transformer, BERT, GPT, and the Vision Transformers ViT and DeiT) so that machine learning engineers and researchers can apply these architectures without implementing them from scratch. It takes raw data as input, text for language tasks or image tensors for computer vision, and outputs trained models or predictions. The ideal user is a machine learning practitioner who wants to apply state-of-the-art transformer architectures with minimal setup.

natural-language-processing computer-vision image-classification language-translation deep-learning-research
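All of the architectures listed above (vanilla Transformer, ViT, DeiT, BERT, GPT) are built around the same core operation, scaled dot-product attention. A minimal sketch in PyTorch, illustrative only and not the repository's own API:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V"""
    d_k = q.size(-1)
    # Similarity of each query to every key, scaled to stabilize gradients
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    # Each row becomes a probability distribution over positions
    weights = torch.softmax(scores, dim=-1)
    # Weighted average of the value vectors
    return weights @ v

# Self-attention over a batch of 2 sequences, 5 tokens, dimension 16
q = k = v = torch.randn(2, 5, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 16])
```

The same function serves encoder-style models (BERT, ViT) and, with a causal mask added to `scores`, decoder-style models (GPT).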

About vision-transformer-from-scratch

tintn/vision-transformer-from-scratch

A Simplified PyTorch Implementation of Vision Transformer (ViT)

This project provides a clear, simplified example of how a Vision Transformer (ViT) is constructed and trained for image classification. It takes an image dataset as input and outputs a trained model that classifies images into predefined categories. It is aimed at machine learning researchers and students who want to understand the inner workings of ViT models.

deep-learning-education computer-vision-research image-classification-learning transformer-models academic-code-examples
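As an illustration of what a from-scratch ViT involves, a generic sketch (not the repository's actual code): the image is cut into fixed-size patches, each patch is linearly embedded, a learnable class token and position embeddings are added, and a standard Transformer encoder followed by a linear head produces class logits.

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal Vision Transformer sketch; all sizes are illustrative."""
    def __init__(self, image_size=32, patch_size=8, dim=64,
                 depth=2, heads=4, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # A strided convolution both cuts the image into patches
        # and linearly embeds each patch into `dim` channels
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x)           # (B, dim, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])         # classify from the class token

model = MiniViT()
logits = model(torch.randn(2, 3, 32, 32))  # batch of 2 RGB 32x32 images
print(logits.shape)  # torch.Size([2, 10])
```

Training then reduces to the usual classification loop: cross-entropy loss on the logits against integer labels, optimized with Adam or SGD.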

Scores are updated daily from GitHub, PyPI, and npm data.