Transformer-Implementations and ViT_PyTorch

| | Transformer-Implementations | ViT_PyTorch |
| --- | --- | --- |
| Overall score | | 25 (Experimental) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 8/25 | 7/25 |
| Maturity | 25/25 | 8/25 |
| Community | 19/25 | 10/25 |
| Stars | 69 | 25 |
| Forks | 18 | 3 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Jupyter Notebook | Python |
| License | MIT | None |
| Flags | Stale 6m, No Dependents | No License, Stale 6m, No Package, No Dependents |

About Transformer-Implementations

UdbhavPrasad072300/Transformer-Implementations

Library - Vanilla, ViT, DeiT, BERT, GPT

This project provides pre-built transformer models, including BERT, GPT, and the Vision Transformers ViT and DeiT, so that machine learning engineers and researchers can apply these architectures without implementing them from scratch. It takes raw data, such as text for language tasks or image tensors for computer vision, and produces trained models or predictions.

natural-language-processing computer-vision image-classification language-translation deep-learning-research
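All of the architectures this library covers (Vanilla, ViT, DeiT, BERT, GPT) are built on the same core operation: scaled dot-product attention. As a minimal sketch of that shared building block, here is a NumPy version; this is illustrative only and does not reflect the repository's actual API or function names.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d_k) arrays.
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)      # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ v                   # (seq_len, d_k) weighted values
```

Every model in the list differs mainly in how it feeds this operation: word embeddings for BERT/GPT, image-patch embeddings for ViT/DeiT.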

About ViT_PyTorch

godofpdog/ViT_PyTorch

This is a simple PyTorch implementation of the Vision Transformer (ViT) described in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale".

This project helps machine learning engineers and researchers quickly set up and train a Vision Transformer (ViT) for image classification. You provide a dataset of labeled images, and it produces a trained model that can classify new images.

image-classification deep-learning computer-vision model-training vision-transformers
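The "16x16 words" in the paper's title refers to the preprocessing step at the heart of ViT: an image is cut into non-overlapping 16x16 patches, each flattened into a vector, so the transformer sees a sequence of patch tokens instead of words. A minimal NumPy sketch of that step (not this repository's code) looks like:

```python
import numpy as np

def image_to_patches(img, patch=16):
    # img: (H, W, C) array; H and W must be divisible by `patch`.
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    # Split into a grid of non-overlapping patches...
    grid = img.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)         # (h/p, w/p, p, p, c)
    # ...then flatten each patch into one token vector.
    return grid.reshape(-1, patch * patch * c)   # (num_patches, patch_dim)
```

For a standard 224x224 RGB input this yields 14x14 = 196 tokens of dimension 16x16x3 = 768, which are then linearly projected, given position embeddings, and passed through a transformer encoder.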

Scores updated daily from GitHub, PyPI, and npm data.