Transformer-Implementations and vision-transformer-from-scratch
About Transformer-Implementations
UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
This project offers pre-built transformer models, including BERT, GPT, and Vision Transformers (ViT, DeiT), so that machine learning engineers and researchers can apply these architectures without implementing them from scratch. It takes raw data, such as text for language tasks or image tensors for computer vision, and outputs trained models or predictions. The ideal user is a machine learning practitioner looking to apply state-of-the-art transformer architectures.
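All of the architectures listed (Vanilla, ViT, DeiT, BERT, GPT) share the same core building block: scaled dot-product attention. The repository's actual API is not shown here, so as an illustration only, here is a minimal NumPy sketch of that shared mechanism:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(QK^T / sqrt(d)) V, the attention core
    common to BERT, GPT, and ViT-style transformers."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Toy self-attention: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, weights = scaled_dot_product_attention(x, x, x)
```

Each row of `weights` is a probability distribution over the input tokens, so the output is a weighted mixture of the value vectors; the library's models stack many such layers with learned projections.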
About vision-transformer-from-scratch
tintn/vision-transformer-from-scratch
A Simplified PyTorch Implementation of Vision Transformer (ViT)
This project provides a clear and straightforward example of how a Vision Transformer (ViT) model is constructed and trained for image classification. It takes image datasets as input and outputs a trained model capable of classifying images into predefined categories. This is ideal for machine learning researchers or students who want to understand the inner workings of ViT models.
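The defining step of a ViT is turning an image tensor into a sequence of patch tokens that a standard transformer encoder can consume. This repository's own code is not reproduced here; as a rough sketch of that patchification step (before the learned linear projection), assuming a channels-first `(C, H, W)` image:

```python
import numpy as np

def patchify(image, patch_size):
    """Split a (C, H, W) image into flattened, non-overlapping
    patches, yielding a (num_patches, C * p * p) token sequence."""
    c, h, w = image.shape
    p = patch_size
    assert h % p == 0 and w % p == 0, "image dims must divide evenly"
    # (C, H/p, p, W/p, p) -> (H/p, W/p, C, p, p) -> (N, C*p*p)
    patches = image.reshape(c, h // p, p, w // p, p)
    patches = patches.transpose(1, 3, 0, 2, 4)
    return patches.reshape(-1, c * p * p)

# A 3x32x32 image with 16x16 patches gives 4 tokens of size 768
img = np.arange(3 * 32 * 32, dtype=np.float32).reshape(3, 32, 32)
tokens = patchify(img, 16)  # shape (4, 768)
```

In a full ViT, each flattened patch is then linearly projected to the model dimension, a class token and positional embeddings are added, and the sequence is fed through a transformer encoder for classification.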
Scores updated daily from GitHub, PyPI, and npm data.