x-transformers vs. self-attention-cv

                 x-transformers    self-attention-cv
Overall score    79 (Verified)     47 (Emerging)
Maintenance      20/25             0/25
Adoption         15/25             10/25
Maturity         25/25             16/25
Community        19/25             21/25
Stars            5,808             1,215
Forks            507               153
Downloads
Commits (30d)    8                 0
Language         Python            Python
License          MIT               MIT
Risk flags       None              Stale (6 mo), no package, no dependents

About x-transformers

lucidrains/x-transformers

A concise but complete full-attention transformer with a set of promising experimental features from various papers

This project provides flexible, pre-built transformer architectures that can be configured for tasks such as text generation, image classification, and image captioning, on text, image, or combined inputs. It's aimed at AI researchers and practitioners who want to experiment with advanced attention variants from recent papers without implementing them from scratch.
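To ground what the library packages up, here is a minimal sketch of the core operation of a full-attention decoder: single-head causal scaled dot-product self-attention, where each token attends only to itself and earlier tokens. This is plain NumPy, not x-transformers' own API; the weight matrices and dimensions are made up for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (seq, dim) token matrix."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (seq, seq) similarities
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                        # block attention to future tokens
    return softmax(scores) @ v                    # weighted mix of values

rng = np.random.default_rng(0)
seq, dim = 4, 8
x = rng.standard_normal((seq, dim))               # toy token embeddings
W = [rng.standard_normal((dim, dim)) for _ in range(3)]
out = causal_self_attention(x, *W)
print(out.shape)  # (4, 8)
```

Because of the causal mask, the first position can only attend to itself, so its output is exactly its own value projection. x-transformers layers this basic operation with multiple heads, residual connections, and the experimental variants from the papers it cites.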

natural-language-processing computer-vision multimodal-ai generative-ai machine-learning-research

About self-attention-cv

The-AI-Summer/self-attention-cv

Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.

This library offers ready-to-use self-attention modules as building blocks for computer vision models. Computer vision researchers and deep learning practitioners can compose them into custom architectures for analyzing visual data, such as novel image classification and segmentation networks.
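As a hedged illustration of how self-attention is applied to images (not this library's own API), the sketch below splits an image into non-overlapping patches, treats each patch as a token, and runs single-head self-attention over them, ViT-style. Patch size, projection dimension, and weights are arbitrary choices for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_self_attention(img, patch=4, dim=16, seed=0):
    """Flatten an (h, w, c) image into patch tokens and attend over them."""
    h, w, c = img.shape
    # (h//p, p, w//p, p, c) -> (num_patches, p*p*c) token matrix
    tokens = img.reshape(h // patch, patch, w // patch, patch, c) \
                .transpose(0, 2, 1, 3, 4) \
                .reshape(-1, patch * patch * c)
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((tokens.shape[1], dim)) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(dim))   # every patch attends to every patch
    return attn @ v                          # (num_patches, dim)

img = np.random.default_rng(1).random((8, 8, 3))  # toy 8x8 RGB image
out = patch_self_attention(img)                   # 2x2 grid of 4x4 patches
print(out.shape)  # (4, 16)
```

Unlike the causal variant used for text, vision self-attention is typically unmasked: every patch can attend to every other patch, which is what lets these blocks capture long-range spatial dependencies in classification and segmentation models.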

image-classification image-segmentation computer-vision deep-learning-research neural-network-architecture

Scores are updated daily from GitHub, PyPI, and npm data.