zhanghang1989/ResNeSt
ResNeSt: Split-Attention Networks
Split-attention modules partition channel groups and apply learned attention weights to aggregate multi-scale representations within ResNet bottleneck blocks. Available in PyTorch and MXNet/Gluon implementations with pretrained weights, ResNeSt integrates directly with Detectron2, MMDetection, and semantic segmentation frameworks (PyTorch Encoding, GluonCV) for transfer learning on downstream tasks like object detection and panoptic segmentation.
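The core idea of the block can be sketched numerically: split the channels into r groups, derive per-split attention logits from global context, normalize them with a softmax across the radix dimension, and sum the splits under those weights. The sketch below is a deliberately simplified NumPy illustration, not the library's implementation: real ResNeSt operates on 4-D feature maps with grouped convolutions and small FC layers producing the logits, whereas here the split means stand in for the logits.

```python
import numpy as np

def split_attention(x, r=2):
    """Minimal sketch of split-attention aggregation.

    x: array of shape (r, C) -- r channel splits of C channels each.
    Hypothetical simplification: split means serve as attention logits;
    the actual module computes them with pooled context + FC layers.
    """
    logits = x.mean(axis=1, keepdims=True)            # (r, 1) per-split logit
    # r-Softmax: normalize across the radix (split) dimension
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn = e / e.sum(axis=0, keepdims=True)           # (r, 1), sums to 1
    # Attention-weighted sum over splits -> aggregated (C,) representation
    return (attn * x).sum(axis=0)

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
out = split_attention(x)   # shape (2,); the second split dominates
```

Because the attention weights sum to 1 across splits, the output stays on the same scale as the inputs, which is what lets the module drop into a ResNet bottleneck without destabilizing training.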
3,264 stars and 11,896 monthly downloads. No commits in the last 6 months. Available on PyPI.
Stars
3,264
Forks
495
Language
Python
License
Apache-2.0
Category
Last pushed
Dec 09, 2022
Monthly downloads
11,896
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zhanghang1989/ResNeSt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
berniwal/swin-transformer-pytorch
Implementation of the Swin Transformer in PyTorch.
Jittor/jittor
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with...
ViTAE-Transformer/ViTPose
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose...
sniklaus/pytorch-pwc
a reimplementation of PWC-Net in PyTorch that matches the official Caffe version