shikhartuli/cnn_txf_bias
[CogSci'21] Study of human inductive biases in CNNs and Transformers.
Compares CNNs and Vision Transformers against human visual perception using error consistency analysis on augmented ImageNet and shape/texture bias tests via Stylized ImageNet. Implements fine-tuned models in TensorFlow 2.4 to evaluate whether self-attention mechanisms produce more human-aligned classification errors than convolutional architectures. Includes pre-trained models and Jupyter notebooks for reproducible analysis of confusion matrices and visual bias patterns.
No commits in the last 6 months.
Stars
43
Forks
3
Language
Jupyter Notebook
License
—
Category
—
Last pushed
May 18, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shikhartuli/cnn_txf_bias"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
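The curl command above can also be called from Python. A minimal sketch, assuming only the endpoint path shown in the curl example; the JSON response schema is not documented here, so the payload is returned as-is rather than parsed into fields:

```python
# Sketch: fetch repo quality data from the API shown above.
# The URL pattern is taken from the curl example; the response
# schema is an unknown, so we just decode and return the JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(
        quality_url(category, owner, repo), timeout=10
    ) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(fetch_quality("transformers", "shikhartuli", "cnn_txf_bias"))
```

Note that `fetch_quality` makes a live request and will raise `URLError` if the service is unreachable or rate-limited.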
Higher-rated alternatives
zalkikar/mlm-bias
Measuring Biases in Masked Language Models for PyTorch Transformers. Support for multiple social...
ejurasek00/Hashing_LLM_Debiasing
Repository consisting of the files used in the experiments + brief description of the experiments.
koudounasalkis/CLUES
This repo contains the code for "A Contrastive Learning Approach to Mitigate Bias in Speech...
edsonpro9891/robust-nli-analysis
🔍 Detect biases in NLP models with robust analysis, enhancing dataset integrity and achieving...
JuaniLlaberia/news-bias-detection-research
Random splits let bias detection models memorize publishers instead of learning ideology. We...