yuhui-zh15/NeQA

Official Code Release for "Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models" (ACL 2023 Findings)

Score: 13 / 100 (Experimental)

This project helps AI researchers understand how negation in questions affects the performance of large language models as they grow in size or training data. Using a specialized dataset of negation-based questions, it reveals scaling trends (inverse, U-shaped, positive) that differ from typical model behavior. The output provides insights into why these models struggle with negation and how different prompting methods and model families affect their ability to handle it. It is designed for researchers studying language model capabilities and scaling laws.

No commits in the last 6 months.

Use this if you are a language model researcher investigating model scaling behavior, particularly concerning how models process and understand negation.

Not ideal if you are a practitioner looking for a tool to directly improve or deploy a language model for a specific application, as this is a research analysis tool.

AI research language model scaling natural language understanding negation analysis model interpretability
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 9
Forks:
Language: Jupyter Notebook
License:
Last pushed: Jun 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/yuhui-zh15/NeQA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
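If you prefer to query the endpoint from Python rather than curl, a minimal sketch is below. It assumes the path pattern `/api/v1/quality/{category}/{owner}/{repo}`, inferred from the single documented example above; treat that pattern as an assumption, not a guaranteed contract.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score API URL for a repository.

    The path layout is inferred from the one documented example
    (nlp/yuhui-zh15/NeQA), so other categories may differ.
    """
    # Percent-encode each path segment defensively.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("nlp", "yuhui-zh15", "NeQA"))
# -> https://pt-edge.onrender.com/api/v1/quality/nlp/yuhui-zh15/NeQA
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen` or the `requests` library); no API key is needed within the 100-requests/day limit.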