nourdesoukizz/Reasoning-Rationalizing

We investigate whether models can maintain correct reasoning when exposed to incorrect hints, and whether the timing of that exposure (before vs. after the question) affects their robustness to misinformation.

Score: 11 / 100 (Experimental)
No License · No Package · No Dependents

Maintenance: 10 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25


Language: Jupyter Notebook
License: none
Last pushed: Feb 28, 2026
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/nourdesoukizz/Reasoning-Rationalizing"

The endpoint is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000/day.