Awesome-LLM-Reasoning and Awesome-LLM-reasoning-papers
The two repositories complement each other: atfortes/Awesome-LLM-Reasoning curates practical reasoning techniques, tools, and frameworks, while Oznake/Awesome-LLM-reasoning-papers compiles the underlying academic papers and benchmarks that inform those implementations. Researchers and practitioners can use both together to trace LLM reasoning from theory to application.
About Awesome-LLM-Reasoning
atfortes/Awesome-LLM-Reasoning
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
A curated collection of papers and resources covering reasoning techniques in LLMs and multimodal systems, from foundational prompting methods to advanced inference-time scaling approaches. Organizes research by technique categories—including unimodal reasoning, multimodal reasoning, and scaling smaller models—alongside analysis studies on CoT faithfulness, token bias, and reasoning stability. Complements a companion benchmarking project (LLMSymbolicReasoningBench) for empirically testing symbolic reasoning capabilities.
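To make the foundational technique concrete, here is a minimal sketch of few-shot Chain-of-Thought prompting, the starting point of the collection. The prompt format is a generic illustration (the exemplar is the classic tennis-ball word problem from the CoT literature), not code from either repository, and the actual model call is omitted:

```python
# Minimal sketch of few-shot Chain-of-Thought (CoT) prompting:
# each exemplar shows intermediate reasoning before the final answer,
# steering the model toward step-by-step solutions on the new question.

COT_EXEMPLARS = [
    {
        "question": ("Roger has 5 tennis balls. He buys 2 more cans of "
                     "tennis balls. Each can has 3 tennis balls. "
                     "How many tennis balls does he have now?"),
        "reasoning": ("Roger started with 5 balls. 2 cans of 3 tennis "
                      "balls each is 6 tennis balls. 5 + 6 = 11."),
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot CoT prompt; the trailing cue elicits reasoning."""
    parts = [
        f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}."
        for ex in COT_EXEMPLARS
    ]
    # The unanswered question plus a reasoning cue invites step-by-step output.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. They used 20 to make lunch and "
    "bought 6 more. How many apples do they have?"
)
print(prompt)
```

The assembled string would then be sent to any chat or completion endpoint; inference-time scaling methods cataloged later in the list build on this same idea by sampling or searching over multiple such reasoning traces.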
About Awesome-LLM-reasoning-papers
Oznake/Awesome-LLM-reasoning-papers
This repository offers a well-organized collection of resources focused on reasoning in Large Language Models (LLMs). Explore foundational papers, evaluation benchmarks, and practical tools to enhance your understanding of LLM reasoning. 🐙🌐