LaxmanNandi/MCH-Research
Conservation law for LLM context sensitivity: ΔRCI × Var_Ratio ≈ K(domain). Seven-paper program across 14 models, 8 vendors, 112,500 responses. Medical + philosophy domains. Five papers published on Preprints.org. Safety taxonomy, entanglement theory, stochastic incompleteness.
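The conservation law in the description can be illustrated with a minimal sketch. The values below are hypothetical, chosen only to show what "the product is roughly constant per domain" means; they are not measurements from the papers.

```python
# Minimal illustration of the stated conservation law:
#     delta_RCI * var_ratio ≈ K(domain)
# All numbers here are hypothetical, for illustration only.

def conserved_k(delta_rci: float, var_ratio: float) -> float:
    """Product the law claims is roughly constant within a domain."""
    return delta_rci * var_ratio

# Two hypothetical models measured in the same domain: a larger context
# sensitivity (delta_RCI) pairs with a smaller variance ratio, and
# vice versa, so the products come out close.
k_model_a = conserved_k(0.30, 2.0)
k_model_b = conserved_k(0.20, 3.0)
print(k_model_a, k_model_b)  # similar values support the law
```

If the law holds, plotting ΔRCI against Var_Ratio for many models in one domain should trace a hyperbola with constant product K.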
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/LaxmanNandi/MCH-Research"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
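The curl command above can also be issued from Python. This is a minimal sketch: the endpoint URL and rate limits come from this page, but the shape of the JSON response is an assumption, not a documented schema.

```python
# Sketch of calling the quality endpoint shown above.
# Assumption: the endpoint returns a JSON object; its field names
# are not documented here.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON body (100 requests/day without a key)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (performs a network request when called):
#     data = fetch_quality("LaxmanNandi", "MCH-Research")
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header vs. query parameter) is not specified on this page.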
Higher-rated alternatives
PaddlePaddle/PaddleNLP
Easy-to-use and powerful LLM and SLM library with an awesome model zoo.
meta-llama/llama-cookbook
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started...
arcee-ai/mergekit
Tools for merging pretrained large language models.
changyeyu/LLM-RL-Visualized
🌟 100+ original LLM/RL principle diagrams 📚, a major contribution from the author of 《大模型算法》 (Large Model Algorithms)! 💥 (100+ LLM/RL Algorithm Maps)
mindspore-lab/step_into_llm
MindSpore online courses: Step into LLM