evaluation-guidebook and LLMEvaluation
About evaluation-guidebook
huggingface/evaluation-guidebook
Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard and designing lighteval!
This guide helps you verify that a large language model (LLM) performs as expected for your specific use case. Starting from your LLM and task requirements, it provides frameworks, methodologies, and practical tips for assessing model quality. It is useful for anyone working with LLMs, from researchers to hobbyists, who needs to ensure their models are reliable and effective.
About LLMEvaluation
alopatenko/LLMEvaluation
A comprehensive guide to LLM evaluation methods, designed to help identify the most suitable evaluation techniques for various use cases, promote best practices in LLM assessment, and critically examine the effectiveness of these methods.
This compendium helps academics and industry professionals evaluate Large Language Models (LLMs) and their applications effectively. Given an LLM or LLM-based system, it helps build a comprehensive understanding of its performance, limitations, and suitability for specific tasks. Anyone responsible for deploying or assessing AI models in their organization, such as AI product managers, research scientists, or data scientists, will find it useful.