ollama-benchmark and llm-optimizer-benchmark
About ollama-benchmark
aidatatools/ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
This tool helps you quickly understand the real-world performance of your local Large Language Models (LLMs) running via Ollama. It runs inference against your existing local setup and reports throughput as a clear tokens-per-second metric. AI/ML practitioners, researchers, and anyone experimenting with local LLMs can use it to compare models and hardware configurations.
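As a sketch of where such a tokens-per-second figure comes from: Ollama's `/api/generate` endpoint reports `eval_count` (tokens generated) and `eval_duration` (generation time in nanoseconds) in its response, from which throughput is a simple ratio. The helper below is an illustration of that calculation, not the tool's actual API.

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Throughput from Ollama /api/generate response fields:
    eval_count  -- number of tokens generated
    eval_duration_ns -- generation time in nanoseconds."""
    return eval_count / (eval_duration_ns / 1e9)

# Example: 256 tokens generated in 4 seconds -> 64.0 tokens/s.
print(tokens_per_second(256, 4_000_000_000))
```

The same two fields can be read from any Ollama response JSON, so the metric is comparable across models and machines.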
About llm-optimizer-benchmark
epfml/llm-optimizer-benchmark
Benchmarking Optimizers for LLM Pretraining
This project offers a standardized way to compare different optimization techniques used in training Large Language Models (LLMs). It takes various optimizer configurations, model sizes, and training durations as input and produces benchmark results showing which optimizer performs best under specific conditions. LLM researchers and practitioners would use this to inform their choice of optimization methods for pretraining LLMs.
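The core idea of such a benchmark can be illustrated on a toy problem: run the same "training" task under several optimizer configurations and rank them by final loss. The code below is a minimal, hypothetical sketch using a scalar quadratic objective in place of LLM pretraining; the function names and hyperparameters are assumptions for illustration only.

```python
def run(optimizer_step, steps=100, x0=0.0):
    """Minimize f(x) = (x - 3)^2 with a given update rule and
    return the final loss; a stand-in for one pretraining run."""
    x, state = x0, {}
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)  # df/dx
        x = optimizer_step(x, grad, state)
    return (x - 3.0) ** 2

def sgd(x, grad, state, lr=0.1):
    # Plain gradient descent step.
    return x - lr * grad

def sgd_momentum(x, grad, state, lr=0.1, beta=0.9):
    # Heavy-ball momentum: accumulate a velocity term.
    state["v"] = beta * state.get("v", 0.0) + grad
    return x - lr * state["v"]

# Benchmark: same task, different optimizers, ranked by final loss.
results = {"sgd": run(sgd), "sgd+momentum": run(sgd_momentum)}
best = min(results, key=results.get)
```

Which optimizer "wins" depends on the objective, step budget, and hyperparameters, which is exactly why a standardized comparison across model sizes and training durations is useful.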