ShishirPatil/gorilla
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
Provides fine-tuned models and the Berkeley Function Calling Leaderboard for benchmarking LLMs on API invocation across 1,600+ real-world APIs, using retrieval-augmented training to improve accuracy. Includes GoEx, a runtime engine that safely executes LLM-generated API calls and code through post-facto validation and damage confinement, and OpenFunctions-V2, which supports parallel function calls and multiple data types. Goes beyond single-turn scenarios with multi-turn agentic benchmarks covering web search, memory management, and error recovery.
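The function-calling scenarios the leaderboard evaluates generally follow the JSON-schema tool-definition convention popularized by OpenAI-style tool-calling APIs. The snippet below is an illustrative sketch of such a definition, not an excerpt from Gorilla itself; the function name and parameters are invented for the example.

```python
# Illustrative only: a hypothetical tool definition in the JSON-schema
# style that function-calling benchmarks commonly consume. Every name
# here (get_weather, city, unit) is made up for this sketch.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# A model under evaluation would be expected to emit a structured call,
# e.g. {"name": "get_weather", "arguments": {"city": "Berkeley"}},
# which the harness then checks against the schema above.
print(get_weather_tool["name"])
# → get_weather
```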
12,759 stars. Actively maintained with 2 commits in the last 30 days.
Stars: 12,759
Forks: 1,334
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ShishirPatil/gorilla"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
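For scripted access, the per-repository endpoint URL can be built from the GitHub owner and repo names. This is a minimal sketch that only constructs the URL shown in the curl example above; it assumes the path layout `owner/repo` and does not make a network call.

```python
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (assumed layout)."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

print(quality_url("ShishirPatil", "gorilla"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/ShishirPatil/gorilla
```

From there, any HTTP client (curl, `urllib.request`, `requests`) can fetch the URL; the shape of the JSON response is not documented here, so parsing it is left to the reader.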
Related tools
Maximilian-Winter/llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models...
mozilla-ai/any-llm
Communicate with an LLM provider using a single interface
CliDyn/climsight
A next-generation climate information system that uses large language models (LLMs) alongside...
rizerphe/local-llm-function-calling
A tool for generating function arguments and choosing what function to call with local LLMs
OoriData/OgbujiPT
Client-side toolkit for using large language models, including where self-hosted