llama-cpp-agent and llm-axe
These projects are competitors: both provide frameworks for building LLM applications with function-calling capabilities. llama-cpp-agent is more mature and feature-complete, with structured-output support and significantly higher adoption, while llm-axe offers a simpler alternative approach.
About llama-cpp-agent
Maximilian-Winter/llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It lets users chat with LLMs, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output or function calling.
It leverages guided sampling with JSON-schema grammars to constrain model outputs, enabling function calling and structured output even from models not fine-tuned for these tasks. It integrates with multiple inference backends, including llama.cpp, TGI, and vLLM servers, and supports agentic workflows through conversational, sequential, and mapping chain patterns, with tool definitions drawn from Pydantic models, llama-index tools, and OpenAI function schemas.
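The core idea behind schema-constrained function calling can be sketched without the framework itself: a JSON schema defines the only shape the model is allowed to emit, and the caller parses and validates the output before dispatching the tool. The sketch below is a stdlib-only illustration of that pattern, not llama-cpp-agent's actual API; the names `get_weather`, `call_tool`, and the minimal `validate` helper are hypothetical.

```python
import json

# Illustrative sketch of grammar/schema-constrained tool calling.
# All names here are hypothetical, not llama-cpp-agent's API.

TOOL_SCHEMA = {
    "type": "object",
    "required": ["tool", "arguments"],
    "properties": {
        "tool": {"type": "string"},
        "arguments": {"type": "object"},
    },
}

def validate(payload, schema):
    """Minimal JSON-schema check: top-level type and required keys only."""
    if schema.get("type") == "object":
        if not isinstance(payload, dict):
            return False
        return all(key in payload for key in schema.get("required", []))
    return True

def get_weather(city: str) -> str:
    # Stand-in tool implementation.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def call_tool(raw_model_output: str):
    """Parse the model's constrained JSON output and dispatch the tool."""
    payload = json.loads(raw_model_output)
    if not validate(payload, TOOL_SCHEMA):
        raise ValueError("model output does not match the tool-call schema")
    func = TOOLS[payload["tool"]]
    return func(**payload["arguments"])

# With grammar-constrained sampling, the model can only emit JSON of this
# shape, so the parse-and-dispatch step is reliable:
raw = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'
print(call_tool(raw))  # Sunny in Berlin
```

In the real framework, the constraint is enforced during sampling (via a grammar derived from the schema) rather than by post-hoc validation, which is what makes the approach work on models never fine-tuned for JSON output.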
About llm-axe
emirsahin1/llm-axe
A simple, intuitive toolkit for quickly implementing LLM powered applications.