llama-cpp-agent and llm-axe

These are competitors: both provide frameworks for building LLM applications with function-calling capabilities. llama-cpp-agent is more mature and feature-complete, with structured-output support and significantly higher adoption, while llm-axe offers a simpler alternative approach.

|               | llama-cpp-agent | llm-axe |
|---------------|-----------------|---------|
| Score         | 79 (Verified)   | 44 (Emerging) |
| Maintenance   | 16/25           | 0/25 |
| Adoption      | 19/25           | 10/25 |
| Maturity      | 25/25           | 16/25 |
| Community     | 19/25           | 18/25 |
| Stars         | 620             | 275 |
| Forks         | 69              | 39 |
| Downloads     | 8,620           | |
| Commits (30d) | 1               | 0 |
| Language      | Python          | Python |
| License       |                 | MIT |
| Risk flags    | None            | Stale 6m, No Package, No Dependents |

About llama-cpp-agent

Maximilian-Winter/llama-cpp-agent

The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs): it lets users chat with LLM models, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output and function calls.

Leverages guided sampling with JSON schema grammars to constrain model outputs, enabling function calling and structured output even on models not fine-tuned for these tasks. Integrates with multiple inference backends including llama.cpp, TGI, and vLLM servers, and supports agentic workflows through conversational, sequential, and mapping chain patterns with tool integration from Pydantic, llama-index, and OpenAI schemas.
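The core idea behind this grammar-constrained approach is deriving a JSON schema from a tool definition (e.g. a Pydantic model) and holding the model's output to it. A minimal sketch of that idea using plain Pydantic only; this is not llama-cpp-agent's actual API, and the `GetWeather` tool is a hypothetical example:

```python
from pydantic import BaseModel

# Hypothetical tool definition; any Pydantic model yields a JSON schema
# that a grammar-constrained sampler can use to restrict token choices.
class GetWeather(BaseModel):
    city: str
    unit: str = "celsius"

# The schema a framework of this kind would compile into a grammar.
schema = GetWeather.model_json_schema()
print(sorted(schema["properties"]))  # fields the model must emit

# Simulated constrained output: validating the raw text guarantees a
# well-formed call even from a model not fine-tuned for function calling.
raw = '{"city": "Oslo", "unit": "celsius"}'
call = GetWeather.model_validate_json(raw)
print(call.city)
```

In a real backend the schema is compiled into a sampling grammar (e.g. llama.cpp's GBNF) so the model can only emit tokens that keep the output valid; the validation step above stands in for that constraint.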

About llm-axe

emirsahin1/llm-axe

A simple, intuitive toolkit for quickly implementing LLM-powered applications.

Scores updated daily from GitHub, PyPI, and npm data.