ksm26/Evaluating-AI-Agents

A hands-on repository for the Evaluating AI Agents course, created with Arize AI. It teaches you to systematically evaluate, debug, and improve AI agents using observability tools, structured experiments, and reliable metrics, covering production-grade techniques for improving agent performance during development and after deployment.

Score: 16 / 100 (Experimental)

No commits in the last 6 months.

No License · Stale 6m · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 1 / 25
Maturity: 1 / 25
Community: 12 / 25


Stars: 1
Forks: 1
Language: Jupyter Notebook
License: None
Last pushed: May 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ksm26/Evaluating-AI-Agents"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
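The endpoint above presumably returns the fields of this card as JSON. A minimal Python sketch of consuming such a payload, assuming a hypothetical response shape (the field names below are illustrative, not the API's documented schema):

```python
import json

# Hypothetical payload mirroring the card above; the real API's field
# names are an assumption and may differ.
sample = json.loads("""
{
  "score": 16,
  "label": "Experimental",
  "breakdown": {"maintenance": 2, "adoption": 1, "maturity": 1, "community": 12},
  "stars": 1,
  "forks": 1,
  "language": "Jupyter Notebook",
  "last_pushed": "2025-05-12"
}
""")

def summarize(data: dict) -> str:
    """Render the quality payload as a one-line summary."""
    parts = ", ".join(f"{k}: {v}/25" for k, v in data["breakdown"].items())
    return f'{data["score"]}/100 ({data["label"]}) - {parts}'

print(summarize(sample))
# e.g. "16/100 (Experimental) - maintenance: 2/25, adoption: 1/25, ..."
```

To fetch the live data, pipe the curl output into a script like this instead of the hard-coded sample.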