inferablehq/inferable

Build reliable AI workflows and agents with humans in the loop, structured outputs, and durable execution.

Quality score: 60 / 100 (Established)

Executes workflows in your own infrastructure using long polling (no inbound ports required), with automatic LLM output validation, retry logic, and versioning for backward compatibility. Provides human approval gates via Slack or email with full context preservation, plus memoized caching for distributed side-effect deduplication. Node.js, Go, and .NET SDKs are available, and the platform can be self-hosted for complete data control.
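The long-polling model described above can be sketched generically: a worker inside your own network repeatedly makes outbound requests for pending jobs, so no inbound port ever needs to be opened. This is an illustrative pattern only, not Inferable's actual SDK; the `FakeBroker`, `Job`, and `runWorker` names are invented for the example.

```typescript
// Illustrative long-polling sketch (NOT Inferable's real API).
// The worker only ever dials out to the broker, mirroring the
// "no inbound ports required" property described above.

type Job = { id: string; input: number };

// Stand-in for a hosted control plane: hands out queued jobs on request
// and collects results reported back by workers.
class FakeBroker {
  private queue: Job[] = [
    { id: "a", input: 2 },
    { id: "b", input: 3 },
  ];
  results = new Map<string, number>();

  // Worker -> broker: outbound request for the next pending job.
  poll(): Job | undefined {
    return this.queue.shift();
  }

  // Worker -> broker: outbound request reporting a finished job.
  report(id: string, result: number): void {
    this.results.set(id, result);
  }
}

// Runs in your own infrastructure; loops until the broker has no work left.
function runWorker(broker: FakeBroker, handler: (n: number) => number): void {
  for (let job = broker.poll(); job; job = broker.poll()) {
    broker.report(job.id, handler(job.input));
  }
}

const broker = new FakeBroker();
runWorker(broker, (n) => n * n); // square each job's input: a -> 4, b -> 9
console.log([...broker.results.entries()]);
```

A real implementation would block on the poll request until work arrives or a timeout fires (the "long" in long polling), then immediately re-poll; the drain-the-queue loop here keeps the sketch synchronous and self-contained.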

435 stars and 79 monthly downloads. Available on npm.

Maintenance 6 / 25
Adoption 14 / 25
Maturity 25 / 25
Community 15 / 25


Stars: 435
Forks: 35
Language: TypeScript
License: MIT
Category: llm-api-gateways
Last pushed: Nov 24, 2025
Monthly downloads: 79
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/inferablehq/inferable"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.