ralph-orchestrator and opencode-ralph-rlm
ralph-orchestrator appears to be a standalone, improved implementation of the Ralph Wiggum technique, while opencode-ralph-rlm is a plugin that applies a similar "Ralph" outer loop inside an iterative AI development workflow. The latter can be read as a specialized application of the principles the former implements.
About ralph-orchestrator
mikeyobrien/ralph-orchestrator
An improved implementation of the Ralph Wiggum technique for autonomous AI agent orchestration
Implements a hat-based persona system with backpressure gates (tests, lint, typecheck) that coordinate through events, supporting multiple LLM backends (Claude, Gemini, Copilot CLI) and persistent memories. Runs as a Rust RPC API with web dashboard, MCP server over stdio, or CLI; includes human-in-the-loop via Telegram for agent questions and proactive guidance during orchestration loops.
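The "backpressure gate" idea above can be sketched as an outer loop that only terminates when every gate (tests, lint, typecheck) passes. This is a minimal Python illustration of the pattern, not the project's actual Rust code; the function and gate names are hypothetical:

```python
def orchestrate(agent_step, gates, max_iters=10):
    """Ralph-style outer loop with backpressure gates.

    agent_step(feedback) stands in for one LLM invocation by the active
    persona ("hat"); gates maps a name (e.g. "tests", "lint", "typecheck")
    to a zero-argument callable returning True on pass. Failing gate names
    are fed back to the agent on the next turn. All names are illustrative.
    """
    feedback = []
    for _ in range(max_iters):
        agent_step(feedback)  # one unit of agent work, guided by failures
        feedback = [name for name, check in gates.items() if not check()]
        if not feedback:
            return True       # all gates green: work is accepted
    return False              # budget exhausted with gates still failing
```

In the real system the gates would shell out to the project's test runner, linter, and type checker; here they are callables so the loop's control flow is visible in isolation.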
About opencode-ralph-rlm
doeixd/opencode-ralph-rlm
OpenCode plugin: Ralph outer loop + RLM inner loop — iterative AI development with file-first discipline and sub-agent support
**Technical Summary:** Implements a two-tier AI coding loop in which a strategist session (Ralph) supervises fresh worker sessions (RLM) that each load state from files rather than inheriting a noisy context window, so every attempt starts from accumulated learnings without prior-turn noise. The architecture treats files as the persistent memory primitive—`PLAN.md`, `RLM_INSTRUCTIONS.md`, and `NOTES_AND_LEARNINGS.md` carry strategy and discoveries forward across disposable context windows, while `rlm_grep`/`rlm_slice` enforce surgical file access. Integrates as an OpenCode plugin and relies on a `verify.command` exit condition (tests, typechecks, linters) as the single source of truth for loop termination.
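The two-tier loop can be sketched as follows. This is a hedged Python illustration of the described architecture, not the plugin's code (the plugin itself runs inside OpenCode); `spawn_worker` and `ralph_loop` are hypothetical names, while the state-file names and the `verify.command` exit condition come from the summary above:

```python
import subprocess
from pathlib import Path

# File names taken from the summary above; files are the persistent memory.
STATE_FILES = ["PLAN.md", "RLM_INSTRUCTIONS.md", "NOTES_AND_LEARNINGS.md"]

def load_state(root: Path) -> dict:
    """Each fresh worker rebuilds context from files, not from prior turns."""
    return {f: (root / f).read_text() for f in STATE_FILES if (root / f).exists()}

def verify(command: str) -> bool:
    """The verify command's exit status is the single source of truth."""
    return subprocess.run(command, shell=True).returncode == 0

def ralph_loop(root: Path, spawn_worker, verify_command: str, max_iters: int = 10) -> bool:
    """Outer (Ralph) strategist loop over disposable inner (RLM) workers."""
    for _ in range(max_iters):
        state = load_state(root)   # surgical context, no prior-turn noise
        spawn_worker(state)        # stand-in for launching a fresh session;
                                   # the worker edits files and records learnings
        if verify(verify_command):
            return True            # tests/typechecks/linters passed: done
    return False
```

The design choice worth noting is that the loop never passes conversation history between iterations: everything a worker needs to know must have been written to a file, which is what makes each context window disposable.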