awesome-openclaw-usecases-zh and awesome-openclaw-examples
These are complementary resources that serve different audiences: the Chinese-language collection curates best practices and real-world scenarios for OpenClaw users, while the examples repository provides executable automation use cases with concrete ClawHub skill implementations, prompts, and measurable outputs for hands-on learning and reuse.
About awesome-openclaw-usecases-zh
AlexAnys/awesome-openclaw-usecases-zh
🇨🇳 A comprehensive Chinese-language collection of OpenClaw (personal AI agent) best use cases | 40 real-world scenarios (China-specific originals plus international ones adapted to the domestic ecosystem): office automation, content creation, server operations, personal assistance, knowledge management | beginner friendly | Chinese guide for OpenClaw AI agent use cases
Technical Summary
Curates 42+ verified OpenClaw AI agent use cases with localized implementations for Chinese ecosystems (Feishu, DingTalk, WeChat Work, AKShare), organizing patterns across automation, content creation, DevOps, and productivity. The collection pairs community-adapted international prompts with original domestic workflows, structured with standardized templates covering pain points, capabilities, required skills, and setup instructions with copy-paste prompts. Emphasizes practical orchestration patterns: multi-agent coordination via STATE.yaml, sub-agent parallelization, cron-based heartbeats, and stateless webhook delegation to n8n, enabling both no-code (prompt copying) and code-first architectures for persistent 24/7 autonomous execution.
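The STATE.yaml coordination pattern mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the repository: the file layout, field names, and `claim_next_task` helper are all assumptions about how a shared state file lets sub-agents divide work without duplicating it.

```python
# Illustrative sketch of multi-agent coordination through a shared
# STATE.yaml file. Field names ("tasks", "status", "owner") are
# assumptions, not taken from the repository's templates.
# A parsed STATE.yaml might yield a structure like this:
state = {
    "tasks": [
        {"id": "digest-news", "status": "pending", "owner": None},
        {"id": "backup-notes", "status": "pending", "owner": None},
    ],
}

def claim_next_task(state, agent_name):
    """Claim the first pending task so parallel sub-agents
    don't pick up the same work item twice."""
    for task in state["tasks"]:
        if task["status"] == "pending":
            task["status"] = "in_progress"
            task["owner"] = agent_name
            return task["id"]
    return None  # nothing left to do
```

In practice each agent would re-read and rewrite the file between claims (e.g. on a cron heartbeat); the in-memory dict here stands in for that round-trip.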
About awesome-openclaw-examples
OthmaneBlial/awesome-openclaw-examples
Awesome OpenClaw examples: 100 tested, real-world automation use cases built with ClawHub skills, runnable scripts, prompts, KPIs, and sample outputs.
Covers 100 production-ready automation workflows organized by functional team (engineering, support, marketing, finance, etc.), each with executable scripts, sample outputs, security considerations, and KPI templates rather than abstract descriptions. Built on composable ClawHub skills that users can inspect and audit before deployment, emphasizing transparency and verifiable quality through maintainer-tested examples and before/after output comparisons. The collection prioritizes repeat automation patterns that map directly to existing business processes—PR triage, cost monitoring, document processing, escalation routing—rather than generic AI demonstrations.
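A PR-triage rule of the kind the collection packages as a skill can be sketched as a small routing function. This is a minimal illustration under stated assumptions: the thresholds, path prefixes, and label names are invented for the example, not taken from the repository.

```python
# Hypothetical PR-triage rule: route a pull request to a review queue
# by the paths it touches and its overall size. Labels and thresholds
# are illustrative assumptions.
def triage_pr(changed_files, additions, deletions):
    """Return a review label for a pull request."""
    # Infrastructure or CI changes always get a specialist look.
    if any(f.startswith(("infra/", ".github/")) for f in changed_files):
        return "needs-infra-review"
    size = additions + deletions
    if size > 500:
        return "large-change"
    if size <= 20:
        return "quick-review"
    return "standard-review"
```

A real skill would pull `changed_files` and the line counts from the hosting platform's API and apply the label there; the routing logic itself stays this simple.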