skillz and local-skills-mcp

These are complements rather than competitors. skillz provides a standardized MCP interface for skill loading that works across multiple LLM clients, while local-skills-mcp applies the same pattern specifically to filesystem-based skills, with lazy loading to keep context usage low. Used together, they cover different deployment scenarios: packaged or remote skills vs. local, project-specific ones.

|               | skillz           | local-skills-mcp |
|---------------|------------------|------------------|
| Score         | 65 (Established) | 55 (Established) |
| Maintenance   | 10/25            | 10/25            |
| Adoption      | 17/25            | 11/25            |
| Maturity      | 22/25            | 18/25            |
| Community     | 16/25            | 16/25            |
| Stars         | 374              | 20               |
| Forks         | 35               | 6                |
| Downloads     | 1,331            | 140              |
| Commits (30d) | 0                | 0                |
| Language      | Python           | TypeScript       |
| License       | MIT              | MIT              |
| Risk flags    | None             | None             |

About skillz

intellectronica/skillz

An MCP server for loading skills (shim for non-claude clients).

Discovers and exposes Claude-style skills (defined via `SKILL.md` with YAML metadata) as MCP tools for non-Claude agents like Copilot and Cursor, supporting flexible skill packaging as directories, zip archives, or `.skill` files with bundled resources. Uses stdio, HTTP, or SSE transports and can execute helper scripts within each skill, with optional Docker containerization for sandboxed execution. Integrates with the Skills Supermarket directory and provides both Python CLI and Gemini CLI extension interfaces for agent configuration.
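A Claude-style skill is defined by a `SKILL.md` file whose YAML frontmatter carries the metadata the server exposes to clients. A minimal sketch (the skill name, description text, and body below are illustrative, not taken from either project):

```markdown
---
name: changelog-writer
description: Drafts changelog entries from a list of merged pull requests.
---

# Changelog Writer

Given a list of merged PRs, group them under Added / Changed / Fixed
headings and write one concise bullet per PR.
```

A directory containing this file (optionally alongside helper scripts or bundled resources, or packed into a zip or `.skill` archive) is what skillz discovers and surfaces as an MCP tool.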

About local-skills-mcp

kdpa-llc/local-skills-mcp

Universal MCP server enabling any LLM or AI agent to utilize expert skills from your local filesystem. Reduces context consumption through lazy loading. Works with Claude, Cline, and any MCP-compatible client.

Implements stdio transport with automatic skill discovery across multiple configurable directories, aggregating skills from built-in sources, global paths, project-specific locations, and custom environment variables. Supports hot reload, so skill updates take effect without a server restart, and loads only each skill's YAML frontmatter metadata (~50 tokens per skill) up front, deferring the full skill content until it is requested on demand. Exposes a single `get_skill` tool to any MCP-compatible client, enabling skill reuse across Claude, Cline, Continue.dev, and custom AI agents regardless of the underlying LLM.

Scores updated daily from GitHub, PyPI, and npm data.