prompt-optimizer vs. SCOPE

SCOPE is a framework for automatic prompt optimization, while prompt-optimizer is a standalone prompt-optimization tool; the two are likely complementary in an ecosystem where a framework like SCOPE orchestrates the use of tools like prompt-optimizer.

Metric           prompt-optimizer            SCOPE
Score            72 (Verified)               38 (Emerging)
Maintenance      25/25                       6/25
Adoption         10/25                       9/25
Maturity         16/25                       13/25
Community        21/25                       10/25
Stars            24,228                      70
Forks            2,893                       6
Downloads        (not listed)                (not listed)
Commits (30d)    81                          0
Language         TypeScript                  Python
License          (not listed)                MIT
Packaging        No package; no dependents   No package; no dependents

About prompt-optimizer

linshenkx/prompt-optimizer

A prompt optimizer that helps you write high-quality prompts.

Supports multi-model LLM backends (OpenAI, Gemini, DeepSeek, etc.) with dual optimization modes for system and user prompts, plus advanced testing via context variables, multi-turn sessions, and function calling. Available as web app, desktop client, Chrome extension, Docker container, and MCP server for Claude Desktop integration—with client-side data processing and optional password protection for secure deployment.
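Of the deployment options listed above, the Docker container is the quickest to try. The sketch below is a hedged illustration only: the image name, port mapping, and `ACCESS_PASSWORD` environment variable are assumptions based on common conventions, not confirmed from prompt-optimizer's documentation.

```shell
# Hypothetical deployment sketch for prompt-optimizer via Docker.
# Assumptions (not verified against the project's docs):
#   - the image is published under the repo name linshenkx/prompt-optimizer
#   - the app listens on port 80 inside the container
#   - ACCESS_PASSWORD enables the optional password protection
docker run -d \
  --name prompt-optimizer \
  -p 8081:80 \
  -e ACCESS_PASSWORD=change-me \
  linshenkx/prompt-optimizer
```

If the assumed names are wrong, the project's README is the authoritative source for the actual image tag and configuration variables.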

About SCOPE

JarvisPei/SCOPE

SCOPE: Self-evolving Context Optimization via Prompt Evolution - A framework for automatic prompt optimization

Learns from agent execution traces using a dual-stream memory system that separates task-specific tactical rules from reusable strategic guidelines, with automatic memory optimization via conflict resolution and subsumption pruning. Integrates with 100+ LLM providers through LiteLLM (OpenAI, Anthropic, etc.) and provides a universal async API for injecting evolved prompts into agent workflows. Features Best-of-N candidate selection, configurable synthesis modes, and customizable prompt templates to adapt SCOPE for specialized agent domains without modifying core code.
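The dual-stream memory and async prompt injection described above can be sketched roughly as follows. This is a minimal toy model, not SCOPE's actual API: every name here (`DualStreamMemory`, `inject`, and so on) is an illustrative stand-in, and the "subsumption" check is reduced to exact-duplicate pruning for brevity.

```python
# Toy sketch of a dual-stream memory with async prompt injection.
# All names are hypothetical; SCOPE's real API may differ entirely.
import asyncio

class DualStreamMemory:
    """Separates task-specific tactical rules from reusable strategic guidelines."""

    def __init__(self):
        self.tactical = {}   # task_id -> list of task-specific rules
        self.strategic = []  # guidelines reusable across tasks

    def add_tactical(self, task_id, rule):
        rules = self.tactical.setdefault(task_id, [])
        if rule not in rules:  # crude stand-in for subsumption pruning
            rules.append(rule)

    def add_strategic(self, guideline):
        if guideline not in self.strategic:
            self.strategic.append(guideline)

    def build_prompt(self, task_id, base_prompt):
        """Compose the evolved prompt: base + strategic + tactical rules."""
        parts = [base_prompt]
        parts += [f"- {g}" for g in self.strategic]
        parts += [f"- {r}" for r in self.tactical.get(task_id, [])]
        return "\n".join(parts)

async def inject(memory, task_id, base_prompt):
    # A real integration would await LLM calls (e.g. via LiteLLM) here;
    # this sketch only composes the evolved prompt.
    return memory.build_prompt(task_id, base_prompt)

memory = DualStreamMemory()
memory.add_strategic("Cite sources for factual claims.")
memory.add_tactical("summarize", "Keep summaries under 100 words.")
prompt = asyncio.run(inject(memory, "summarize", "You are a summarization agent."))
print(prompt)
```

The design point this mirrors is the separation of concerns: strategic guidelines apply to every task, while tactical rules are scoped to one `task_id`, so evolving one stream never pollutes the other.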

Scores updated daily from GitHub, PyPI, and npm data.