kddubey/cappr
Completion After Prompt Probability. Make your LLM make a choice
Computes conditional log-probabilities of completions given prompts, enabling classification tasks by ranking candidate answers. Supports both local models (via llama-cpp and Hugging Face transformers) and integrates prompt-prefix caching to optimize repeated computations across similar prompts. Provides batch processing, prior incorporation, and token-level probability aggregation for applications like DPO training and chain-of-thought answer extraction.
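The core idea described above can be sketched without the library itself: score each candidate completion by summing the model's token-level log-probabilities given the prompt, then pick the highest-scoring one. The sketch below stubs out the language model with a hard-coded next-token distribution (this stub and all function names are illustrative, not cappr's actual API):

```python
def next_token_logprobs(context):
    # Stub standing in for a real LM call (via Hugging Face transformers
    # or llama-cpp); returns a log-probability per token over a tiny
    # hypothetical vocabulary. Hard-coded for illustration only.
    return {"yes": -0.5, "no": -1.5, ".": -1.0}

def completion_logprob(prompt, completion_tokens):
    """Aggregate token-level log-probs: log P(completion | prompt)."""
    total, context = 0.0, prompt
    for tok in completion_tokens:
        total += next_token_logprobs(context)[tok]
        context += " " + tok  # condition the next step on tokens so far
    return total

def rank(prompt, candidates):
    """Rank candidate completions by conditional log-probability."""
    scores = {" ".join(c): completion_logprob(prompt, c) for c in candidates}
    return max(scores, key=scores.get), scores

best, scores = rank("Is the sky blue?", [["yes", "."], ["no", "."]])
print(best)  # "yes ." wins: (-0.5 + -1.0) > (-1.5 + -1.0)
```

A real implementation batches these forward passes and caches the prompt prefix so the shared context is only encoded once per group of candidates.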
Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Stars: 82
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 02, 2024
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/kddubey/cappr"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Higher-rated alternatives
linshenkx/prompt-optimizer
A prompt optimizer that helps you write high-quality prompts
Undertone0809/promptulate
🚀Lightweight Large language model automation and Autonomous Language Agents development...
CTLab-ITMO/CoolPrompt
Automatic Prompt Optimization Framework
microsoft/sammo
A library for prompt engineering and optimization (SAMMO = Structure-aware Multi-Objective...
Eladlev/AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration