gmum/beta-CFG

This paper presents β-CFG, a dynamic guidance method for text-to-image diffusion models. Unlike standard classifier-free guidance (CFG), which uses a fixed guidance scale, β-CFG adapts the guidance strength over the sampling trajectory using a β-distribution-shaped schedule. This improves image quality, keeps sampling closer to the data manifold, and achieves lower (better) FID while maintaining prompt alignment.
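As a rough illustration of the idea, the sketch below replaces the constant CFG weight with a weight that follows a β-distribution-shaped curve over the sampling steps. The function names, the parameters a, b, and w_max, and the normalization are assumptions made for illustration, not the paper's actual schedule or settings.

# Minimal sketch of time-varying classifier-free guidance with a beta-shaped
# schedule. The parameters a, b, and w_max are illustrative placeholders,
# not values from the paper.
from scipy.stats import beta as beta_dist

def guidance_scale(t, num_steps, a=2.0, b=2.0, w_max=7.5):
    """Guidance weight at step t, shaped like a Beta(a, b) density over
    normalized diffusion time and rescaled so its peak equals w_max."""
    s = (t + 0.5) / num_steps                      # normalized time, strictly inside (0, 1)
    pdf = beta_dist.pdf(s, a, b)
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else 0.5
    return w_max * pdf / beta_dist.pdf(mode, a, b)

def guided_noise(eps_uncond, eps_cond, t, num_steps):
    """Standard CFG combination, but with a time-dependent scale w(t)
    instead of a single fixed guidance scale."""
    w = guidance_scale(t, num_steps)
    return eps_uncond + w * (eps_cond - eps_uncond)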

Score: 13 / 100 (Experimental)

When creating images from text descriptions with AI, there is a common trade-off: either the image looks high quality but doesn't quite match your prompt, or it matches the prompt closely but looks less appealing. This method tunes that balance by dynamically adjusting how strictly the model follows your text prompt, producing higher-quality images that stay faithful to the original description. It is aimed at AI artists, designers, and anyone generating visual content from text.

No commits in the last 6 months.

Use this if you are generating images from text prompts and want to improve the overall quality of the generated image while ensuring it still accurately represents your text description.

Not ideal if you need a tool for basic image editing, photo manipulation, or generating images without any text input.

Tags: AI-art, text-to-image-generation, digital-design, content-creation, generative-AI
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 10
Forks:
Language: Python
License: None
Last pushed: Mar 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/gmum/beta-CFG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
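For scripted access, a minimal Python sketch of the same request is shown below. It assumes the endpoint returns JSON; no response schema is documented here, so the code simply prints the raw payload.

# Minimal sketch of fetching the same data in Python (assumes a JSON response).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/gmum/beta-CFG"
resp = requests.get(url, timeout=10)   # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())                     # a free API key raises the limit to 1,000/day (how to pass it is not documented here)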