StableDiffusion and ADI-Stable-Diffusion
These are ecosystem siblings: StableDiffusion provides a C# wrapper around ONNX Runtime-based Stable Diffusion inference, while ADI-Stable-Diffusion offers a lower-level C/C++ framework that could serve as the accelerated backend for similar inference tasks across platforms.
About StableDiffusion
cassiebreviu/StableDiffusion
Inference Stable Diffusion with C# and ONNX Runtime
Implements the full Stable Diffusion pipeline—text encoding via CLIP, iterative denoising with UNet and scheduler algorithms, and VAE decoding—all optimized for GPU inference through ONNX Runtime's execution providers (CUDA or DirectML). Supports multiple model versions from Hugging Face and targets Windows development environments with Visual Studio or VS Code integration.
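The heart of the pipeline above is the iterative denoising loop, where a scheduler repeatedly moves the latent toward lower noise using the UNet's noise prediction. A minimal sketch of one common scheduler (an Euler step under the epsilon-prediction parameterization) in plain Python; the `toy_unet` stand-in is hypothetical and exists only so the loop runs without a real model:

```python
import math

def euler_step(x, sigma, sigma_next, eps):
    """One Euler scheduler step: move the latent x from noise level
    sigma toward sigma_next along the predicted noise direction eps."""
    # Denoised estimate under epsilon-prediction: x0 = x - sigma * eps.
    denoised = [xi - sigma * ei for xi, ei in zip(x, eps)]
    # Derivative d = (x - x0) / sigma, then an explicit Euler update.
    d = [(xi - di) / sigma for xi, di in zip(x, denoised)]
    return [xi + (sigma_next - sigma) * di for xi, di in zip(x, d)]

def toy_unet(x, sigma):
    """Hypothetical stand-in for the UNet noise predictor: returns a
    'noise' proportional to the normalized latent, just for illustration."""
    norm = math.sqrt(sum(v * v for v in x)) or 1.0
    return [sigma * v / norm for v in x]

# A short sigma schedule, from high noise down to zero.
sigmas = [14.6, 7.0, 3.0, 1.0, 0.0]
latent = [1.0, -2.0, 0.5, 3.0]  # toy 4-dimensional "latent"
for s, s_next in zip(sigmas, sigmas[1:]):
    latent = euler_step(latent, s, s_next, toy_unet(latent, s))
```

In the real pipeline, `toy_unet` is replaced by an ONNX Runtime session evaluating the UNet on the GPU, and the latent is a tensor that the VAE decoder turns into an image after the final step.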
About ADI-Stable-Diffusion
Windsander/ADI-Stable-Diffusion
Accelerate your Stable Diffusion inference with the library's universal C/C++ framework design, powered by ONNX Runtime, across platforms.
Provides both a C++ library and CLI tool that decouples inference from the Stable Diffusion framework by converting models to ONNX format, enabling flexible hardware acceleration (CUDA, TensorRT, CoreML, NNAPI) through configurable ONNXRuntime providers. Supports multiple inference modes including text-to-image and image-to-image with fine-grained control over scheduler algorithms, noise prediction strategies, and tokenization methods. Built with cross-platform deployment in mind via automated build scripts for macOS, Windows, Linux, and Android with configurable compile options for package size optimization.
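The "configurable providers" idea above amounts to ordering the execution providers a deployment target actually supports, with CPU as the final fallback. A minimal sketch, assuming only the real ONNX Runtime provider identifiers; the preference list and target sets are hypothetical examples, not ADI-Stable-Diffusion's actual configuration:

```python
def pick_providers(preferred, available):
    """Keep the preferred execution providers that are available on this
    target, in preference order, always appending CPU as a fallback."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Hypothetical targets and the providers each might report as available.
desktop = {"CUDAExecutionProvider", "TensorrtExecutionProvider",
           "CPUExecutionProvider"}
android = {"NnapiExecutionProvider", "CPUExecutionProvider"}

# One preference order covering the accelerators named above.
prefs = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
         "CoreMLExecutionProvider", "NnapiExecutionProvider"]

desktop_choice = pick_providers(prefs, desktop)
android_choice = pick_providers(prefs, android)
```

The resulting list is what a wrapper would hand to the runtime when creating a session (for example, the `providers=` argument of `onnxruntime.InferenceSession` in the Python API, or the corresponding session options in C/C++).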