StarlightSearch/EmbedAnything

Highly Performant, Modular, Memory-Safe, and Production-Ready Inference, Ingestion, and Indexing Built in Rust 🦀

Score: 61 / 100 (Established)

Supports multimodal ingestion (PDFs, images, audio) with pluggable vector database adapters and multiple embedding backends including Candle, ONNX, and cloud models. Uses Rust's memory-safe concurrency and streaming architecture to separate document preprocessing from inference across threads via MPSC channels, eliminating sequential bottlenecks while preventing memory leaks. Offers dense, sparse, and late-interaction embedding strategies with built-in semantic chunking methods for RAG workflows.
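The preprocessing/inference split described above can be sketched with Rust's standard-library MPSC channels. This is a minimal illustration, not EmbedAnything's actual API: the `Chunk` and `Embedding` types, the `embed` stand-in, and `run_pipeline` are all hypothetical, with a real backend calling Candle or ONNX where `embed` does placeholder work.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical types for illustration; not EmbedAnything's actual API.
struct Chunk {
    text: String,
}

struct Embedding {
    vector: Vec<f32>,
}

// Stand-in for model inference (a real backend would call Candle or ONNX here).
fn embed(chunk: &Chunk) -> Embedding {
    Embedding {
        vector: vec![chunk.text.len() as f32],
    }
}

// A preprocessing thread streams chunks over an MPSC channel while the
// receiving side runs inference concurrently, so neither stage waits for
// the other to finish the whole document.
fn run_pipeline(texts: Vec<String>) -> usize {
    let (tx, rx) = mpsc::channel::<Chunk>();

    let producer = thread::spawn(move || {
        for text in texts {
            // Parsing and semantic chunking would happen here before sending.
            tx.send(Chunk { text }).unwrap();
        }
        // tx is dropped when this thread ends, closing the channel and
        // terminating the consumer's receive loop below.
    });

    let mut embeddings = Vec::new();
    for chunk in rx {
        embeddings.push(embed(&chunk));
    }
    producer.join().unwrap();
    embeddings.len()
}

fn main() {
    let docs = vec!["first chunk".into(), "second chunk".into(), "third chunk".into()];
    println!("embedded {} chunks", run_pipeline(docs));
}
```

Because the channel closes automatically when the sender is dropped, the consumer loop ends cleanly without explicit shutdown signaling, which is one way memory-safe concurrency avoids both leaks and sequential bottlenecks.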

1,174 stars. Actively maintained with 3 commits in the last 30 days.

No package published; no dependents.
Maintenance: 16 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 1,174
Forks: 111
Language: Rust
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/StarlightSearch/EmbedAnything"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.