playwithllm/store

A RAG-driven image product search app showcasing the MERN stack, Milvus for vector indexing, Transformers-based embeddings, and a locally hosted Gemma LLM via Ollama. It generates multimodal embeddings, stores the vectors in Milvus, and augments user queries with the LLM's language understanding.

Score: 23 / 100 (Experimental)

Implements a complete RAG pipeline that combines visual and semantic search using Transformers-based multimodal embeddings stored in Milvus. Node.js/Express REST APIs bridge the React frontend to Ollama's Gemma for query augmentation: the LLM interprets natural-language queries and refines results, while semantic similarity matching handles product retrieval. The architecture is designed for extensibility, with pluggable embedding models and data sources, so different vector indexing strategies and language models can be tried quickly without external API dependencies.
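To make the retrieval step concrete, here is a minimal sketch of semantic similarity matching: ranking stored product vectors against a query embedding by cosine similarity. In the actual project Milvus performs this search server-side; the names and three-dimensional vectors below are illustrative only.

```typescript
// Illustrative sketch: rank product embeddings by cosine similarity to a
// query embedding. Milvus does this at scale in the real pipeline.

interface ProductVector {
  id: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(query: number[], products: ProductVector[], k: number): ProductVector[] {
  return [...products]
    .sort(
      (p, q) =>
        cosineSimilarity(query, q.embedding) - cosineSimilarity(query, p.embedding)
    )
    .slice(0, k);
}

// Toy example: vectors pointing in nearly the same direction rank highest.
const products: ProductVector[] = [
  { id: "red-shoe", embedding: [1, 0, 0] },
  { id: "blue-shoe", embedding: [0, 1, 0] },
  { id: "red-bag", embedding: [0.9, 0.1, 0] },
];

const results = topK([1, 0, 0], products, 2);
console.log(results.map((p) => p.id)); // ["red-shoe", "red-bag"]
```

In the real pipeline the query vector would come from the same embedding model used at index time, and the LLM would rewrite or expand the query text before embedding.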

No commits in the last 6 months.

No license · Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 1 / 25
Community 16 / 25


Stars: 22
Forks: 7
Language: TypeScript
License: None
Last pushed: Apr 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/playwithllm/store"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
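For callers who prefer TypeScript to curl, a small helper can build the endpoint URL for any repository. Only the base URL and path shape are taken from the curl example above; the function name and the generalization to other owners/repos are assumptions.

```typescript
// Hypothetical helper: build the quality-API URL for a given category,
// owner, and repository. The path shape follows the curl example:
//   https://pt-edge.onrender.com/api/v1/quality/<category>/<owner>/<repo>

const BASE = "https://pt-edge.onrender.com/api/v1/quality";

function qualityUrl(category: string, owner: string, repo: string): string {
  return [BASE, category, owner, repo]
    .map((part, i) => (i === 0 ? part : encodeURIComponent(part)))
    .join("/");
}

console.log(qualityUrl("vector-db", "playwithllm", "store"));
// https://pt-edge.onrender.com/api/v1/quality/vector-db/playwithllm/store

// Fetching the data (requires network access, so commented out here):
// const data = await fetch(qualityUrl("vector-db", "playwithllm", "store"))
//   .then((res) => res.json());
```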