vector-storage and vectorstores
These are **complements**: Vector Storage is a browser-based vector database that persists data locally and uses OpenAI embeddings, while Vectorstores is a framework for integrating multiple vector database backends into AI applications. They could be used together, with Vectorstores abstracting Vector Storage as one available backend.
About vector-storage
nitaiaharoni1/vector-storage
Vector Storage is a vector database that enables semantic similarity search over text documents directly in the browser, persisting data in IndexedDB. It uses OpenAI embeddings to convert documents into vectors and ranks results by cosine similarity.
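The cosine-similarity ranking can be illustrated with a small TypeScript function (a generic sketch of the metric, not Vector Storage's internal code):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|) -- the metric used to
// rank stored document vectors against a query vector.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical vectors score 1, orthogonal vectors score 0, so sorting documents by this score descending yields a nearest-first result list.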
Persists document vectors and metadata in browser IndexedDB with configurable storage limits, using an LRU eviction policy to automatically remove least-accessed documents when capacity is exceeded. Provides metadata filtering on search results and supports batched document ingestion, with optional debouncing for IndexedDB writes. Integrates with OpenAI's embedding API (configurable model selection) and exposes a straightforward JavaScript API designed for client-side semantic search workflows.
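The LRU-style eviction under a storage cap can be sketched as follows; the class and field names are hypothetical illustrations of the policy, not Vector Storage's actual internals:

```typescript
// Hypothetical sketch: evict the least-accessed document when a
// fixed capacity is exceeded, as an LRU-style policy might.
interface StoredDoc {
  id: string;
  text: string;
  hits: number; // access counter, bumped on each search hit
}

class LruDocStore {
  private docs = new Map<string, StoredDoc>();
  constructor(private maxDocs: number) {}

  add(id: string, text: string): void {
    if (this.docs.size >= this.maxDocs) this.evictLeastUsed();
    this.docs.set(id, { id, text, hits: 0 });
  }

  // Record an access, e.g. when a document appears in search results.
  touch(id: string): void {
    const doc = this.docs.get(id);
    if (doc) doc.hits += 1;
  }

  has(id: string): boolean {
    return this.docs.has(id);
  }

  private evictLeastUsed(): void {
    let victim: StoredDoc | undefined;
    for (const doc of this.docs.values()) {
      if (!victim || doc.hits < victim.hits) victim = doc;
    }
    if (victim) this.docs.delete(victim.id);
  }
}
```

The real implementation persists to IndexedDB rather than an in-memory map, but the eviction decision works the same way: the document with the fewest recent accesses is dropped first.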
About vectorstores
marcusschiesser/vectorstores
Vectorstores is a framework for using vector databases in AI applications.
Provides unified data ingestion and retrieval across multiple vector databases with pluggable provider packages, while maintaining compatibility across Node.js, Deno, Bun, and edge runtimes. Built as a lightweight alternative to LlamaIndexTS (77.5kb gzip), it integrates with embedding providers like OpenAI and works seamlessly with the Vercel AI SDK for context engineering workflows.
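The pluggable-provider idea can be sketched as a common interface with interchangeable backends. The interface and class names below are hypothetical illustrations of the pattern, not the actual vectorstores API:

```typescript
// Hypothetical provider contract: any backend (Pinecone, pgvector,
// an in-memory store, ...) exposes the same upsert/query surface.
interface VectorProvider {
  upsert(id: string, vector: number[]): Promise<void>;
  query(vector: number[], topK: number): Promise<{ id: string; score: number }[]>;
}

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// A trivial in-memory backend implementing the contract.
class InMemoryProvider implements VectorProvider {
  private rows = new Map<string, number[]>();

  async upsert(id: string, vector: number[]): Promise<void> {
    this.rows.set(id, vector);
  }

  async query(vector: number[], topK: number) {
    // Score by dot product, assuming unit-normalized vectors.
    const scored = [...this.rows.entries()].map(([id, v]) => ({
      id,
      score: dot(v, vector),
    }));
    return scored.sort((a, b) => b.score - a.score).slice(0, topK);
  }
}
```

Application code written against `VectorProvider` can then swap backends by changing only which provider package it instantiates, which is the abstraction such a framework provides.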