fulviomascara/llamav2_local

A GenAI project using a local Llama 2 model, ChatPDF-style

Overall score: 36/100 (Emerging)

Implements a retrieval-augmented generation (RAG) pipeline using LangChain for semantic document search, ChromaDB for vector storage of PDF embeddings, and Llama 2-Chat (7B to 70B parameters) running locally for inference, with no external API calls required. Provides a Jupyter notebook workflow that generates sentence embeddings, stores them in a vector database, and surfaces a Gradio web UI for multi-turn conversational interaction with uploaded PDFs.
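The retrieve-then-generate flow described above can be sketched in plain Python. This is a toy stand-in, not the repo's code: bag-of-words cosine similarity substitutes for the sentence-embedding model and ChromaDB search, and the prompt-building step stands in for handing context to a local Llama 2-Chat model. All function names here are illustrative.

```python
# Toy sketch of a RAG pipeline: embed chunks, retrieve by similarity,
# build a context-grounded prompt for the (local) language model.
# The real repo uses LangChain + ChromaDB + Llama 2; this is a stand-in.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in for a sentence-embedding model: bag-of-words token counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Stand-in for a ChromaDB similarity search over PDF text chunks."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Stand-in for the prompt a local Llama 2-Chat model would receive."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


chunks = [
    "Llama 2 is a family of open-weight language models.",
    "ChromaDB stores embeddings for similarity search.",
    "Gradio builds simple web UIs for ML demos.",
]
prompt = build_prompt("What stores embeddings?",
                      retrieve("What stores embeddings?", chunks, k=1))
print(prompt)
```

In the actual project, `embed` would be a sentence-embedding model, `retrieve` a ChromaDB query, and the prompt would be sent to Llama 2-Chat behind the Gradio chat UI.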

No commits in the last 6 months.

No license · Stale (6 months) · No published package · No dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 20 / 25


Stars: 61
Forks: 25
Language: Jupyter Notebook
License: none
Last pushed: Aug 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/fulviomascara/llamav2_local"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.