jenkinsci/Enhancing-LLM-with-Jenkins-Knowledge

🚀 This project aims to build an application around an existing open-source LLM, fine-tuned locally on data collected for domain-specific Jenkins knowledge and exposed through a proper UI for the user to interact with.

Score: 42 / 100 (Emerging)

Implements a full-stack architecture with a Flask backend serving Llama2 for Jenkins-specific queries and a Vite frontend for user interaction. Supports local fine-tuning via LoRA on Google Colab, model quantization to GGML format for CPU inference, and deployment of custom models to Hugging Face, reducing model size from 13GB to 6.7GB through Q8_0 quantization.
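The reported drop from 13 GB to 6.7 GB is consistent with Q8_0's storage layout. As a back-of-envelope sketch (assuming fp16 weights at 2 bytes each before quantization, and GGML's Q8_0 blocks of 32 int8 weights plus one fp16 scale, i.e. 34 bytes per 32 weights):

```python
# Rough size check for the Q8_0 quantization mentioned above.
# Assumptions: the 13 GB source model stores weights as fp16 (2 bytes each),
# and Q8_0 packs 32 weights into 34 bytes (32 int8 values + 1 fp16 scale).

def q8_0_size_bytes(n_params: float) -> float:
    """Approximate GGML Q8_0 size: 34 bytes per 32-weight block."""
    return n_params * 34 / 32

fp16_size = 13 * 1024**3           # the 13 GB fp16 model from the description
n_params = fp16_size / 2           # ~7B parameters at 2 bytes each
q8_size_gb = q8_0_size_bytes(n_params) / 1024**3

print(f"{q8_size_gb:.1f} GB")      # ~6.9 GB, close to the reported 6.7 GB
```

The small gap to the reported 6.7 GB is plausibly file-format overhead and rounding in the published numbers.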

No package · No dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 9 / 25
Community 17 / 25


Stars: 15
Forks: 11
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/jenkinsci/Enhancing-LLM-with-Jenkins-Knowledge"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
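The same endpoint can be called from Python. A minimal sketch, assuming the response is JSON; the `score` field name is a guess, since the response schema is not documented on this page:

```python
# Fetching the quality data in Python instead of curl.
# The URL is the one shown above; the "score" field is an assumed
# (hypothetical) key, to be adjusted to the actual response schema.
import json
from urllib.request import urlopen

URL = ("https://pt-edge.onrender.com/api/v1/quality/rag/"
       "jenkinsci/Enhancing-LLM-with-Jenkins-Knowledge")

def overall_score(payload: dict) -> int:
    # Hypothetical field name; falls back to 0 if absent.
    return payload.get("score", 0)

# Canned payload so the parsing can be tried offline:
sample = json.loads('{"score": 42, "tier": "Emerging"}')
print(overall_score(sample))  # 42

# Live call (counts against the 100 requests/day limit):
# with urlopen(URL) as resp:
#     print(overall_score(json.load(resp)))
```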