mickymultani/Streaming-LLM-Chat

Interactive chat application leveraging OpenAI's GPT-4 for real-time conversation simulations. Built with Flask, this project showcases streaming LLM responses in a user-friendly web interface.

Quality score: 24 / 100 (Experimental)

Implements server-sent events (SSE) for token-by-token streaming of GPT-4 responses directly to the browser, so output appears as it is generated rather than after the full completion finishes. The Flask backend manages the OpenAI API integration with environment-based credential handling, and automatic browser launching simplifies the developer experience. Requires Python 3.10+. Since SSE is one-directional (server to client), client messages travel over ordinary HTTP requests while responses stream back as events.
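The streaming flow described above can be sketched with Flask and the OpenAI Python SDK (v1+). This is a minimal illustration, not the repo's actual code: the `/chat` route name, the `sse_format` helper, and the request shape are assumptions.

```python
# Minimal sketch of token-by-token SSE streaming with Flask.
# Route name, helper names, and payload shape are assumptions,
# not the repo's actual implementation.
from flask import Flask, Response, request

app = Flask(__name__)

def sse_format(token: str) -> str:
    """Wrap one token in the server-sent events wire format."""
    return f"data: {token}\n\n"

def stream_completion(prompt: str):
    """Yield GPT-4 tokens as SSE events (needs OPENAI_API_KEY set)."""
    from openai import OpenAI  # openai>=1.0; imported lazily
    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # ask the API for incremental chunks
    )
    for chunk in stream:
        token = chunk.choices[0].delta.content
        if token:  # skip empty/role-only deltas
            yield sse_format(token)

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json().get("message", "")
    # Streaming the generator keeps the connection open per token.
    return Response(stream_completion(prompt), mimetype="text/event-stream")
```

On the client side, the browser reads this stream (e.g. via `fetch` with a `ReadableStream`) and appends each `data:` payload to the chat window as it arrives.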

No commits in the last 6 months.

No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 1 / 25
Community 16 / 25


Stars: 25
Forks: 7
Language: Python
License: None
Last pushed: Apr 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/mickymultani/Streaming-LLM-Chat"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
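The same report can be fetched from Python with only the standard library. This is an illustrative sketch; the structure of the JSON response is an assumption, so the result is returned as a plain `dict`.

```python
# Fetch a quality report as JSON using only the standard library.
# The response's field names are assumptions, so no fields are parsed here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the report URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_report(owner: str, repo: str) -> dict:
    """Download and decode the quality report (subject to the daily limit)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_report("mickymultani", "Streaming-LLM-Chat")` requests the same URL as the curl command above.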