lopezleandro03/LLMs-as-a-Service

Deploy LLMs using Azure Model-as-a-Service (MaaS) and Terraform

Score: 12 / 100 (Experimental)

This project helps application developers and platform engineers integrate large language models such as Mistral and Llama 2 into their applications using Azure's Model-as-a-Service offering. It turns your infrastructure-as-code definitions into fully deployed, managed LLM endpoints that are ready for API calls, without requiring you to set up or maintain the underlying infrastructure. It is aimed at teams building intelligent applications that need programmatic access to generative AI capabilities.

No commits in the last 6 months.

Use this if your organization uses an 'everything-as-code' approach for deploying cloud resources and you need to provision Azure AI Studio's Model-as-a-Service for large language models programmatically.

Not ideal if you prefer to deploy and manage cloud resources manually through the Azure Portal or Azure AI Studio wizards, or if you are not using Terraform for infrastructure provisioning.
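To illustrate the 'everything-as-code' approach this project targets, here is a hedged sketch of provisioning a Model-as-a-Service serverless endpoint with Terraform. This is not the repo's actual module interface: the `azapi` resource type, API version, endpoint name, and `modelId` format are all assumptions; consult the repository's modules for the real definitions.

```hcl
# Hypothetical sketch: deploy an Azure MaaS serverless endpoint via the
# azapi provider, attached to an existing Azure ML / AI Studio workspace.
# Resource type, api-version, and modelId are assumptions, not taken from
# this repository.
resource "azapi_resource" "maas_endpoint" {
  type      = "Microsoft.MachineLearningServices/workspaces/serverlessEndpoints@2024-04-01"
  name      = "mistral-small-endpoint"                       # hypothetical name
  parent_id = azurerm_machine_learning_workspace.project.id  # assumed workspace resource
  location  = "eastus2"

  body = jsonencode({
    properties = {
      # Model reference from an Azure ML registry (format is an assumption)
      modelSettings = {
        modelId = "azureml://registries/azureml-mistral/models/Mistral-small"
      }
      authMode = "Key"
    }
  })
}
```

After `terraform apply`, the endpoint's scoring URI and keys can be read back from the resource and passed to the application, keeping the whole lifecycle in code rather than in the Azure Portal.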

cloud-resource-provisioning generative-ai-deployment intelligent-application-development platform-engineering infrastructure-as-code
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 7
Forks:
Language: HCL
License: None
Last pushed: Mar 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/lopezleandro03/LLMs-as-a-Service"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.