jpmorganchase/inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.
No commits in the last 6 months. Available on PyPI.
Stars: 57
Forks: 16
Language: Python
License: Apache-2.0
Last pushed: Apr 07, 2025
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/jpmorganchase/inference-server"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
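The same endpoint can also be called from Python's standard library instead of curl. A minimal sketch, assuming only the URL shown above; the `build_url` and `fetch_quality` helpers are illustrative, not part of any published client:

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the repository's quality data and parse it as JSON.

    No response shape is assumed here; the caller gets whatever the API returns.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live HTTP request, subject to the daily rate limit):
# data = fetch_quality("jpmorganchase", "inference-server")
```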
Related tools
combust/mleap
MLeap: Deploy ML Pipelines to Production
ml-tooling/opyrator
🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more.
ebhy/budgetml
Deploy a ML inference service on a budget in less than 10 lines of code.
SocAIty/APIPod
Create web APIs for long-running tasks with job-based task handling. Get the result with the job id...
tanujjain/deploy-ml-model
Deploying a simple machine learning model to an AWS ec2 instance using flask and docker.