serving and simple_tensorflow_serving

simple_tensorflow_serving is a lightweight, simplified alternative built on top of the same TensorFlow ecosystem as TensorFlow Serving, offering easier setup for straightforward inference scenarios where TensorFlow Serving's high-performance distributed architecture would be overkill.

                 serving                    simple_tensorflow_serving
Score            57 (Established)           51 (Established)
Maintenance      6/25                       0/25
Adoption         10/25                      10/25
Maturity         16/25                      16/25
Community        25/25                      25/25
Stars            6,349                      758
Forks            2,200                      186
Downloads        (not listed)               (not listed)
Commits (30d)    0                          0
Language         C++                        JavaScript
License          Apache-2.0                 Apache-2.0
Notes            No package, no dependents  Stale 6m; no package, no dependents

About serving

tensorflow/serving

A flexible, high-performance serving system for machine learning models

Supports multi-model and multi-version serving with zero-downtime model updates, canary deployments, and A/B testing. Exposes gRPC and REST APIs while featuring a request batching scheduler that groups inference calls for efficient GPU execution with configurable latency bounds. Natively integrates TensorFlow SavedModels but extends to non-TensorFlow models, embeddings, and feature transformations through a modular architecture.
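To make the REST surface concrete, here is a minimal sketch of a predict request against TensorFlow Serving's HTTP API, which accepts a POST to /v1/models/<name>:predict with a JSON "instances" list (default REST port 8501). The model name and input vector below are assumptions for illustration, not part of this comparison.

```python
import json

MODEL = "half_plus_two"  # assumed example model name
URL = f"http://localhost:8501/v1/models/{MODEL}:predict"

# The "instances" key is TensorFlow Serving's row-oriented input format;
# each element is one input example for the model's serving signature.
payload = {"instances": [[1.0, 2.0, 5.0]]}
body = json.dumps(payload)

print(URL)   # endpoint the request would be POSTed to
print(body)  # JSON request body
```

Actually issuing the call would be `requests.post(URL, data=body)` against a running server; the sketch only constructs the request so it can be inspected offline.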

About simple_tensorflow_serving

tobegit3hub/simple_tensorflow_serving

Generic and easy-to-use serving service for machine learning models
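simple_tensorflow_serving exposes models over a plain HTTP endpoint (default port 8500) rather than gRPC. A minimal sketch of building such a request is below; the model name, version, and input keys are assumptions for illustration, since the real "data" schema follows whatever signature the served model defines.

```python
import json

URL = "http://localhost:8500"  # default HTTP port for simple_tensorflow_serving

payload = {
    "model_name": "default",   # assumed model name
    "model_version": 1,        # assumed version
    # Hypothetical inputs; the keys must match the served model's signature.
    "data": {"X": [[1.0, 2.0]]},
}

print(URL)
print(json.dumps(payload))  # JSON request body that would be POSTed
```

Sending it would be `requests.post(URL, json=payload)` against a running instance; the sketch only constructs and prints the request.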

Scores updated daily from GitHub, PyPI, and npm data.