We describe TensorFlow-Serving, a system for serving machine learning models inside
Google that is also available in the cloud and as open-source software. It is extremely
flexible in the types of ML platforms it supports, and in the ways it integrates
with systems that convey new models and updated versions from training to serving.
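To make that training-to-serving handoff concrete, the following is a minimal sketch; the names (PollOnce, VersionCallback) are illustrative, not TensorFlow-Serving's actual API. It models a "source" that polls a directory for numerically named subdirectories, one per exported model version, and reports each new version to a callback:

```cpp
// Hypothetical sketch, not the TensorFlow-Serving API: a minimal "source"
// that discovers freshly exported model versions on the filesystem.
#include <cstdint>
#include <filesystem>
#include <functional>
#include <iostream>
#include <set>
#include <string>

namespace fs = std::filesystem;

// Invoked once per newly discovered version directory.
using VersionCallback = std::function<void(int64_t version, const fs::path& dir)>;

// Scans base_dir for version subdirectories not yet in `seen`; a real
// source would run this repeatedly on its own thread.
void PollOnce(const fs::path& base_dir, std::set<int64_t>& seen,
              const VersionCallback& on_new_version) {
  std::error_code ec;
  for (const auto& entry : fs::directory_iterator(base_dir, ec)) {
    if (!entry.is_directory()) continue;
    const std::string name = entry.path().filename().string();
    if (name.empty() || name.find_first_not_of("0123456789") != std::string::npos)
      continue;  // Only numeric directory names denote versions.
    const int64_t version = std::stoll(name);
    if (seen.insert(version).second) on_new_version(version, entry.path());
  }
}

int main() {
  std::set<int64_t> seen;
  PollOnce("/tmp/models/my_model", seen, [](int64_t v, const fs::path& dir) {
    std::cout << "new version " << v << " at " << dir << "\n";
  });
}
```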
At the same time, the core code paths around model lookup and inference have been
carefully optimized to avoid performance pitfalls observed in naive
implementations. The paper covers the architecture of the extensible serving
library, as well as the distributed system for multi-tenant model hosting. Along
the way, it highlights which extensibility points and performance optimizations
have proven especially important in production.
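As a hedged illustration of the kind of lookup-path optimization described above (the class and method names here are mine, not TensorFlow-Serving's), the sketch below shows a read-copy-update style model table: request threads perform a single atomic load of an immutable snapshot, so the hot path takes no locks, and reference counting keeps a model alive until all in-flight requests that hold it finish.

```cpp
// Illustrative sketch of a lock-free read path for model lookup; names are
// hypothetical, not TensorFlow-Serving's.
#include <atomic>
#include <cassert>
#include <map>
#include <memory>
#include <string>

struct Servable {
  std::string model_name;
  // ... loaded weights, session state, etc.
};

class ModelTable {
 public:
  using Snapshot = std::map<std::string, std::shared_ptr<const Servable>>;

  // Request path: one atomic load of the current snapshot, no mutex.
  std::shared_ptr<const Servable> Lookup(const std::string& name) const {
    std::shared_ptr<const Snapshot> snap = std::atomic_load(&snapshot_);
    auto it = snap->find(name);
    return it == snap->end() ? nullptr : it->second;
  }

  // Update path (rare): build a new immutable map, publish it atomically.
  void Publish(Snapshot next) {
    std::atomic_store(&snapshot_,
                      std::make_shared<const Snapshot>(std::move(next)));
  }

 private:
  std::shared_ptr<const Snapshot> snapshot_ =
      std::make_shared<const Snapshot>();
};

int main() {
  ModelTable table;
  ModelTable::Snapshot snap;
  snap["mnist"] = std::make_shared<const Servable>(Servable{"mnist"});
  table.Publish(std::move(snap));

  auto servable = table.Lookup("mnist");
  assert(servable && servable->model_name == "mnist");
}
```

Updates pay the cost of copying the whole map, which is acceptable under the assumption that model loads and unloads are rare relative to inference requests.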