Inference

Submitted by valya on

Here we discuss how to handle the inference step of your ML pipeline.

A simple recipe for serving a PyTorch model with the Python Flask framework can be found here. If you're looking for a complete solution, the TFaaS framework provides a web server that can serve any TensorFlow model and exposes REST APIs to access and manage your models.
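To illustrate the Flask approach, here is a minimal sketch of a prediction endpoint. The model, route name, and JSON payload shape are illustrative assumptions, not taken from the recipe linked above; in a real deployment you would load your trained weights instead of the placeholder layer used here.

```python
# Minimal sketch: serving a PyTorch model behind a Flask endpoint.
# All names (/predict route, "inputs" key) are illustrative assumptions.
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder model; in practice load trained weights, e.g.
# model.load_state_dict(torch.load("model.pt"))
model = nn.Linear(4, 2)
model.eval()  # put the model in inference mode (disables dropout, etc.)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"inputs": [[f1, f2, f3, f4], ...]}
    data = request.get_json(force=True)
    x = torch.tensor(data["inputs"], dtype=torch.float32)
    with torch.no_grad():  # no gradient tracking needed for inference
        y = model(x)
    return jsonify({"predictions": y.tolist()})

# To serve locally: app.run(host="0.0.0.0", port=8083)
```

Keeping the model in a module-level variable means it is loaded once at startup rather than on every request, which is the main thing that separates a usable serving endpoint from a toy one.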