# Model Serving in the PAI Docker Environment

## Contents

1. [Basic environment](#basic-environment)

## Basic environment

First of all, PAI runs all jobs in Docker containers.

Install Docker CE if you have not already. If you do not have a private Docker registry, register an account at Docker Hub, the public Docker registry, so that you can push images to it.
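As a sketch of the install step, Docker's official convenience script can be used on a fresh Ubuntu host (the script URL is Docker's documented one; `curl` availability and root access are assumed):

```shell
# Download and run Docker's official convenience install script
# (for production hosts, prefer the distribution-specific package
# instructions in the Docker documentation)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the installation by running a minimal test container
sudo docker run hello-world
```

If the `hello-world` container prints its greeting, the Docker daemon is installed and running.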

We use TensorFlow model serving as an example; for details on how to serve a TensorFlow model, please refer to its serving tutorial.

You can also jump ahead to Serving a TensorFlow model using pre-built images on Docker Hub.
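For a quick local trial before building PAI-specific images, the pre-built `tensorflow/serving` image from Docker Hub can serve an exported SavedModel directly. A minimal sketch, where `/path/to/my_model` and `my_model` are placeholders for your own exported model, and the request body assumes a model taking a flat list of floats:

```shell
# Serve a SavedModel with the official pre-built image;
# TensorFlow Serving exposes its REST API on port 8501
sudo docker run -p 8501:8501 \
    --mount type=bind,source=/path/to/my_model,target=/models/my_model \
    -e MODEL_NAME=my_model -t tensorflow/serving

# In another terminal, query the model's REST predict endpoint
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
    -X POST http://localhost:8501/v1/models/my_model:predict
```

The bind mount must point at a directory containing numbered version subdirectories (e.g. `/path/to/my_model/1/`), as required by TensorFlow Serving.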

To serve a TensorFlow model on PAI, we need to build a TensorFlow serving image with GPU support. This can be done in two steps:

  1. Build a base Docker image for PAI. We have prepared a base Dockerfile which can be built directly:

    $ cd ../Dockerfiles/cuda9.0-cudnn7
    $ sudo docker build -f Dockerfile.build.base \
    >                   -t pai.build.base:hadoop2.7.2-cuda9.0-cudnn7-devel-ubuntu16.04 .
    $ cd -
  2. Build the TensorFlow serving Docker image for PAI. We use the TensorFlow serving Dockerfile provided in its tutorial:

    $ sudo docker build -f Dockerfile.example.tensorflow-serving \
    >                   -t pai.example.tensorflow-serving .
    

    Then push the Docker image to a Docker registry:

    $ sudo docker tag pai.example.tensorflow-serving USER/pai.example.tensorflow-serving
    $ sudo docker push USER/pai.example.tensorflow-serving

    Note: Replace USER with the Docker Hub username you registered. You will be required to log in with `docker login` before pushing the Docker image.
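Once the image is pushed, it can be referenced from a PAI job configuration. The fragment below is only a sketch of a typical PAI job config; the exact field names and the serving command should be checked against your PAI deployment's job submission documentation, and `USER`, resource numbers, and the model path are placeholders:

```json
{
  "jobName": "tensorflow-serving-example",
  "image": "USER/pai.example.tensorflow-serving",
  "taskRoles": [
    {
      "name": "serving",
      "taskNumber": 1,
      "cpuNumber": 4,
      "memoryMB": 8192,
      "gpuNumber": 1,
      "command": "tensorflow_model_server --port=8500 --model_name=my_model --model_base_path=/models/my_model"
    }
  ]
}
```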