First of all, PAI runs all jobs in Docker containers.
Install Docker-CE if you haven't already. If you do not have a private Docker registry, register an account at the public registry Docker Hub.
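Before building any images, it may help to confirm that the `docker` CLI is available. This is a minimal sketch, not part of the PAI setup itself:

```shell
# Confirm the docker CLI is installed before proceeding
if command -v docker >/dev/null 2>&1; then
    docker --version
else
    echo "Docker-CE is not installed; see the Docker-CE installation guide"
fi
```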
We use TensorFlow model serving as an example. For details on how to serve a TensorFlow model, please refer to its serving tutorial.
You can also jump to Serving a TensorFlow model using pre-built images on Docker Hub.
To serve a TensorFlow model on PAI, we need to build a TensorFlow serving image with GPU support. This can be done in two steps:
- Build a base Docker image for PAI. We prepared a base Dockerfile which can be built directly.

  ```sh
  $ cd ../Dockerfiles/cuda9.0-cudnn7
  $ sudo docker build -f Dockerfile.build.base \
  >     -t pai.build.base:hadoop2.7.2-cuda9.0-cudnn7-devel-ubuntu16.04 .
  $ cd -
  ```
- Build the TensorFlow serving Docker image for PAI. We use the TensorFlow serving Dockerfile provided in its tutorial. Note the `-t` flag, which tags the image so that it can be pushed in the next step.

  ```sh
  $ sudo docker build -f Dockerfile.example.tensorflow-serving \
  >     -t pai.example.tensorflow-serving .
  ```
Then push the Docker image to a Docker registry:
```sh
$ sudo docker tag pai.example.tensorflow-serving USER/pai.example.tensorflow-serving
$ sudo docker push USER/pai.example.tensorflow-serving
```
Note: Replace USER with the Docker Hub username you registered. You will be required to log in before pushing the Docker image.
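The login step can be sketched as follows. This is an assumption about your setup, not part of the tutorial: it supposes your Docker Hub credentials are in the hypothetical `DOCKER_USER` and `DOCKER_PASS` environment variables, and uses `docker login --password-stdin` to avoid typing the password interactively.

```shell
# Log in to Docker Hub non-interactively (sketch; DOCKER_USER and DOCKER_PASS
# are placeholder environment variables holding your registered credentials)
echo "$DOCKER_PASS" | sudo docker login --username "$DOCKER_USER" --password-stdin
# After a successful login, the push from the step above will be accepted
sudo docker push "$DOCKER_USER/pai.example.tensorflow-serving"
```

Alternatively, running `sudo docker login` with no arguments prompts for the username and password interactively.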