The OpenVINO Model Server requires a trained model in Intermediate Representation (IR) or ONNX format on which it performs inference. Options to obtain a suitable model include:
- Downloading a model from the Open Model Zoo
- Using the Model Optimizer to convert a model to the IR format from formats such as TensorFlow*, ONNX*, Caffe*, MXNet*, or Kaldi*
This guide uses the face-detection-retail-0004 model.
Use the steps in this guide to quickly start using OpenVINO™ Model Server. In these steps, you:
- Prepare Docker*
- Download and build the OpenVINO™ Model Server
- Download a model
- Start the model server container
- Download the example client components
- Download data for inference
- Run inference
- Review the results
To check whether Docker is already installed and ready to use, test the installation:
$ docker run hello-world
If Docker is ready, you see a "Hello from Docker!" informational message. Continue to Step 2 to download and build the OpenVINO Model Server. If you don't see the message, install Docker* first, then continue to Step 2.
- Download the Docker* image that contains the OpenVINO Model Server. This image is available from DockerHub:
docker pull openvino/model_server:latest
or build the Docker image openvino/model_server:latest yourself with the command:
make docker_build
Download the model components to the model/1 directory. Example command using curl:
curl --create-dirs \
  https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.xml \
  https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.bin
Start the Model Server container:
docker run -d -u $(id -u):$(id -g) -v $(pwd)/model:/models/face-detection -p 9000:9000 openvino/model_server:latest \
--model_path /models/face-detection --model_name face-detection --port 9000 --log_level DEBUG --shape auto
The Model Server expects models in a defined folder structure. The local model folder is mounted in the container as /models/face-detection, so inside the container the structure looks like this:
models/
└── face-detection
└── 1
├── face-detection-retail-0004.bin
└── face-detection-retail-0004.xml
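The layout above can be sketched locally like this (a minimal illustration with empty placeholder files; the real .xml and .bin files come from the download step above):

```shell
# Recreate the expected layout with empty placeholder files.
# Each numbered subdirectory (1, 2, ...) is a model version.
mkdir -p models/face-detection/1
touch models/face-detection/1/face-detection-retail-0004.xml
touch models/face-detection/1/face-detection-retail-0004.bin

# List the resulting structure:
find models -type f | sort
```

By default the Model Server serves the latest (highest-numbered) version, so adding a models/face-detection/2 directory later lets you roll out an updated model without changing the container command.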
Use these links for more information about the folder structure and about deploying more than one model at a time:
- Prepare models
- Deploy multiple models at once and start a Docker container with a configuration file
Example client scripts provide an easy way to access the Model Server. This example uses a face detection script and curl to download its components.
- Use this command to download all necessary components:
curl https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/client_utils.py -o client_utils.py \
  https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/face_detection.py -o face_detection.py \
  https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/client_requirements.txt -o client_requirements.txt
- Download example images for inference. This example uses a file named people1.jpeg.
- Put the image in a folder by itself. The script runs inference on all images in the folder.
curl --create-dirs https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/images/people/people1.jpeg -o images/people1.jpeg
- Go to the folder in which you put the client script.
- Install the dependencies:
pip install -r client_requirements.txt
- Create a folder for the inference results:
mkdir results
- Run the client script:
python face_detection.py --batch_size 1 --width 600 --height 400 --input_images_dir images --output_dir results
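For reference, the --width and --height flags control how the script resizes images before sending them to the server. The sketch below mimics (but does not reproduce) the preprocessing in face_detection.py, assuming the usual OpenVINO conventions: BGR channel order and NCHW layout with a leading batch dimension. The preprocess helper and the nearest-neighbor resize are illustrative stand-ins (the real script uses cv2.resize):

```python
import numpy as np

def preprocess(image, width=600, height=400):
    # image: HWC uint8 array (as loaded by OpenCV, in BGR order).
    # Nearest-neighbor resize via index sampling keeps this sketch
    # dependency-free; the real client uses cv2.resize instead.
    h, w = image.shape[:2]
    rows = np.arange(height) * h // height
    cols = np.arange(width) * w // width
    resized = image[rows][:, cols]
    # HWC -> NCHW, add batch dimension, cast to float32.
    return resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a loaded image
print(preprocess(frame).shape)  # -> (1, 3, 400, 600)
```

Because the server was started with --shape auto, it reshapes the network to match whatever input resolution the client sends.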
In the results folder, look for an image that contains the inference results.
The result is the modified input image with bounding boxes indicating detected faces.
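For context on what the client does with the server's response: face-detection-retail-0004 returns a tensor of shape [1, 1, N, 7], where each row holds [image_id, label, confidence, x_min, y_min, x_max, y_max] with coordinates normalized to [0, 1]. Below is a minimal sketch of turning such an output into pixel bounding boxes; decode_detections is a hypothetical helper (not part of the downloaded script), and the tensor here is simulated rather than fetched from the server:

```python
import numpy as np

def decode_detections(output, width, height, threshold=0.5):
    """Convert a [1, 1, N, 7] detection tensor into pixel-space boxes."""
    boxes = []
    for det in output[0][0]:
        image_id, label, conf, x_min, y_min, x_max, y_max = det
        if conf > threshold:
            boxes.append((int(x_min * width), int(y_min * height),
                          int(x_max * width), int(y_max * height),
                          float(conf)))
    return boxes

# Simulated server response: one confident detection, one empty slot.
fake_output = np.zeros((1, 1, 2, 7), dtype=np.float32)
fake_output[0, 0, 0] = [0, 1, 0.98, 0.1, 0.2, 0.3, 0.4]

print(decode_detections(fake_output, width=600, height=400))
# -> one box at (60, 80, 180, 160) with confidence ~0.98
```

The example client performs the equivalent decoding and then draws the surviving boxes onto the input image before saving it to the results folder.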