diff --git a/apps/mivisionx_inference_analyzer/README.md b/apps/mivisionx_inference_analyzer/README.md
index 37e6f5bfee..4b22263829 100644
--- a/apps/mivisionx_inference_analyzer/README.md
+++ b/apps/mivisionx_inference_analyzer/README.md
@@ -52,19 +52,19 @@ MIVisionX provides developers with [docker images](https://hub.docker.com/u/mivi
 * Start docker with display
 
-```
-% sudo docker pull mivisionx/ubuntu-16.04:latest
-% xhost +local:root
-% sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-16.04:latest
-```
+ ```
+ % sudo docker pull mivisionx/ubuntu-16.04:latest
+ % xhost +local:root
+ % sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-16.04:latest
+ ```
 
 * Test display with MIVisionX sample
 
-```
-% export PATH=$PATH:/opt/rocm/mivisionx/bin
-% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/mivisionx/lib
-% runvx /opt/rocm/mivisionx/samples/gdf/canny.gdf
-```
+ ```
+ % export PATH=$PATH:/opt/rocm/mivisionx/bin
+ % export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/mivisionx/lib
+ % runvx /opt/rocm/mivisionx/samples/gdf/canny.gdf
+ ```
 
 * Run [Samples](#samples)
 
@@ -73,7 +73,7 @@ MIVisionX provides developers with [docker images](https://hub.docker.com/u/mivi
 ### Command Line Interface (CLI)
 
 ```
-usage: python mivisionx_inference_analyzer.py [-h]
+usage: python3 mivisionx_inference_analyzer.py [-h]
 --model_format MODEL_FORMAT
 --model_name MODEL_NAME
 --model MODEL
@@ -115,7 +115,7 @@ usage: python mivisionx_inference_analyzer.py [-h]
 ### Graphical User Interface (GUI)
 
 ```
-usage: python mivisionx_inference_analyzer.py
+usage: python3 mivisionx_inference_analyzer.py
 ```

@@ -138,12 +138,13 @@ usage: python mivisionx_inference_analyzer.py
 * **Step 1:** Clone MIVisionX Inference Analyzer Project
 
-```
- % cd && mkdir sample-1 && cd sample-1
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
-```
+ ```
+ % cd && mkdir sample-1 && cd sample-1
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
+ ```
 
- **Note:**
+ **Note:**
 + MIVisionX needs to be pre-installed
 + MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
 
@@ -151,10 +152,10 @@ usage: python mivisionx_inference_analyzer.py
 * **Step 2:** Download pre-trained SqueezeNet ONNX model from [ONNX Model Zoo](https://github.com/onnx/models#open-neural-network-exchange-onnx-model-zoo) - [SqueezeNet Model](https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz)
 
-```
- % wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
- % tar -xvf squeezenet.tar.gz
-```
+ ```
+ % wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
+ % tar -xvf squeezenet.tar.gz
+ ```
 
 **Note:** pre-trained model - `squeezenet/model.onnx`
 
@@ -164,15 +165,15 @@ usage: python mivisionx_inference_analyzer.py
 + View inference analyzer usage
 
-```
- % cd ~/sample-1/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
-```
+ ```
+ % cd ~/sample-1/MIVisionX-inference-analyzer/
+ % python3 mivisionx_inference_analyzer.py -h
+ ```
 
 + Run SqueezeNet Inference Analyzer
 
 ```
- % python mivisionx_inference_analyzer.py --model_format onnx --model_name SqueezeNet --model ~/sample-1/squeezenet/model.onnx --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-1/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format onnx --model_name SqueezeNet --model ~/sample-1/squeezenet/model.onnx --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-1/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
 ```

@@ -187,20 +188,22 @@ usage: python mivisionx_inference_analyzer.py
 * **Step 1:** Clone MIVisionX Inference Analyzer Project
 
-```
- % cd && mkdir sample-2 && cd sample-2
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
-```
+ ```
+ % cd && mkdir sample-2 && cd sample-2
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
+ ```
 
- **Note:**
+ **Note:**
 + MIVisionX needs to be pre-installed
 + MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
+
 * **Step 2:** Download pre-trained VGG 16 caffe model - [VGG_ILSVRC_16_layers.caffemodel](http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel)
 
-```
- % wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
-```
+ ```
+ % wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
+ ```
 
 * **Step 3:** Use the command below to run the inference analyzer
 
@@ -208,13 +211,13 @@ usage: python mivisionx_inference_analyzer.py
 ```
 % cd ~/sample-2/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
+ % python3 mivisionx_inference_analyzer.py -h
 ```
 
 + Run VGGNet-16 Inference Analyzer
 
 ```
- % python mivisionx_inference_analyzer.py --model_format caffe --model_name VggNet-16-Caffe --model ~/sample-2/VGG_ILSVRC_16_layers.caffemodel --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-2/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format caffe --model_name VggNet-16-Caffe --model ~/sample-2/VGG_ILSVRC_16_layers.caffemodel --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-2/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
 ```

@@ -227,12 +230,13 @@ usage: python mivisionx_inference_analyzer.py
 * **Step 1:** Clone MIVisionX Inference Analyzer Project
 
-```
- % cd && mkdir sample-3 && cd sample-3
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
-```
+ ```
+ % cd && mkdir sample-3 && cd sample-3
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
+ ```
 
- **Note:**
+ **Note:**
 + MIVisionX needs to be pre-installed
 + MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
 
@@ -240,28 +244,30 @@ usage: python mivisionx_inference_analyzer.py
 * **Step 2:** Download pre-trained VGG 16 NNEF model
 
-```
- % mkdir ~/sample-3/vgg16
- % cd ~/sample-3/vgg16
- % wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
- % tar -xvf vgg16.onnx.nnef.tgz
-```
+ ```
+ % mkdir ~/sample-3/vgg16
+ % cd ~/sample-3/vgg16
+ % wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
+ % tar -xvf vgg16.onnx.nnef.tgz
+ ```
 
 * **Step 3:** Use the command below to run the inference analyzer
 
 + View inference analyzer usage
 
 ```
- % cd ~/sample-3/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
+ % cd ~/sample-3/MIVisionX-inference-analyzer/
+ % python3 mivisionx_inference_analyzer.py -h
 ```
 
 + Run VGGNet-16 Inference Analyzer
 
 ```
- % python mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
 ```
 
 * **Preprocessing the model:** Use the --add/--multiply option to preprocess the input images
 
- % python mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes --add [-2.1179,-2.0357,-1.8044] --multiply [0.0171,0.0175,0.0174]
+ ```
+ % python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes --add [-2.1179,-2.0357,-1.8044] --multiply [0.0171,0.0175,0.0174]
+ ```
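
The CLI flags that recur throughout the patched README can be sketched with `argparse`. This is a hypothetical reconstruction based only on the options visible in the hunks above; the real `mivisionx_inference_analyzer.py` defines more options, different defaults, and its own validation.

```python
import argparse

def build_parser():
    # Hypothetical sketch of the analyzer's CLI; flag names are taken from the
    # usage text and sample invocations in the README diff above.
    parser = argparse.ArgumentParser(prog="mivisionx_inference_analyzer.py")
    parser.add_argument("--model_format", required=True,
                        choices=["caffe", "onnx", "nnef"])
    parser.add_argument("--model_name", required=True)
    parser.add_argument("--model", required=True)
    parser.add_argument("--model_input_dims", required=True)   # e.g. 3,224,224
    parser.add_argument("--model_output_dims", required=True)  # e.g. 1000,1,1
    parser.add_argument("--label", required=True)
    parser.add_argument("--output_dir", required=True)
    parser.add_argument("--image_dir", required=True)
    parser.add_argument("--image_val", default="")
    parser.add_argument("--hierarchy", default="")
    parser.add_argument("--replace", default="no", choices=["yes", "no"])
    parser.add_argument("--add", default="")       # per-channel offsets, e.g. [-2.1179,-2.0357,-1.8044]
    parser.add_argument("--multiply", default="")  # per-channel scales, e.g. [0.0171,0.0175,0.0174]
    return parser

# Parse the SqueezeNet sample invocation from the README.
args = build_parser().parse_args([
    "--model_format", "onnx",
    "--model_name", "SqueezeNet",
    "--model", "squeezenet/model.onnx",
    "--model_input_dims", "3,224,224",
    "--model_output_dims", "1000,1,1",
    "--label", "./sample/labels.txt",
    "--output_dir", ".",
    "--image_dir", "../../data/images/AMD-tinyDataSet/",
])
print(args.model_name)
```

Sketched this way, omitting any `required=True` flag raises a usage error, which mirrors why the README marks those arguments as mandatory in its usage text.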