In this module you will learn to extend a pre-trained object recognition model to perform binary classification: labeling an object as either Hotdog or Not Hotdog. This fun exercise is based on a popular sitcom and demonstrates the extensibility of AWS DeepLens. We will edit the model in Amazon SageMaker, an end-to-end machine learning platform for training your models and hosting them in production.
In this exercise, you will learn to:
- Load your notebook into an Amazon SageMaker notebook instance.
- Train your model in the notebook.
- Save your model artifacts into your S3 bucket.
- Import the model artifacts to DeepLens.
- Visit https://s3.console.aws.amazon.com/s3/home?region=us-east-1# to access the Amazon S3 console.
- Make sure you are on the US East (N. Virginia) region. (You can select the region in the top right-hand corner of the screen.)
- Click on "Create bucket".
- Name the bucket deeplens-sagemaker-your-full-name. (Please note: the name must begin with the deeplens-sagemaker prefix; otherwise AWS DeepLens and Amazon SageMaker cannot access the bucket.) Click "Next" twice.
- In the "Manage public permissions" section, choose "Grant public read access", and click "Next".
- Click "Create bucket".
- After you create the bucket, click on that bucket in your S3 bucket list, and create a folder named "test" in the bucket.
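If you prefer to script the bucket setup, the console steps above can be sketched with boto3. This is a minimal sketch, assuming your AWS credentials are already configured; `validate_bucket_name` and `create_deeplens_bucket` are hypothetical helper names, not part of the tutorial:

```python
REQUIRED_PREFIX = "deeplens-sagemaker"


def validate_bucket_name(name):
    """Return True when the name carries the prefix DeepLens requires."""
    return name.startswith(REQUIRED_PREFIX + "-")


def create_deeplens_bucket(name, region="us-east-1"):
    """Create the bucket and the empty test/ folder used by the notebook."""
    if not validate_bucket_name(name):
        raise ValueError("bucket name must start with '%s-'" % REQUIRED_PREFIX)
    import boto3  # imported here so the name check is usable without boto3 installed
    s3 = boto3.client("s3", region_name=region)
    # "public-read" mirrors the "Grant public read access" console step;
    # us-east-1 needs no LocationConstraint.
    s3.create_bucket(Bucket=name, ACL="public-read")
    s3.put_object(Bucket=name, Key="test/")  # zero-byte key shows up as a folder
```

For example, `create_deeplens_bucket("deeplens-sagemaker-jane-doe")` would create the bucket with its "test" folder in one call.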
- Visit https://console.aws.amazon.com/sagemaker/home?region=us-east-1#/dashboard to access Amazon SageMaker console.
- Make sure you are on the US East (N. Virginia) region.
- Click on notebook instances and choose Create notebook instance.
- Enter an instance name and choose the ml.t2.medium instance type. Note that you will be charged for the notebook instance.
- For IAM role, choose "Create new role".
- In the screen that appears, click "Any S3 bucket", and click "Create role".
- Click "Create notebook instance".
- Verify that your instance's status changes to InService.
- Choose the notebook instance and click Open.
- The Jupyter notebook instance will open.
- Download the hotdog-not-hotdog.ipynb notebook.
- Open the file in a text editor and search for the string "your s3 bucket name here". Replace that string with the name of the S3 bucket you created in the previous section.
- Upload it to the notebook instance by choosing the Upload option.
- Once uploaded, click on the notebook to launch it.
- Read through the notebook.
- Execute each cell by using the play button in the navigation bar, or by pressing Shift+Enter or Cmd+Enter.
- This will result in the .json and .params files being uploaded to your S3 bucket (it takes a couple of minutes for them to be created).
- The executions have completed when you see the following output at the end:
s3.Object(bucket_name='deeplens-sagemaker-your-full-name', key='test/hotdog_or_not_model-0000.params')
- Now your trained artifacts are available on S3.
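You can confirm the upload programmatically as well as in the console. A small sketch, assuming AWS credentials are configured; `model_artifact_keys` and `list_trained_artifacts` are illustrative helper names, not part of the notebook:

```python
def model_artifact_keys(keys, prefix="test/"):
    """Filter S3 keys down to the .json/.params artifacts the notebook wrote."""
    return [k for k in keys
            if k.startswith(prefix) and k.endswith((".json", ".params"))]


def list_trained_artifacts(bucket):
    """List the trained artifact keys in the bucket (requires AWS access)."""
    import boto3  # assumed available on a machine with AWS credentials
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix="test/")
    return model_artifact_keys([obj["Key"] for obj in resp.get("Contents", [])])
```

Calling `list_trained_artifacts("deeplens-sagemaker-your-full-name")` should include the `test/hotdog_or_not_model-0000.params` key shown in the notebook output.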
Since this model is not yet optimized, it would run on the CPU and be very slow. For the purpose of this exercise, we have provided an optimized version of the machine learning model and a Lambda function that performs inference on your AWS DeepLens. The optimized model runs on the on-board GPU to provide accurate and responsive inferences.
- Navigate to Lambda console: https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions
- Make sure you are on the US East (N. Virginia) region.
- Click on "Create function".
- Click "Author from scratch".
- Name it deeplens-hotdog-no-hotdog-your-full-name (deeplens-hotdog-no-hotdog must be the prefix).
- For Runtime, choose Python 2.7.
- Click "Choose an existing role"
- Choose the existing deeplens_lambda role
- Click "Create function".
- In the Handler box, enter greengrassHelloWorld.function_handler.
- For Code entry type, choose "Upload a file from Amazon S3" and paste this S3 link: https://s3.amazonaws.com/deeplens-managed-resources/lambdas/hotdog-no-hotdog/new_hot_dog_lambda.zip
- Click Save
- Verify that the uploaded code appears in the function editor.
- From the Actions menu, choose Publish and enter your-name_sagemaker as the description.
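At its core, the published function maps the model's per-frame hotdog probability to one of the two class labels. The sketch below illustrates that decision step only; it is not the actual Lambda code, and the helper names and the 0.5 threshold are illustrative assumptions:

```python
def hotdog_label(probability, threshold=0.5):
    """Turn the model's hotdog probability into one of the two class labels."""
    return "Hotdog" if probability >= threshold else "Not Hotdog"


def label_frames(probabilities, threshold=0.5):
    """Label a sequence of per-frame hotdog probabilities."""
    return [hotdog_label(p, threshold) for p in probabilities]
```

For example, `label_frames([0.92, 0.08])` labels the first frame "Hotdog" and the second "Not Hotdog".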
- Navigate to Projects in the AWS DeepLens console
- Click on Create a new project
- Click on Create a new blank project template
- Give the project a name: hotdog-gpu-your-full-name
- Click on Add model and choose the deeplens-squeezenet model.
- Click on Add function and choose the deeplens-hotdog-no-hotdog-your-full-name function
- Click Create project.
Next you will deploy the project you just created.
- From the AWS DeepLens console, on the Projects screen, choose the radio button to the left of your project name, then choose Deploy to device.
- On the Target device screen, from the list of AWS DeepLens devices, choose the radio button to the left of the device that you want to deploy this project to. An AWS DeepLens device can have only one project deployed to it at a time.
- Choose Review.
This will take you to the Review and deploy screen.
If a project is already deployed to the device, you will see an error message "There is an existing project on this device. Do you want to replace it? If you Deploy, AWS DeepLens will remove the current project before deploying the new project."
- On the Review and deploy screen, review your project and choose Deploy to deploy the project.
This will take you to the device screen, which shows the progress of your project deployment.
- You need mplayer to view the project output from the DeepLens device. For Windows, follow the installation instructions at http://www.mplayerhq.hu/design7/dload.html. For Mac, install mplayer by running the command below in a terminal window:
brew install mplayer
- Wait until the project is deployed and you see the message "Deployment of project succeeded". After the project is successfully deployed, run the command below from a terminal window to view the project output stream:
ssh aws_cam@<IP Address of your deeplens device> cat /tmp/results.mjpeg | mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -
This optimized model will let you access the GPU for running inference. Show a hotdog to your AWS DeepLens and watch its prediction.