
Lab: Watson Visual Recognition with Node-RED

Overview

The Watson Visual Recognition service analyzes the contents of an image and produces a series of text classifiers, each with a confidence score.

Node-RED Watson Visual Recognition node

The Node-RED VisualRecognition node provides an easy-to-use wrapper node that takes an image URL or binary stream as input and produces a set of image labels as output.

Watson Visual Recognition Flow construction

In this exercise, we will show you how to generate labels from an image URL.

Prerequisites and setup

To have the Visual Recognition service credentials on IBM Cloud automatically filled in by Node-RED, you should connect the Visual Recognition service to the Node-RED application in IBM Cloud.

Please refer to the Node-RED setup lab for instructions.

Building the flow

The flow presents a simple web page with a text field where the image's URL can be entered, then submits it to Watson Visual Recognition and outputs the labels that were found on the reply web page.

Reco-Lab-VisualRecognitionFlow.png

The nodes required to build this flow are:

  • An HTTP Input node, configured with a /reco URL
  • A switch node that tests for the presence of the imageurl query parameter: Reco-Lab-Switch-Node-Props
  • A first template node, configured to output an HTML input field and suggest a few selected images taken from the main Watson Visual Recognition demo web page:
<html>
    <head>
        <title>Watson Visual Recognition on Node-RED</title>
    </head>
    <body>
    <h1>Welcome to the Watson Visual Recognition Demo on Node-RED</h1>
        <h2>Select an image URL</h2>
        <form  action="{{req._parsedUrl.pathname}}">
            <img src="https://raw.githubusercontent.com/watson-developer-cloud/visual-recognition-nodejs/master/public/images/samples/1.jpg" height='100'/>
            <img src="https://raw.githubusercontent.com/watson-developer-cloud/visual-recognition-nodejs/master/public/images/samples/2.jpg" height='100'/>
            <img src="https://raw.githubusercontent.com/watson-developer-cloud/visual-recognition-nodejs/master/public/images/samples/3.jpg" height='100'/>
            <img src="https://raw.githubusercontent.com/watson-developer-cloud/visual-recognition-nodejs/master/public/images/samples/4.jpg" height='100'/>
            <br/>Copy above image location URL or enter any image URL:<br/>
            <input type="text" name="imageurl"/>
            <input type="submit" value="Analyze"/>
        </form>
    </body>
</html>

Reco-Lab-Template1-Node-Props

  • A change node (named Extract img URL here) that extracts the imageurl query parameter from the web request and assigns it to the payload, which is provided as input to the Visual Recognition node: Reco-Lab-Change_and_Reco-Node-Props
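The change node's mapping can be sketched as equivalent JavaScript, as you might write it in a Node-RED function node. This is an illustrative sketch only; the extractImageUrl helper and the sample URL are made up for this example and are not part of the lab:

```javascript
// Sketch of the "Extract img URL" change node's behavior as plain
// JavaScript (hypothetical helper, for illustration only).
// For messages produced by an HTTP In node, msg.req.query holds the
// parsed query-string parameters.
function extractImageUrl(msg) {
  // Copy the imageurl query parameter into msg.payload, the field the
  // Visual Recognition node reads its input from.
  msg.payload = msg.req.query.imageurl;
  return msg;
}

// Simulated message for GET /reco?imageurl=https://example.com/cat.jpg
const msg = { req: { query: { imageurl: "https://example.com/cat.jpg" } } };
console.log(extractImageUrl(msg).payload);
```

In the actual flow this mapping is configured graphically in the change node's properties rather than written as code.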

  • The Watson Visual Recognition node. Make sure that the credentials are set up from IBM Cloud, i.e. that the service is bound to the application. You can verify this by checking that the credential fields in the Visual Recognition node's properties are left clear:

Visual Recognition node properties

  • And a final template node linked to the HTTPResponse output node. The template formats the output returned by the Visual Recognition node into an HTML table for easier reading:
<html>
    <head><title>Watson Visual Recognition on Node-RED</title></head>
    <body>
        <h1>Node-RED Watson Visual Recognition output</h1>
        <p>Analyzed image: {{payload}}<br/><img src="{{payload}}" height='100'/></p>
        <table border='1'>
            <thead><tr><th>Name</th><th>Score</th></tr></thead>
        {{#result.images.0.classifiers.0.classes}}
        <tr><td><b>{{class}}</b></td><td><i>{{score}}</i></td></tr>
        {{/result.images.0.classifiers.0.classes}}
        </table>
        <form  action="{{req._parsedUrl.pathname}}">
            <input type="submit" value="Try again"/>
        </form>
    </body>
</html>

Reco-Lab-TemplateReport-Node-Props
Note that the HTML snippet above has been simplified by stripping out non-essential HTML tags; the completed flow solution contains a complete HTML page.
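For reference, the mustache section in the template above walks a result structure shaped roughly like the following. The class names and scores here are made-up illustrative values, not real API output:

```javascript
// Illustrative shape of the result object iterated by the template's
// {{#result.images.0.classifiers.0.classes}} section.
const result = {
  images: [
    {
      classifiers: [
        {
          classes: [
            { class: "banana", score: 0.93 },
            { class: "fruit", score: 0.85 }
          ]
        }
      ]
    }
  ]
};

// The template's section loop corresponds to iterating this array:
for (const c of result.images[0].classifiers[0].classes) {
  console.log(`${c.class}: ${c.score}`);
}
```

Each iteration emits one table row with the class name and its confidence score.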

Testing the flow

To run the web page, point your browser to http://xxxx.mybluemix.net/reco and enter the URL of an image. The URLs of the pre-selected images can be copied to the clipboard and pasted into the text field.

The Watson Visual Recognition API will return an array of the recognized features, which the template formats into an HTML table:

Visual RecognitionScreenshot

Flow source

The complete flow is available here.

Visual Recognition Documentation

To find more information on the underlying Watson Visual Recognition service, visit these web pages: