OpenVINO - Train and Deploy a Neural Network (AI Model) in Seconds onto an IoT Edge Device



Let's look at the challenges faced by AI developers in training and deploying an AI model, and how OpenVINO and Azure IoT help solve them.

Challenges in training and deploying an AI model
  • Choosing a neural network model
  • Training/re-training until the model converges (a costly and time-consuming task)
  • Deploying it on an edge device (IoT device / laptop / desktop)

Prerequisites/System requirements - Free Subscription, Software, Hardware and setup:
Solution:

   This Azure Marketplace offer makes the total end-to-end flow of training and deploying an ONNX model take less than a minute. The application uses a Docker image built on OpenVINO with the ONNX Runtime execution provider (EP).
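The CPU fallback described in Step 3 below comes from how ONNX Runtime handles execution providers: it accepts an ordered list of providers and falls back down the list. A minimal sketch of that selection logic, assuming the standard ONNX Runtime provider names (the `InferenceSession` call is shown only as a comment, since it needs the `onnxruntime` package and a model file):

```python
# Sketch: how an app built on ONNX Runtime would prefer the OpenVINO EP
# and fall back to plain CPU execution. Provider names are the standard
# ONNX Runtime identifiers.

PREFERRED = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]

def pick_providers(available):
    """Return the preferred providers that are actually available, in order."""
    chosen = [p for p in PREFERRED if p in available]
    # CPU execution is always the last-resort fallback
    return chosen or ["CPUExecutionProvider"]

# Illustrative usage (requires onnxruntime, e.g. the onnxruntime-openvino build):
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "model.onnx",
#     providers=pick_providers(ort.get_available_providers()))
```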

Details: How it works

Step 1: Training using customvision.ai in three simple steps 
  • Log in to customvision.ai and upload a few training samples (minimum 25)
  • Annotate them
  • Run a quick training
      Ref: Getting started with customvision

Step 2: Deploying the OpenVINO AI Vision Module onto the IoT Edge device
  • Click "Get It Now" on Azure Marketplace
  • Select the device from IoT Hub to deploy to
  • Once the deployment succeeds, you will see the "OpenVINOReadyToDeployAIVisionModule" Edge module running
  • Expected output: camera stream rendering to the display
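Under the hood, deploying from the marketplace applies an IoT Edge deployment manifest that lists the module for the edgeAgent to pull and run. A minimal sketch of such a manifest, assuming the standard IoT Edge manifest schema (the image URI is a placeholder, not the actual marketplace value):

```python
import json

MODULE_NAME = "OpenVINOReadyToDeployAIVisionModule"
IMAGE_URI = "<registry>/<openvino-ai-vision-image>:<tag>"  # placeholder, not the real URI

# Sketch of an IoT Edge deployment manifest carrying the module.
manifest = {
    "modulesContent": {
        "$edgeAgent": {
            "properties.desired": {
                "schemaVersion": "1.1",
                "runtime": {"type": "docker", "settings": {"minDockerVersion": "v1.25"}},
                "systemModules": {
                    "edgeAgent": {
                        "type": "docker",
                        "settings": {"image": "mcr.microsoft.com/azureiotedge-agent:1.4"},
                    },
                    "edgeHub": {
                        "type": "docker",
                        "status": "running",
                        "restartPolicy": "always",
                        "settings": {"image": "mcr.microsoft.com/azureiotedge-hub:1.4"},
                    },
                },
                "modules": {
                    MODULE_NAME: {
                        "version": "1.0",
                        "type": "docker",
                        "status": "running",
                        "restartPolicy": "always",
                        "settings": {"image": IMAGE_URI},
                    }
                },
            }
        }
    }
}

print(json.dumps(manifest, indent=2))
```

A manifest like this is what "Set modules" in the portal (or `az iot edge set-modules --content deployment.json`) ultimately submits on your behalf.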
Step 3: Passing the ONNX model to the app with "Twin Updates"
  • 1. Copy the ONNX model URL: customvision.ai -> select project -> Performance -> Export -> click ONNX -> Copy ONNX model URL
  • Go to "portal.azure.com" -> IoT Hub -> IoT Edge -> select your device
  • Click "Set modules" -> click "OpenVINOReadyToDeployAIVisionModule"
  • Select "Module Twin Settings" and pass the ONNX model URL copied in step 3.1 to inference_files_zip_url (looks like inference_files_zip_url="onnx url path")
  • Finally, click "Update" and "Review + Create"
  • Expected output: the OpenVINO app restarts the stream and starts running inference with the ONNX model you passed (object detection / image classification). The camera should be pointing at the object/image of interest to recognize/classify. Note: if no NCS2 is connected, inference starts on the Intel CPU
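The portal steps above amount to patching the module twin's desired properties. The patch the twin update carries can be sketched as follows (the URL is a placeholder for the one copied from customvision.ai):

```python
import json

# Placeholder: paste the ONNX model URL copied from customvision.ai here.
ONNX_MODEL_URL = "<onnx-model-zip-url>"

# Desired-properties patch equivalent to the "Module Twin Settings" edit in the portal.
twin_patch = {
    "properties": {
        "desired": {
            "inference_files_zip_url": ONNX_MODEL_URL,
        }
    }
}

print(json.dumps(twin_patch, indent=2))
```

The same patch could also be pushed from the command line with the Azure CLI IoT extension (`az iot hub module-twin update`), avoiding the portal round-trip.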



Note:

  • Setup is a one-time process - it takes some patience to go through the cloud setup (if doing it for the first time) - happy to answer any questions - leave a comment
  • Deploying the application (Docker pull) will take a fair amount of time (only once per device), depending on network speed
    • Note: working on a lighter-weight Docker image


