Let's look at the challenges faced by AI developers in training and deploying AI models, and how OpenVINO and Azure IoT help solve them.
Challenges in training and deploying AI models
- Choosing a neural network model
- Training/re-training until the model converges (a costly and time-consuming task)
- Deploying it on an edge device (IoT device / laptop / desktop)
Prerequisites/System requirements - free subscription, software, hardware, and setup:
- Prerequisites
- Subscription: an Azure subscription (a free subscription works)
- Hardware (any two of the below):
- Intel-powered IoT device (e.g., AAEON UP2 IoT edge device)
- Intel-powered laptop/desktop with a Core i7/i9
- Intel Neural Compute Stick 2 (USB-powered), which acts as an AI accelerator at the edge
- USB webcam (e.g., Logitech)
- Operating system and software (pre-installed on the host machine):
- Setup:
- Host machine should be connected to Azure IoT Hub (see the verification sketch after this list)
- Enable xhost (Ubuntu terminal command: $ sudo xhost +)
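The IoT Hub connection can be verified from any machine with the azure-iot-hub Python SDK. A minimal sketch, assuming a hub owner connection string and an edge device ID of my-edge-device (both placeholders, not values from this offer):

```python
# Sketch: confirm the edge device is registered with, and connected to, the hub.
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "HostName=<hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>"
DEVICE_ID = "my-edge-device"  # placeholder device ID

registry = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
device = registry.get_device(DEVICE_ID)
print(device.connection_state)  # "Connected" once the host is talking to the hub
```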
Solution:
Deploying this application from the Azure Marketplace makes the entire end-to-end flow of training and deploying an ONNX model take less than a minute. The application uses a Docker image built on OpenVINO with the ONNX Runtime OpenVINO Execution Provider (EP); a minimal sketch of that inference path is shown below.
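For context, inference through ONNX Runtime with the OpenVINO EP looks roughly like this. The sketch is illustrative rather than the module's actual code; the model path and input shape are placeholder assumptions, and the OpenVINO EP requires the onnxruntime-openvino build:

```python
# Illustrative sketch: run an ONNX model through ONNX Runtime with the
# OpenVINO Execution Provider. "model.onnx" and the 1x3x224x224 input
# are placeholders, not values used by this offer.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    # Nodes the OpenVINO EP supports run on it; the rest fall back to the CPU EP.
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame
outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])
```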
Details: How it works:
Step 1: Training using customvision.ai in three simple steps
- Log in to customvision.ai and upload a few training samples (minimum 25)
- Annotate them
- Run a quick train (these steps can also be scripted; see the sketch after this list)
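The same three steps can be scripted with the Custom Vision Python SDK (azure-cognitiveservices-vision-customvision). A minimal sketch for image classification, where the endpoint, training key, project/tag names, and sample file are all placeholders; object detection uploads would additionally need region annotations:

```python
# Sketch: create a project, tag and upload a sample, then run a quick train.
# ENDPOINT, TRAINING_KEY, and "sample.jpg" are placeholders.
import time

from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
TRAINING_KEY = "<training key>"

trainer = CustomVisionTrainingClient(
    ENDPOINT, ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
)

project = trainer.create_project("openvino-edge-demo")
tag = trainer.create_tag(project.id, "object-of-interest")

# One upload shown here; as noted above, train with at least 25 samples per tag.
with open("sample.jpg", "rb") as f:
    trainer.create_images_from_data(project.id, f.read(), tag_ids=[tag.id])

iteration = trainer.train_project(project.id)
while iteration.status != "Completed":  # poll until the quick train finishes
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)
print("Training finished:", iteration.id)
```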
Step 2: Deploying the OpenVINO AI Vision Module to an IoT Edge device
- Click "Get It Now" on the Azure Marketplace
- Select the target device from your IoT Hub to deploy to
- Once the deployment succeeds, you will see the "OpenVINOReadyToDeployAIVisionModule" Edge module running
- Expected output: the camera stream renders on the display (a sketch for checking the module from the hub side follows this list)
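You can also confirm from the hub side that the module came up by reading its module twin with the azure-iot-hub SDK; a sketch with a placeholder connection string and device ID:

```python
# Sketch: check that the Edge module deployed in Step 2 is connected.
from azure.iot.hub import IoTHubRegistryManager

registry = IoTHubRegistryManager("HostName=<hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>")
twin = registry.get_module_twin("my-edge-device", "OpenVINOReadyToDeployAIVisionModule")
print(twin.connection_state)  # "Connected" once the module is running
```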
Step 3: Passing the ONNX model to the app with "Twin Updates"
- 1. Copy the ONNX model URL: customvision.ai -> select project -> Performance -> Export -> click ONNX -> copy the ONNX model URL
- 2. Go to "portal.azure.com" -> IoT Hub -> IoT Edge -> select the device
- 3. Click "Set modules" -> click "OpenVINOReadyToDeployAIVisionModule"
- 4. Select "Module Twin Settings" and pass the ONNX model URL copied in step 3.1 to inference_files_zip_url (it looks like inference_files_zip_url="<onnx url path>")
- 5. Finally, click "Update" and "Review + Create"
- Expected output: the OpenVINO app restarts the stream and starts running inference based on the ONNX model passed (object detection / image classification). The camera should be pointing at the object/image of interest for recognition/classification. Note: if no NCS2 is connected, inference starts on the Intel CPU. (The same twin update can be scripted; see the sketch after this list.)
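The portal steps above can also be done programmatically by pushing the same desired-property patch with the azure-iot-hub Python SDK. A sketch in which the connection string, device ID, and model URL are placeholders; inference_files_zip_url is the actual twin property named in step 3.4:

```python
# Sketch: push the ONNX model URL to the module as a desired-property patch.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, TwinProperties

registry = IoTHubRegistryManager("HostName=<hub>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>")
device_id = "my-edge-device"  # placeholder
module_id = "OpenVINOReadyToDeployAIVisionModule"

onnx_url = "<ONNX model URL copied in step 3.1>"
patch = Twin(properties=TwinProperties(desired={"inference_files_zip_url": onnx_url}))

twin = registry.get_module_twin(device_id, module_id)
registry.update_module_twin(device_id, module_id, patch, twin.etag)
# The module then restarts its stream and begins inference with the new model.
```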
Note:
- Setup is a one-time process; it takes some patience to go through the cloud setup (if doing it for the first time). Happy to answer any questions - leave a comment
- Deploying the application (Docker pull) will take a fair amount of time (only once per device), depending on network speed
- Working on a lighter-weight Docker image