The Custom Vision Service is a Microsoft Cognitive Service that lets you build custom image classifiers, and it makes it fast and easy to build, deploy, and improve them. Since May 7, the Custom Vision Service can export a TensorFlow model as a Dockerfile, which means we can download the artifacts, build our own Windows or Linux containers, and deploy them to IoT Edge. Note that the Custom Vision Service only supports export for compact domains. Models generated by compact domains are optimized for the constraints of real-time classification on mobile devices, so classifiers built with a compact domain may be slightly less accurate than a standard domain trained on the same amount of data.
After uploading the images and finishing the training, we can find the Export button on the PERFORMANCE page; make sure to export as Dockerfile. The exported artifact is actually a fully functional HTTP server listening on port 80. The easiest way to use it in IoT Edge is to keep it as an HTTP server and call the web service from another custom module.
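Calling the exported web service from another module can be sketched as below. This is a minimal illustration, not the exported code itself: the module hostname `classifier` and the `/image` endpoint that accepts raw image bytes are assumptions, so check `app.py` in the exported artifact for the exact route your model exposes.

```python
# Hypothetical client for the exported Custom Vision HTTP server.
# Assumptions: the AI module is reachable by its module name ("classifier")
# and accepts POST /image with raw image bytes, returning JSON predictions.
import json
import urllib.request


def build_classify_request(image_bytes, host="classifier", port=80):
    """Build the POST request that sends one image to the classifier."""
    return urllib.request.Request(
        "http://%s:%d/image" % (host, port),
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )


def classify(image_bytes, host="classifier", port=80):
    """Send the image and decode the classifier's JSON response."""
    req = build_classify_request(image_bytes, host=host, port=port)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Inside the same IoT Edge deployment the module name resolves on the Docker bridge network, so other modules can reach the classifier directly on port 80 even without the host port mapping described next.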
To expose the HTTP server to other modules, we need to add a port mapping in "Container Create Options", as shown below.
The configuration below maps the container's port 80 to host port 8085, so other modules can reach the HTTP server via port 8085. For all the other available options, please refer to https://docs.docker.com/engine/api/v1.30/#operation/ContainerCreate
{
  "HostConfig": {
    "PortBindings": {
      "80/tcp": [
        {
          "HostPort": "8085"
        }
      ]
    }
  }
}
We also need to create a custom module to call the AI module. This can be done either with cookiecutter or from Visual Studio Code. With the "Azure IoT Toolkit" extension installed in Visual Studio Code, we can run "New IoT Edge Solution" from the Command Palette.
Of course, we could also modify the exported AI module into a custom module itself; in that case the module would need to handle message input and output as well.
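Whichever module ends up talking to IoT Hub, it has to turn the classifier's JSON response into an upstream message. The helpers below are a hedged sketch of that step: the `predictions` / `tagName` / `probability` keys reflect the usual Custom Vision response shape but should be verified against your exported model's actual output, and the payload fields are illustrative.

```python
# Hypothetical message-shaping helpers for a custom module.
# The response keys ("predictions", "tagName", "probability") are assumptions
# based on the typical Custom Vision JSON output; verify against your model.
import json


def top_prediction(response_json, threshold=0.5):
    """Return the highest-probability prediction, or None if below threshold."""
    preds = response_json.get("predictions", [])
    best = max(preds, key=lambda p: p["probability"], default=None)
    if best is None or best["probability"] < threshold:
        return None
    return best


def build_upstream_payload(response_json, device_id):
    """Serialize the top prediction into a JSON message body for IoT Hub."""
    best = top_prediction(response_json)
    return json.dumps({
        "deviceId": device_id,
        "tag": best["tagName"] if best else "unknown",
        "probability": best["probability"] if best else 0.0,
    })
```

The resulting JSON string can then be wrapped in an IoT Edge message and routed to an output by the module's SDK code.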