Windows Machine Learning on LattePanda


An introduction to running Windows Machine Learning on the LattePanda


Things used in this project


Hardware components

DFRobot LattePanda Alpha 864s × 1

13.3-inch 1080p display × 1


Software apps and online services

 Windows 10 Insider Preview OS build 17110 or later  


Hand tools and fabrication machines

Visual Studio Version 15.6.1 Preview

Windows SDK – Build 17110



Many have probably seen the AI update that Microsoft officially released for developers on March 7. After learning of this news, the LattePanda team decided to start a team project to investigate what we could do with it, so we installed and configured the update on the LattePanda.

This is what Microsoft originally said: “With the next major update to Windows 10, we begin to deliver the advances that have been built into our apps and services as part of the Windows 10 platform. Every developer that builds apps on Windows 10 will be able to use AI to deliver more powerful and engaging experiences.”

Well, most of it was business as usual, so we will pick out some key points to talk about. Let us first summarize the positioning, advantages, and disadvantages of the existing release within the overall AI toolchain:

Microsoft will support current and future hardware (including CPUs and GPUs that support DirectX 12)

Windows ML is reasonably open and compatible with the existing ecosystem, adopting the industry-standard ONNX format (a vendor-neutral open-source standard, which avoids favoring any one framework)

A wide range of device types including desktops, workstations, servers, IoT edge devices and even HoloLens will be supported.

We believe that the Microsoft AI program released this time is somewhat different from the usual official releases on GitHub. It feels like an important update, and it is being marketed aggressively. After careful evaluation, we decided to give it a try.

One of the things that caught our attention is the toolchain's support for DirectX 12, which means that Windows ML is applicable not only to the mainstream NVIDIA GPUs on the market, but also supports AMD's GPU solutions. This is interesting. Less expected is the official support for Intel's Movidius: Windows ML supports the Movidius 2485 VPU accelerator. However, the overall application scenarios for the VPU on these platforms are not yet clear to us. Anyone with a deeper understanding and perspective is welcome to share their opinions and suggestions for the community to discuss. In short, we have not tested Movidius processing acceleration on the LattePanda.

There is a lot to sift through and much to get done, so let’s get started!




Runtime Environment

Windows 10 Insider Preview OS build 17110 or later

Visual Studio Version 15.6.1 Preview

Windows SDK – Build 17110

Of course, building the preview system and updating the development platform took a lot of time, far more than a few hours of system updates. If the system is not updated, the related demos will not compile normally, so care had to be taken with the setup.








Test hardware platform

LattePanda Alpha 864 (Prototype)

13.3-inch 1080p display (New toy)


Hands-on Demos

In order to properly evaluate the difficulty of using Windows ML, we did not have AI or machine learning developers run the tests. On the contrary, the engineer testing this is a hardware expert. Our knowledge of AI technology goes about as far as knowing that neural networks are trainable networks of neurons, typically a few layers deep; that is pretty much it. In this way, we wanted to gauge the difficulty of the Windows ML toolchain and the depth of professional background required of anyone who wants to use it.





For a simple start, let us begin with the handwritten digit recognition demo (MNIST). Handwritten input is not something plain OCR can handle; it has too much noise and distortion for that.

Let us look at the result:


Uh, it looks like it’s not too bad! Let’s try again!


......not as impressive

The gap is too big: write a digit that is a bit more distorted, and the results are poor. For a demo, it would be enough to train the model carefully so that it can handle hard-to-read patterns, or to simply display “?” for input it cannot recognize.

Overall, if you write the digits carefully, the MNIST demo has basically no problem identifying them. However, anything a little fancier, like cursive, it cannot identify or struggles with. We have not played with OCR much, so we can't yet say whether this counts as smart technology.
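To make the demo's behavior concrete: the model returns ten scores, one per digit, and the app reports the highest-scoring one. A minimal sketch in plain Python, with a hypothetical reject threshold (our own addition, not part of Microsoft's sample) as one way to get the “?” behavior:

```python
import math

def classify_digit(scores, threshold=0.6):
    """Softmax the ten raw scores and return the digit as a string,
    or "?" when the top probability falls below the threshold."""
    m = max(scores)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return str(best) if probs[best] >= threshold else "?"

# A confident "1" versus an ambiguous scribble with uniform scores:
print(classify_digit([0.1, 9.0, 0.2, 0.1, 0.0, 0.3, 0.1, 0.2, 0.1, 0.0]))  # → 1
print(classify_digit([1.0] * 10))                                          # → ?
```

The threshold value is arbitrary; in practice it would be tuned against how often the model is wrong when it is unsure.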





SqueezeNetObjectDetection Demo

If machine learning cannot perform image recognition, its most basic function is missing, so it makes sense that Microsoft provides a demo image classifier. SqueezeNetObjectDetection is a SqueezeNet image classifier: it detects the predominant object in an image.

SqueezeNet has a well-starred, reputable TensorFlow implementation on GitHub, published about a year ago, so it appears that this demo's neural model has been trained for some time and is quite mature.
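Under the hood, a classifier like this produces one score per class (1,000 of them for SqueezeNet), and “the predominant object” is simply the best-scoring label. A minimal sketch of that ranking step, with made-up labels and scores standing in for the real output:

```python
def top_k(scores, labels, k=3):
    """Pair each label with its score, rank by score (highest first),
    and return the top k -- the "predominant object" is ranked[0]."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical 4-class stand-in for the 1,000-class SqueezeNet output:
labels = ["tabby cat", "golden retriever", "coffee mug", "keyboard"]
scores = [0.07, 0.81, 0.02, 0.10]
print(top_k(scores, labels, k=2))  # → [('golden retriever', 0.81), ('keyboard', 0.1)]
```

Demos like this usually show the top few candidates rather than just one, which also makes near-misses visible.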

Let’s look at the results:


WOW… quite good! Basically, for animals and some common everyday objects, it makes accurate judgments. This demo lets you get started and identify objects within a few minutes; it feels very useful and gave us plenty of ideas!

Then the hardware engineer made some bold attempts:


Obviously, a moderate amount of interference is perfectly acceptable. However, for images the model has not been trained on, or distracting images such as portraits and celebrity photos, the recognition tends to latch onto some detail in the image and get it wrong. That is also quite reasonable.





To Sum up

Overall, the cost of getting started with Windows ML is very low. It takes half a day to build a good development environment, and even a hardware engineer can run a few neural network demos on the LattePanda. The Alpha happens to support DirectX 12, so both iGPU and eGPU acceleration can be used with Windows ML.

This opens up room for the imagination. Our next step will be to train a model to recognize the members of a whole team, like ours. The official release provides some model training tutorials; you can also follow other neural network training tutorials and then convert the model to the ONNX standard to import it into the system and achieve the same results. From there, building a set of office-specific Face ID, face-scanning payment, and door-access control would be the next things to get started on.

Furthermore, we believe that the resources released so far are only some of the development tools and materials Microsoft is using to warm up the market. There is reason to believe that at Microsoft Build in May of this year there will be further tools and materials to help developers build on the latest AI technology and deliver more industry-standard upgrades and utilities.






Check out Windows ML Tutorial

Know more about LattePanda Alpha

