Since its initial release, Windows ML has powered many Machine Learning (ML) experiences on Windows. Delivering reliable, high-performance results across the breadth of Windows hardware, Windows ML is designed to make ML deployment easier, allowing developers to focus on building innovative applications.

Windows ML is built upon ONNX Runtime to provide a simple, model-based, WinRT API optimized for Windows developers. This API lets you take your ONNX model and seamlessly integrate it into your application to power ML experiences. Layered below the ONNX Runtime is the DirectML API for cross-vendor hardware acceleration. DirectML is part of the DirectX family and provides full control for real-time, performance-critical scenarios.

This end-to-end stack gives developers the ability to run inferences on any Windows device, regardless of the machine's configuration, all from a single, compatible codebase.


Figure 1 – The Windows AI Platform stack

Windows ML is used in many real-world application scenarios. The Windows Photos app uses it to help organize your photo collection for an easier and richer browsing experience. The Windows Ink stack uses Windows ML to analyze your handwriting, converting ink strokes into text, shapes, lists, and more. Adobe Premiere Pro offers a feature that can take your video and crop it to the aspect ratio of your choice, all while preserving the important action in each frame.

With the next release of Windows 10, we are continuing to build on this momentum and are expanding further to support additional exciting and unique experiences. The interest and engagement from the community provided valuable feedback that allowed us to understand what our customers need most. Today, we are happy to share some of that important feedback with you, and how we are working to build from it.

Today, Windows ML is fully supported as a built-in Windows component on Windows 10 version 1809 (October 2018 Update) and newer. Developers can use the corresponding Windows Software Development Kit (SDK) and immediately start leveraging Windows ML in their applications. For developers who want to continue using this built-in version, we will continue to update and innovate Windows ML, delivering the feature set and functionality you need with each new Windows release.

A common piece of feedback we've heard is that developers today want the ability to ship products and applications with feature parity to all of their customers. In other words, developers want to leverage Windows ML on systems running older versions of Windows, not just the newest. To support this, we are going to make Windows ML available as a stand-alone package that can be shipped alongside your application. This redistributable path enables Windows ML support for CPU inference on Windows versions 8.1 and newer, and GPU hardware acceleration on Windows 10 version 1709 and newer.
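As a rough illustration of the support matrix described above (the helper below is a hypothetical sketch, not part of the redistributable or any Windows API), the CPU/GPU availability can be expressed as a simple version check:

```python
# Hypothetical sketch of the redistributable's support matrix:
# CPU inference on Windows 8.1 and newer, GPU hardware acceleration
# on Windows 10 version 1709 and newer.

def supported_acceleration(major: int, minor: int, build: int) -> list:
    """Return the inference paths available on a given Windows version."""
    paths = []
    # Windows 8.1 reports itself as version 6.3; it and anything newer
    # supports CPU inference via the redistributable.
    if (major, minor) >= (6, 3):
        paths.append("CPU")
    # Windows 10 version 1709 corresponds to build 16299; that build and
    # newer also support GPU hardware acceleration.
    if major >= 10 and build >= 16299:
        paths.append("GPU")
    return paths
```

For example, a Windows 10 version 1809 machine (build 17763) would report both paths, while a Windows 8.1 machine (6.3, build 9600) would report CPU only.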

Going forward, with each new update of Windows ML, there will be a corresponding redist package, with matching new features and optimizations, available on GitHub. Developers will find that whichever option they choose, they get an authentic Windows offering that is extensively tested, ensuring reliability and high performance.

In addition to bringing Windows ML support to more versions of Windows, we are also unifying our approach across Windows ML, ONNX Runtime, and DirectML. At the core of this stack, ONNX Runtime is designed to be a cross-platform inference engine. With Windows ML and DirectML, we build around this runtime to provide a rich set of features and scaling, designed for Windows and its diverse hardware ecosystem.

We understand the complexities developers face in building applications that deliver a great customer experience while also reaching their broad customer base. To give developers the most flexibility, we are bringing the Windows ML API and a DirectML execution provider to the ONNX Runtime GitHub project. Developers can now choose the API set that works best for their application scenarios and still benefit from DirectML's high-performance, reliable hardware acceleration across the breadth of devices supported in the Windows ecosystem.
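To sketch what that flexibility can look like in practice, here is a minimal, hypothetical helper (not part of ONNX Runtime's API) that prefers the DirectML execution provider when it is available and falls back to CPU otherwise. The provider names "DmlExecutionProvider" and "CPUExecutionProvider" are the ones ONNX Runtime uses for DirectML and its default CPU backend:

```python
# Hypothetical sketch: order execution providers so that DirectML is
# preferred when present, with the CPU provider as the fallback.

PREFERRED_ORDER = ["DmlExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available: list) -> list:
    """Filter the preferred provider order down to what is available."""
    chosen = [p for p in PREFERRED_ORDER if p in available]
    # The CPU provider ships in every ONNX Runtime build, so an empty
    # result indicates an unexpected environment; fall back to CPU anyway.
    return chosen or ["CPUExecutionProvider"]
```

In a real application, the list of available providers would come from the runtime itself (for instance, ONNX Runtime's Python bindings expose `onnxruntime.get_available_providers()`), and the chosen list would be passed when creating an inference session.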

On GitHub today, the Windows ML and DirectML preview is available as source, with instructions and samples on how to build it, along with a prebuilt NuGet package for CPU deployments.

Are you a Windows app developer who wants a friendly WinRT API that integrates easily with your other application code and is optimized for Windows devices? Windows ML is a great choice for that. Do you want to build an application with a single code path that also works across non-Windows devices? The ONNX Runtime cross-platform C API can provide that.



Figure 2 – Newly layered Windows AI and ONNX Runtime

Developers already using the ONNX Runtime C API who want to try the DirectML execution provider (preview) can follow these steps.

We are already making great progress on these new features.

You can get access to the preview of Windows ML and DirectML for the ONNX Runtime here. We invite you to join us on GitHub and provide feedback at

The initial Windows ML redistributable package will be available on NuGet in May 2020.

As always, we greatly appreciate all of the support from the developer community. We'll continue to share updates as we make further progress on these upcoming features.