With TensorFlow-DirectML, Microsoft has published its own version of the machine-learning (ML) framework originally initiated by Google. The open-source project is a fork of TensorFlow and relies on the independently available library DirectML. It runs natively both on Win32 and on the Windows Subsystem for Linux (WSL).
Virtual device in the backend
The separately available DirectML project is a low-level library that builds on Direct3D 12 and offers hardware acceleration for ML applications. The library abstracts direct access to the GPU and works with chips from Nvidia, AMD, Intel, and Qualcomm.
TensorFlow-DirectML integrates the DirectML backend directly into the ML framework and introduces "DML" as a new device that can be used as an alternative to "GPU".
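Selecting the new device could look like the following sketch. It assumes the tensorflow-directml package (a fork of TensorFlow 1.15, hence the TF 1.x session API) is installed; the fallback branch only exists so the snippet also runs without it, and the device string is an assumption based on the article.

```python
# Sketch: using the "DML" device that TensorFlow-DirectML registers
# as an alternative to "GPU". Not verified against the fork.
try:
    import tensorflow as tf

    with tf.device("/DML:0"):  # instead of "/GPU:0"
        c = tf.add(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))

    with tf.Session() as sess:  # TF 1.x API, as in the 1.15 fork
        result = list(sess.run(c))
except Exception:
    # Fallback when tensorflow-directml is not installed: compute the
    # same element-wise sum in plain Python on the CPU.
    result = [1.0 + 3.0, 2.0 + 4.0]

print(result)
```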
Building blocks of abstraction
DirectML offers a so-called device runtime that takes care of managing memory, copies tensors to and from the host, records GPU commands, and synchronizes the workflow between the host (CPU) and the device.
The runtime provides DmlDevice, an implementation of the tf.Device class in TensorFlow. DmlKernelWrapper implements the OpKernel interface of the ML framework, and DmlKernel offers a concrete implementation of a TensorFlow operator.
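How the three classes relate can be illustrated with the following Python sketch. The actual implementation is C++ inside the TensorFlow fork; the class names mirror the article, while all method names and the example operator are assumptions.

```python
# Illustrative sketch (not the real C++ code) of the roles of
# DmlDevice, DmlKernel, and DmlKernelWrapper described in the article.
class DmlDevice:
    """Plays the role of TensorFlow's tf.Device for the DML backend."""
    def __init__(self, name="/device:DML:0"):
        self.name = name

class DmlKernel:
    """Base for concrete implementations of a TensorFlow operator."""
    def compute(self, inputs):
        raise NotImplementedError

class AddKernel(DmlKernel):
    """Hypothetical example operator: element-wise addition."""
    def compute(self, inputs):
        a, b = inputs
        return [x + y for x, y in zip(a, b)]

class DmlKernelWrapper:
    """Adapts a DmlKernel to the framework's OpKernel interface."""
    def __init__(self, kernel):
        self.kernel = kernel

    def run(self, inputs):
        # The framework calls the wrapper, which delegates to the kernel.
        return self.kernel.compute(inputs)
```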
Management and scheduling
The stateful DmlKernelManager is responsible for managing the DmlKernel instances. It keeps them in a cache to avoid recompiling implementations that already exist.
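The caching idea can be sketched as follows. The structure is assumed for illustration; the cache key and the compile step are placeholders, not the fork's actual API.

```python
# Minimal sketch of the caching behind DmlKernelManager: a kernel is
# compiled once per operator signature and reused afterwards.
class DmlKernelManager:
    def __init__(self):
        self._cache = {}       # signature -> compiled kernel
        self.compile_count = 0

    def get_kernel(self, op_signature):
        # Reuse an already-compiled kernel when the signature matches.
        if op_signature not in self._cache:
            self._cache[op_signature] = self._compile(op_signature)
        return self._cache[op_signature]

    def _compile(self, op_signature):
        # Placeholder for the expensive compilation step.
        self.compile_count += 1
        return f"compiled<{op_signature}>"
```

A second request for the same signature hits the cache instead of triggering another compilation.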
TensorFlow-DirectML offers a device runtime that abstracts the mapping of ML processes onto the hardware.
The DmlAllocator manages the GPU buffers. Finally, the DmlExecutionContext handles scheduling the tasks for the GPU, such as executing operators and copying memory regions.
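The scheduling role of the execution context can be sketched like this. Names and structure are assumptions for illustration; the real code records Direct3D 12 command lists in C++.

```python
# Illustrative sketch of an execution context that records GPU work
# (operator execution, memory copies) and later submits it in order.
from collections import deque

class DmlExecutionContext:
    def __init__(self):
        self._queue = deque()  # recorded, not-yet-submitted commands
        self.log = []          # stands in for work done on the GPU

    def execute_operator(self, name):
        self._queue.append(("execute", name))

    def copy_memory(self, src, dst):
        self._queue.append(("copy", f"{src}->{dst}"))

    def flush(self):
        # Submit all recorded commands to the "GPU" in FIFO order.
        while self._queue:
            self.log.append(self._queue.popleft())
```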
Further details can be found in Microsoft's cloud blog. The source code of TensorFlow-DirectML is available on GitHub.