DirectX development overview
Windows Mixed Reality applications use the holographic rendering, gaze, gesture, motion controller, voice, and spatial mapping APIs to build mixed reality experiences for HoloLens and immersive headsets. You can create mixed reality applications using a 3D engine, such as Unity, or you can directly code to the Windows Mixed Reality APIs using DirectX 11 or DirectX 12. If you are leveraging the platform directly, you'll essentially be building your own middleware or framework. The Windows APIs support applications written in both C++ and C#. If you choose to use C#, your application can leverage the SharpDX open source software library.
Windows Mixed Reality supports two kinds of apps:
- Mixed reality applications (UWP or Win32) that use the HolographicSpace API to render an immersive view to the user that fills the headset display.
- 2D apps (UWP) that use DirectX, XAML, or other frameworks to render 2D views on slates in the Windows Mixed Reality home.
The differences between DirectX development for 2D views and immersive views are primarily related to holographic rendering and spatial input. Your UWP application's IFrameworkView or your Win32 application's HWND is still required and remains largely the same, as do the WinRT APIs available to your application. However, you use a different subset of these APIs to take advantage of holographic features. For example, the swap chain is managed by the system for holographic applications, and you work with the HolographicSpace API rather than DXGI to present frames.
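As a sketch of that difference, a frame is obtained and presented through the HolographicSpace API roughly as follows (C++/WinRT; device setup and drawing are elided, and the member and method names here are illustrative, not from a specific template):

```cpp
#include <winrt/Windows.Graphics.Holographic.h>

using namespace winrt::Windows::Graphics::Holographic;

// m_holographicSpace is assumed to have been created for the app's
// window and bound to your Direct3D device during initialization.
void AppMain::RenderLoopIteration()
{
    // Instead of asking DXGI for a back buffer, ask the system for the
    // next holographic frame. It carries a prediction of where the
    // user's head will be when the frame reaches the displays.
    HolographicFrame frame = m_holographicSpace.CreateNextFrame();

    for (HolographicCameraPose const& cameraPose :
         frame.CurrentPrediction().CameraPoses())
    {
        // The system owns the back buffer; retrieve it through the
        // frame's rendering parameters rather than a DXGI swap chain.
        HolographicCameraRenderingParameters renderingParams =
            frame.GetRenderingParameters(cameraPose);
        auto backBuffer = renderingParams.Direct3D11BackBuffer();

        // ... bind backBuffer as a render target and draw the scene ...
    }

    // Present through the holographic frame, not IDXGISwapChain::Present.
    frame.PresentUsingCurrentPrediction();
}
```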
To begin developing immersive applications:
- For UWP apps, create a new UWP project using the templates in Visual Studio. Depending on your language, Visual C++ or Visual C#, you'll find the UWP templates under Windows Universal > Holographic.
- For Win32 applications, start from the BasicHologram Win32 sample.
This is a great way to get the code that you need to add holographic rendering support to an existing application or engine. Code and concepts are presented in the template in a way that's familiar to any developer of real-time interactive software.
The following topics discuss the base requirements of adding Windows Mixed Reality support to DirectX-based middleware:
- Creating a holographic DirectX project: The holographic application template coupled with the documentation shows you the differences between what you're used to as well as the special requirements introduced by a device that's designed to function while resting on your head.
- Getting a HolographicSpace: You'll first need to create a HolographicSpace, which provides your application with the sequence of HolographicFrame objects representing each head position from which you'll render.
- Rendering in DirectX: Since a holographic swap chain has two render targets, you'll need to make some changes to the way your application renders.
- Coordinate systems in DirectX: Windows Mixed Reality learns and updates its understanding of the world as the user walks around. This provides spatial coordinate systems that applications use to reason about the user's surroundings, including spatial anchors and the user's defined spatial stage.
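Tying the topics above together, a minimal C++/WinRT sketch of creating the HolographicSpace, establishing a coordinate system, and fetching a per-camera view transform might look like this (member names are illustrative assumptions):

```cpp
#include <winrt/Windows.Graphics.Holographic.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Core.h>

using namespace winrt::Windows::Graphics::Holographic;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Core;

void AppMain::SetHolographicSpace(CoreWindow const& window)
{
    // Create the HolographicSpace for the app's core window. A Win32
    // app would use HolographicSpace::CreateForHWND instead.
    m_holographicSpace = HolographicSpace::CreateForCoreWindow(window);

    // A stationary frame of reference provides a SpatialCoordinateSystem
    // fixed in the user's surroundings to render relative to.
    SpatialLocator locator = SpatialLocator::GetDefault();
    m_referenceFrame =
        locator.CreateStationaryFrameOfReferenceAtCurrentLocation();
}

void AppMain::UpdateViewForCamera(HolographicCameraPose const& cameraPose)
{
    SpatialCoordinateSystem coordinateSystem =
        m_referenceFrame.CoordinateSystem();

    // The view transform is only available when the device can locate
    // itself in this coordinate system, so the result is optional.
    auto viewTransform = cameraPose.TryGetViewTransform(coordinateSystem);
    if (viewTransform)
    {
        // The transform holds left- and right-eye view matrices;
        // combine with cameraPose.ProjectionTransform() when rendering.
    }
}
```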
Adding mixed reality capabilities and inputs
To enable the best possible experience for users of your immersive application, you'll want to support the following key building blocks:
- Head and eye gaze in DirectX
- Hands and motion controllers in DirectX
- Voice input in DirectX
- Spatial sound in DirectX
- Spatial mapping in DirectX
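As a sketch of the first two building blocks, head gaze and hand/controller input are surfaced through the spatial input APIs roughly as follows (C++/WinRT; the function and member names are illustrative assumptions):

```cpp
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt::Windows::Perception;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Input::Spatial;

void AppMain::SetUpInput(SpatialCoordinateSystem const& coordinateSystem,
                         PerceptionTimestamp const& timestamp)
{
    // Head gaze: the pointer pose gives the head position and the
    // direction the user is looking, expressed in your coordinate system.
    SpatialPointerPose pose =
        SpatialPointerPose::TryGetAtTimestamp(coordinateSystem, timestamp);
    if (pose)
    {
        auto headPosition = pose.Head().Position();
        auto gazeDirection = pose.Head().ForwardDirection();
        // ... raycast from headPosition along gazeDirection ...
    }

    // Hands and motion controllers surface through the interaction
    // manager's source events.
    m_interactionManager = SpatialInteractionManager::GetForCurrentView();
    m_interactionManager.SourcePressed(
        [](auto&&, SpatialInteractionSourceEventArgs const& args)
        {
            // args.State() describes the hand or controller that pressed.
            auto kind = args.State().Source().Kind();
            // ... handle SpatialInteractionSourceKind::Hand or
            //     SpatialInteractionSourceKind::Controller ...
        });
}
```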
There are other key features that many immersive applications will want to use, which are also exposed to DirectX applications:
- Shared spatial anchors in DirectX
- Locatable camera in DirectX
- Keyboard, mouse, and controller input in DirectX
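As one example from the list above, a spatial anchor pins holograms to a real-world location; a minimal sketch of creating one (C++/WinRT, with sharing and persistence elided, and the function name an illustrative assumption):

```cpp
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Perception::Spatial;

void AppMain::PlaceAnchor(SpatialCoordinateSystem const& coordinateSystem)
{
    // Create an anchor at the origin of the given coordinate system. The
    // system keeps the anchor's position stable as its understanding of
    // the world improves; creation can fail, so check the result.
    SpatialAnchor anchor =
        SpatialAnchor::TryCreateRelativeTo(coordinateSystem);
    if (anchor)
    {
        // Render holograms relative to the anchor's own coordinate
        // system to keep them locked to that real-world location.
        SpatialCoordinateSystem anchorSpace = anchor.CoordinateSystem();
        // ... use anchorSpace when computing model transforms ...
    }
}
```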