Understand the DirectX app template's rendering pipeline

[ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ]

Previously, you looked at how the DirectX app template acquired a window and a way to draw into it in Work with the DirectX app template's device resources. Now, you learn how the template builds the graphics pipeline, and where you can hook into it.

You'll recall that there are two Direct3D interfaces that define the graphics pipeline: ID3D11Device, which provides a virtual representation of the GPU and its resources; and ID3D11DeviceContext, which represents the graphics processing for the pipeline. Typically, you use an instance of ID3D11Device to configure and obtain the GPU resources you need to start processing the graphics in a scene, and you use ID3D11DeviceContext to process those resources at each appropriate shader stage in the graphics pipeline. You usually call ID3D11Device methods infrequently—that is, only when you set up a scene or when the device changes. On the other hand, you'll call ID3D11DeviceContext every time you process a frame for display.

The DirectX app template creates and configures a minimal graphics pipeline, suitable for displaying a simple spinning, vertex-shaded cube. It demonstrates approximately the smallest set of resources necessary for display. As you read the info here, note the limitations of the template and where you may have to extend it to support the scene you want to render.

Most of the work is done in two files: DeviceResources.cpp in the root of the project, and Sample3DSceneRenderer.cpp in the \Content directory. This topic focuses specifically on the sample implementation provided in Sample3DSceneRenderer.cpp.

Review the sample renderer

In the template, the graphics pipeline is defined in the Sample3DSceneRenderer.h and Sample3DSceneRenderer.cpp code files. You can replace the Sample3DSceneRenderer class with your own, scene-specific code to:

  • Define constant buffers to store your uniform data.
  • Define vertex buffers to hold your object vertex data, and corresponding index buffers to enable the vertex shader to walk the triangles correctly.
  • Create texture resources and resource views.
  • Load your shader objects.
  • Update the graphics data to display each frame.
  • Draw (render) the graphics to the swap chain.

The first four processes typically use the ID3D11Device interface methods for initializing and managing graphics resources, and the last two use the ID3D11DeviceContext interface methods to manage and execute the graphics pipeline.

An instance of the Sample3DSceneRenderer class is created and managed as a member variable on the main project class. The DeviceResources instance is managed as a shared pointer across several classes, including the main project class, the App view-provider class, and Sample3DSceneRenderer. If you replace Sample3DSceneRenderer with your own class, consider declaring and assigning the DeviceResources instance as a shared pointer member as well:

std::shared_ptr<DX::DeviceResources> m_deviceResources;

Just pass the pointer into the class constructor (or other initialization method) after the DeviceResources instance is created in the Initialize method of the App class. Alternatively, pass a weak_ptr reference if you want your main class to retain sole ownership of the DeviceResources instance.
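The ownership pattern can be sketched without any Direct3D dependencies. In this sketch, DeviceResources and SceneRenderer are simplified stand-ins for the template's DX::DeviceResources and renderer classes:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for DX::DeviceResources; the real class wraps the
// Direct3D device, device context, and swap chain.
struct DeviceResources { };

// A renderer that shares ownership of the device resources with the app.
class SceneRenderer
{
public:
    explicit SceneRenderer(std::shared_ptr<DeviceResources> deviceResources) :
        m_deviceResources(std::move(deviceResources))
    {
    }

private:
    std::shared_ptr<DeviceResources> m_deviceResources;
};
```

Because the pointer is shared, the resources stay alive as long as either the app or the renderer holds a reference, which matches how the template passes DeviceResources among its classes.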

Create your own renderer

The easiest way to start to create your own renderer is to copy the declaration and layout of the Sample3DSceneRenderer class, and use a similar set of methods. Specifically, those methods are:

  • CreateDeviceDependentResources: Called whenever the scene must be initialized or restarted. Use this method to load your initial vertex data, textures, shaders, and other resources, and to construct the initial constant and vertex buffers. Typically, most of the work here is done with ID3D11Device methods, not ID3D11DeviceContext methods.
  • CreateWindowSizeDependentResources: Called whenever the window state changes, such as when resizing occurs or when orientation changes. Use this method to rebuild transform matrices, such as those for your camera.
  • ReleaseDeviceDependentResources: Called whenever the current graphics device is lost or changes. Cleans up the scene's Direct3D resources in preparation for scene reinitialization or app closure.
  • Update: Called from the Update method on the main project class. Have this method read from any game-state information that affects rendering, such as updates to object position or animation frames, plus any global game data like light levels or changes to game physics. These inputs are used to update the per-frame constant buffers and object data.
  • Render: Called from the Render method on the main project class. This method constructs the graphics pipeline, binds the buffers and resources, and invokes the shader stages for the current frame.

These methods comprise the body of behaviors for rendering a scene with Direct3D using your assets. If you create a new class similar to this one, declare it on the main project class. So this:

std::unique_ptr<Sample3DSceneRenderer> m_sceneRenderer;

becomes this:

std::unique_ptr<MyAwesomeNewSceneRenderer> m_sceneRenderer;

Again, note that this example assumes that the methods have the same signatures in your implementation. If the signatures have changed, review the App class's methods, especially the Render and Update methods, and make the changes accordingly.
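If your method signatures match, a minimal skeleton for such a renderer class might look like the following sketch. MySceneRenderer and its stub bodies are illustrative only; a real implementation would make the ID3D11Device and ID3D11DeviceContext calls described above:

```cpp
// Sketch of a scene renderer mirroring the template's method set.
// The method bodies here are placeholders that only track loading state.
class MySceneRenderer
{
public:
    void CreateDeviceDependentResources()     { m_loadingComplete = true;  } // load shaders, buffers, textures
    void CreateWindowSizeDependentResources() { /* rebuild projection matrix */ }
    void ReleaseDeviceDependentResources()    { m_loadingComplete = false; } // drop device resources
    void Update(double /*totalSeconds*/)      { /* advance animation and game state */ }
    void Render() const                       { /* bind the pipeline and draw, only if loaded */ }

    bool IsLoadingComplete() const { return m_loadingComplete; }

private:
    bool m_loadingComplete = false;
};
```

The loading flag mirrors m_loadingComplete in the template, which gates Render until the asynchronous resource creation finishes.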

Let's look over the template's scene-rendering methods in more detail.

Implement the CreateDeviceDependentResources method

CreateDeviceDependentResources consolidates all the operations for initializing the scene and its resources using ID3D11Device calls. This method assumes that the Direct3D device has just been initialized (or has been recreated) for a scene. So it recreates or reloads all scene-specific graphics resources, such as the vertex and pixel shaders, the vertex and index buffers for objects, and any other resources (for example, textures and their corresponding views).

Here's the code for CreateDeviceDependentResources.

void Sample3DSceneRenderer::CreateDeviceDependentResources()
{
    // Load shaders asynchronously.
    auto loadVSTask = DX::ReadDataAsync(L"SampleVertexShader.cso");
    auto loadPSTask = DX::ReadDataAsync(L"SamplePixelShader.cso");

    // After the vertex shader file is loaded, create the shader and input layout.
    auto createVSTask = loadVSTask.then([this](const std::vector<byte>& fileData) {
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateVertexShader(
                &fileData[0],
                fileData.size(),
                nullptr,
                &m_vertexShader
                )
            );

        static const D3D11_INPUT_ELEMENT_DESC vertexDesc [] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };

        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateInputLayout(
                vertexDesc,
                ARRAYSIZE(vertexDesc),
                &fileData[0],
                fileData.size(),
                &m_inputLayout
                )
            );
    });

    // After the pixel shader file is loaded, create the shader and constant buffer.
    auto createPSTask = loadPSTask.then([this](const std::vector<byte>& fileData) {
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreatePixelShader(
                &fileData[0],
                fileData.size(),
                nullptr,
                &m_pixelShader
                )
            );

        CD3D11_BUFFER_DESC constantBufferDesc(sizeof(ModelViewProjectionConstantBuffer) , D3D11_BIND_CONSTANT_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &constantBufferDesc,
                nullptr,
                &m_constantBuffer
                )
            );
    });

    // Once both shaders are loaded, create the mesh.
    auto createCubeTask = (createPSTask && createVSTask).then([this] () {

        // Load mesh vertices. Each vertex has a position and a color.
        static const VertexPositionColor cubeVertices[] = 
        {
            {XMFLOAT3(-0.5f, -0.5f, -0.5f), XMFLOAT3(0.0f, 0.0f, 0.0f)},
            {XMFLOAT3(-0.5f, -0.5f,  0.5f), XMFLOAT3(0.0f, 0.0f, 1.0f)},
            {XMFLOAT3(-0.5f,  0.5f, -0.5f), XMFLOAT3(0.0f, 1.0f, 0.0f)},
            {XMFLOAT3(-0.5f,  0.5f,  0.5f), XMFLOAT3(0.0f, 1.0f, 1.0f)},
            {XMFLOAT3( 0.5f, -0.5f, -0.5f), XMFLOAT3(1.0f, 0.0f, 0.0f)},
            {XMFLOAT3( 0.5f, -0.5f,  0.5f), XMFLOAT3(1.0f, 0.0f, 1.0f)},
            {XMFLOAT3( 0.5f,  0.5f, -0.5f), XMFLOAT3(1.0f, 1.0f, 0.0f)},
            {XMFLOAT3( 0.5f,  0.5f,  0.5f), XMFLOAT3(1.0f, 1.0f, 1.0f)},
        };

        D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
        vertexBufferData.pSysMem = cubeVertices;
        vertexBufferData.SysMemPitch = 0;
        vertexBufferData.SysMemSlicePitch = 0;
        CD3D11_BUFFER_DESC vertexBufferDesc(sizeof(cubeVertices), D3D11_BIND_VERTEX_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &vertexBufferDesc,
                &vertexBufferData,
                &m_vertexBuffer
                )
            );

        // Load mesh indices. Each trio of indices represents
        // a triangle to be rendered on the screen.
        // For example: 0,2,1 means that the vertices with indexes
        // 0, 2 and 1 from the vertex buffer compose the 
        // first triangle of this mesh.
        static const unsigned short cubeIndices [] =
        {
            0,2,1, // -x
            1,2,3,

            4,5,6, // +x
            5,7,6,

            0,1,5, // -y
            0,5,4,

            2,6,7, // +y
            2,7,3,

            0,4,6, // -z
            0,6,2,

            1,3,7, // +z
            1,7,5,
        };

        m_indexCount = ARRAYSIZE(cubeIndices);

        D3D11_SUBRESOURCE_DATA indexBufferData = {0};
        indexBufferData.pSysMem = cubeIndices;
        indexBufferData.SysMemPitch = 0;
        indexBufferData.SysMemSlicePitch = 0;
        CD3D11_BUFFER_DESC indexBufferDesc(sizeof(cubeIndices), D3D11_BIND_INDEX_BUFFER);
        DX::ThrowIfFailed(
            m_deviceResources->GetD3DDevice()->CreateBuffer(
                &indexBufferDesc,
                &indexBufferData,
                &m_indexBuffer
                )
            );
    });

    // Once the cube is loaded, the object is ready to be rendered.
    createCubeTask.then([this] () {
        m_loadingComplete = true;
    });
}

Any time you load resources from disk—resources like compiled shader object (CSO, or .cso) files or textures—do so asynchronously. The asynchronous tasks in the template, expressed with lambda syntax, are a good model for this, because they enable you to perform other setup tasks or display something visually interesting to the user instead of blocking with a black screen or static content.

The template does not load any meshes or textures. You must create the methods for loading the mesh and texture types that are specific to your game, and you can also call them asynchronously from this method. The template provides a very simple asynchronous file reader in Common\DirectXHelper.h. If you use it to read asset data, you must still interpret those assets in your own code.
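For example, a minimal interpreter for a hypothetical raw vertex blob might look like this sketch. The on-disk layout assumed here (three position floats followed by three color floats, matching the template's VertexPositionColor) is an illustration only; real asset formats need real parsing:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Matches the template's vertex layout: position then color.
struct VertexPositionColor
{
    float pos[3];
    float color[3];
};

// Interprets a raw byte blob (such as one returned by a reader like
// DX::ReadDataAsync) as a tightly packed array of vertices.
std::vector<VertexPositionColor> InterpretVertexData(const std::vector<unsigned char>& fileData)
{
    assert(fileData.size() % sizeof(VertexPositionColor) == 0);
    std::vector<VertexPositionColor> vertices(fileData.size() / sizeof(VertexPositionColor));
    std::memcpy(vertices.data(), fileData.data(), fileData.size());
    return vertices;
}
```

The resulting vector can then feed a D3D11_SUBRESOURCE_DATA structure for CreateBuffer, just as the cubeVertices array does in the template.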

Populate your per-scene constant buffers here too. Examples of per-scene constant buffer data include fixed lights or other static scene elements and data.
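When you define those constant buffers, remember that Direct3D 11 requires a constant buffer's ByteWidth to be a multiple of 16 bytes. A sketch of a hypothetical per-scene buffer with explicit padding (plain float arrays stand in for DirectXMath types):

```cpp
// Hypothetical per-scene constant buffer holding a fixed light.
// Direct3D 11 requires constant buffer sizes in 16-byte multiples, so the
// trailing padding keeps ID3D11Device::CreateBuffer happy.
struct SceneConstantBuffer
{
    float lightPosition[4];  // xyz position; w unused
    float lightColor[4];     // rgb color; a unused
    float ambientIntensity;
    float padding[3];        // pad the struct to a 16-byte multiple
};

static_assert(sizeof(SceneConstantBuffer) % 16 == 0,
              "Constant buffers must be sized in 16-byte multiples");
```

The template's ModelViewProjectionConstantBuffer satisfies this requirement naturally, because 4x4 float matrices are already 64 bytes each.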

Implement the CreateWindowSizeDependentResources method

CreateWindowSizeDependentResources is called from the method of the same name on your instance of the main project class every time the window size, orientation, or resolution changes. The main project class's implementation is, in turn, called from event handlers registered on the App view-provider class, which are invoked whenever the window size, orientation, or DPI changes.

In short, the view provider gets a window change event, which calls CreateWindowSizeDependentResources on the main project class instance, which then calls the CreateWindowSizeDependentResources implementation on the scene renderer class.

The primary job of this method is to make sure the visuals don't become distorted or invalid because of a change in window properties. In the template, this means updating the projection matrix with a new field of view (FOV) for the resized or reoriented window.

// Initializes view parameters when the window size changes.
void Sample3DSceneRenderer::CreateWindowSizeDependentResources()
{
    Size outputSize = m_deviceResources->GetOutputSize();
    float aspectRatio = outputSize.Width / outputSize.Height;
    float fovAngleY = 70.0f * XM_PI / 180.0f;

    // This is a simple example of change that can be made when the app is in
    // a differently oriented app view (window), like portrait orientation.
    if (aspectRatio < 1.0f)
    {
        fovAngleY *= 2.0f;
    }

    // Note that the OrientationTransform3D matrix is post-multiplied here
    // in order to correctly orient the scene to match the display orientation.
    // This post-multiplication step is required for any draw calls that are
    // made to the swap-chain render target. For draw calls to other targets,
    // do not apply this transform.

    // This sample uses a right-handed coordinate system using row-major matrices.
    XMMATRIX perspectiveMatrix = XMMatrixPerspectiveFovRH(
        fovAngleY,
        aspectRatio,
        0.01f,
        100.0f
        );

    XMFLOAT4X4 orientation = m_deviceResources->GetOrientationTransform3D();

    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);

    XMStoreFloat4x4(
        &m_constantBufferData.projection,
        XMMatrixTranspose(perspectiveMatrix * orientationMatrix)
        );

    // Eye is at (0,0.7,1.5), looking at point (0,-0.1,0) with the up-vector along the y-axis.
    static const XMVECTORF32 eye = { 0.0f, 0.7f, 1.5f, 0.0f };
    static const XMVECTORF32 at = { 0.0f, -0.1f, 0.0f, 0.0f };
    static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };

    XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));
}

If your scene has a specific layout of 3D components that depends on the aspect ratio, this is the place to rearrange them to match that aspect ratio. You may want to change the configuration of post-processing behavior here also.

Implement the Update method

The Update method is called from the main project class's method of the same name. In the template, it has a simple purpose: rotate the cube based on the current timer value. In a real game scene, this method contains a lot more code, checking game state and updating the per-frame (or other dynamic) constant buffers, geometry buffers, and other in-memory assets accordingly.

// Called once per frame, rotates the cube and calculates the model and view matrices.
void Sample3DSceneRenderer::Update(DX::StepTimer const& timer)
{
    if (!m_tracking)
    {
        // Convert degrees to radians, then convert seconds to rotation angle
        float radiansPerSecond = XMConvertToRadians(m_degreesPerSecond);
        double totalRotation = timer.GetTotalSeconds() * radiansPerSecond;
        float radians = static_cast<float>(fmod(totalRotation, XM_2PI));

        Rotate(radians);
    }
}

In this case, Rotate updates the constant buffer with a new transformation matrix for the cube. The matrix will be multiplied per-vertex during the vertex shader stage. Since this method is called with every frame, this is a good place to aggregate any methods that update your dynamic constant and vertex buffers, or to perform any other operations that prepare the objects in the scene for transformation by the graphics pipeline.
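The angle calculation in Update can be isolated as a portable helper. This sketch replaces XMConvertToRadians and XM_2PI with plain equivalents:

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Converts a rotation rate in degrees per second into the current rotation
// angle in radians, wrapped into [0, 2*pi), as the template's Update does.
float ComputeRotationRadians(float degreesPerSecond, double totalSeconds)
{
    double radiansPerSecond = degreesPerSecond * kPi / 180.0;
    double totalRotation = totalSeconds * radiansPerSecond;
    return static_cast<float>(std::fmod(totalRotation, 2.0 * kPi));
}
```

Wrapping with fmod keeps the angle small even after long play sessions, which avoids losing float precision as the total elapsed time grows.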

Implement the Render method

Like Update, the Render method is also called from the main project class's method of the same name. This is the method where the graphics pipeline is constructed and processed for the frame using methods on the ID3D11DeviceContext instance. This culminates in a final call to ID3D11DeviceContext::DrawIndexed. It's important to understand that this call (or other similar Draw* calls defined on ID3D11DeviceContext) actually executes the pipeline. Specifically, this is when Direct3D communicates with the GPU to set drawing state, run each pipeline stage, and write the pixel results into the render-target buffer resource for display by the swap chain. Because communication between the CPU and GPU incurs overhead, combine multiple draw calls into a single one if you can, especially if your scene has a lot of rendered objects.

// Renders one frame using the vertex and pixel shaders.
void Sample3DSceneRenderer::Render()
{
    // Loading is asynchronous. Draw geometry only after it's loaded.
    if (!m_loadingComplete)
    {
        return;
    }

    auto context = m_deviceResources->GetD3DDeviceContext();

    // Set render targets to the screen.
    ID3D11RenderTargetView *const targets[1] = { m_deviceResources->GetBackBufferRenderTargetView() };
    context->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

    // Prepare the constant buffer to send it to the graphics device.
    context->UpdateSubresource(
        m_constantBuffer.Get(),
        0,
        NULL,
        &m_constantBufferData,
        0,
        0
        );

    // Each vertex is one instance of the VertexPositionColor struct.
    UINT stride = sizeof(VertexPositionColor);
    UINT offset = 0;
    context->IASetVertexBuffers(
        0,
        1,
        m_vertexBuffer.GetAddressOf(),
        &stride,
        &offset
        );

    context->IASetIndexBuffer(
        m_indexBuffer.Get(),
        DXGI_FORMAT_R16_UINT, // Each index is one 16-bit unsigned integer (short).
        0
        );

    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    context->IASetInputLayout(m_inputLayout.Get());

    // Attach our vertex shader.
    context->VSSetShader(
        m_vertexShader.Get(),
        nullptr,
        0
        );

    // Send the constant buffer to the graphics device.
    context->VSSetConstantBuffers(
        0,
        1,
        m_constantBuffer.GetAddressOf()
        );

    // Attach our pixel shader.
    context->PSSetShader(
        m_pixelShader.Get(),
        nullptr,
        0
        );

    // Draw the objects.
    context->DrawIndexed(
        m_indexCount,
        0,
        0
        );
}

It's good practice to set the various graphics pipeline stages on the context in order. Typically, the order is:

  • Set the updated constant buffer resources that were refreshed during the call to Update.
  • Input assembly (IA): This is where we attach the vertex and index buffers that define the scene geometry. You need to attach each vertex and index buffer for each object in the scene. Because the template has just the cube, it's pretty simple.
  • Vertex shader (VS): Attach any vertex shaders that will transform the data in the vertex buffers.
  • Pixel shader (PS): Attach any pixel shaders that will perform per-pixel operations in the rasterized scene.
  • Output merger (OM): This is the stage where pixels are blended, after the shaders are finished. Minimally, attach your depth-stencil buffer and render targets. You may have multiple stencils and targets if you have additional vertex and pixel shaders that generate textures such as shadow maps, height maps, or other sampling techniques.
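As noted earlier, combining draw calls reduces CPU-GPU communication overhead. One common approach is to merge several meshes into shared vertex and index buffers, offsetting the appended indices, so that a single DrawIndexed call covers them all. A minimal sketch with a placeholder Vertex type (the names here are illustrative, not from the template):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Placeholder for whatever layout your vertex buffer uses.
struct Vertex { float position[3]; };

// Appends one mesh's vertex and index data to a combined pair of arrays,
// offsetting the new indices so they still point at the right vertices.
void AppendMesh(std::vector<Vertex>& vertices, std::vector<uint16_t>& indices,
                const std::vector<Vertex>& meshVertices,
                const std::vector<uint16_t>& meshIndices)
{
    const uint16_t baseVertex = static_cast<uint16_t>(vertices.size());
    vertices.insert(vertices.end(), meshVertices.begin(), meshVertices.end());
    for (uint16_t index : meshIndices)
    {
        indices.push_back(static_cast<uint16_t>(index + baseVertex));
    }
}
```

After batching, you would create one vertex buffer and one index buffer from the combined arrays and issue a single DrawIndexed with the total index count, rather than one draw call per mesh. Note that 16-bit indices (DXGI_FORMAT_R16_UINT, as in the template) cap the combined buffer at 65,536 vertices.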

Next, in the final section (Work with shaders and shader resources), we'll look at the shaders and discuss how Direct3D executes them.

Use the Visual Studio 2013 DirectX templates

Work with the DirectX app template's device resources

Work with shaders and shader resources