Rendering framework I: Intro to rendering

Note

This topic is part of the Create a simple Universal Windows Platform (UWP) game with DirectX tutorial series. The topic at that link sets the context for the series.

So far we've covered how to structure a Universal Windows Platform (UWP) game, and how to define a state machine to handle the flow of the game. Now it's time to learn how to develop the rendering framework. Let's look at how the sample game renders the game scene using Direct3D 11.

Direct3D 11 contains a set of APIs that provide access to the advanced features of high-performance graphics hardware, and that can be used to create 3D graphics for graphics-intensive applications such as games.

Rendering game graphics on-screen basically means rendering a sequence of frames on-screen. In each frame, you have to render the objects that are visible in the scene, based on the view.

In order to render a frame, you have to pass the required scene information to the hardware so that it can be displayed on the screen. If you want anything displayed on screen, you need to start rendering as soon as the game starts running.

Objectives

To set up a basic rendering framework to display the graphics output for a UWP DirectX game. You can loosely break that down into these three steps.

  1. Establish a connection to the graphics interface.
  2. Create the resources needed to draw the graphics.
  3. Display the graphics by rendering the frame.

This topic explains how graphics are rendered, covering steps 1 and 3.

Rendering framework II: Game rendering covers step 2: how to set up the rendering framework, and how data is prepared before rendering can happen.

Get started

It's a good idea to familiarize yourself with basic graphics and rendering concepts. If you're new to Direct3D and rendering, see Terms and concepts for a brief description of the graphics and rendering terms used in this topic.

For this game, the GameRenderer class represents the renderer for this sample game. It's responsible for creating and maintaining all the Direct3D 11 and Direct2D objects used to generate the game visuals. It also maintains a reference to the Simple3DGame object used to retrieve the list of objects to render, as well as the status of the game for the heads-up display (HUD).
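
The sketch below is a hedged outline of how such a renderer class might be declared. The member names match the ones referenced later in this topic (m_deviceResources, m_game, m_vertexLayout, m_samplerLinear, and the constant buffers); the base classes, Direct2D members, and method list of the real GameRenderer in the sample will differ.

// A hedged sketch of a GameRenderer-style class declaration; not the sample's actual header.
#include <memory>
#include <d3d11.h>
#include <winrt/base.h>

namespace DX { class DeviceResources; }   // defined in DeviceResources.h in the sample
class Simple3DGame;                       // the sample's game-state class

class GameRenderer
{
public:
    explicit GameRenderer(std::shared_ptr<DX::DeviceResources> const& deviceResources);
    void CreateDeviceDependentResources();
    void Render();

private:
    std::shared_ptr<DX::DeviceResources> m_deviceResources;
    std::shared_ptr<Simple3DGame>        m_game;   // source of the render-object list and HUD state

    winrt::com_ptr<ID3D11InputLayout>    m_vertexLayout;
    winrt::com_ptr<ID3D11SamplerState>   m_samplerLinear;
    winrt::com_ptr<ID3D11Buffer>         m_constantBufferNeverChanges;
    winrt::com_ptr<ID3D11Buffer>         m_constantBufferChangeOnResize;
    winrt::com_ptr<ID3D11Buffer>         m_constantBufferChangesEveryFrame;
    winrt::com_ptr<ID3D11Buffer>         m_constantBufferChangesEveryPrim;

    bool m_gameResourcesLoaded{ false };
    bool m_levelResourcesLoaded{ false };
};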

In this part of the tutorial, we'll focus on rendering the 3D objects in the game.

Establish a connection to the graphics interface

For info about accessing the hardware for rendering, see the Define the game's UWP app framework topic.

The App::Initialize method

The std::make_shared function, as shown below, is used to create a shared_ptr to DX::DeviceResources, which also provides access to the device.

In Direct3D 11, a device is used to allocate and destroy objects, render primitives, and communicate with the graphics card through the graphics driver.

void Initialize(CoreApplicationView const& applicationView)
{
    ...

    // At this point we have access to the device. 
    // We can create the device-dependent resources.
    m_deviceResources = std::make_shared<DX::DeviceResources>();
}

Display the graphics by rendering the frame

The game scene needs to render when the game is launched. The instructions for rendering start in the GameMain::Run method, as shown below.

The simple flow is this.

  1. Update
  2. Render
  3. Present

GameMain::Run method

void GameMain::Run()
{
    while (!m_windowClosed)
    {
        if (m_visible) // if the window is visible
        {
            switch (m_updateState)
            {
            ...
            default:
                CoreWindow::GetForCurrentThread().Dispatcher().ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
                Update();
                m_renderer->Render();
                m_deviceResources->Present();
                m_renderNeeded = false;
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread().Dispatcher().ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
    m_game->OnSuspending();  // Exiting due to window close, so save state.
}

Update

See the Game flow management topic for more information about how game states are updated in the GameMain::Update method.

Render

Rendering is implemented by calling the GameRenderer::Render method from GameMain::Run.

If stereo rendering is enabled, then there are two rendering passes: one for the left eye and one for the right eye. In each rendering pass, we bind the render target and the depth-stencil view to the device. We also clear the depth-stencil view afterward.

Note

Stereo rendering can be achieved using other methods, such as single-pass stereo using vertex instancing or geometry shaders. The two-rendering-passes method is a slower but more convenient way to achieve stereo rendering.

Once the game is running and resources are loaded, we update the projection matrix once per rendering pass, because the view of the scene is slightly different for each eye. Next, we set up the graphics rendering pipeline.

Note

See Create and load DirectX graphic resources for more information on how resources are loaded.

In this sample game, the renderer is designed to use a standard vertex layout across all objects. This simplifies the shader design, and allows for easy changes between shaders, independent of the objects' geometry.

GameRenderer::Render method

We set the Direct3D context to use an input vertex layout. Input-layout objects describe how vertex buffer data is streamed into the rendering pipeline.

Next, we set the Direct3D context to use the constant buffers defined earlier, which are used by the vertex shader pipeline stage and the pixel shader pipeline stage.

Note

See Rendering framework II: Game rendering for more information about the definition of the constant buffers.

Because the same input layout and set of constant buffers is used for all shaders that are in the pipeline, they're set up once per frame.

void GameRenderer::Render()
{
    bool stereoEnabled{ m_deviceResources->GetStereoState() };

    auto d3dContext{ m_deviceResources->GetD3DDeviceContext() };
    auto d2dContext{ m_deviceResources->GetD2DDeviceContext() };

    int renderingPasses = 1;
    if (stereoEnabled)
    {
        renderingPasses = 2;
    }

    for (int i = 0; i < renderingPasses; i++)
    {
        // Iterate through the number of rendering passes to be completed.
        // 2 rendering passes if stereo is enabled.
        if (i > 0)
        {
            // Doing the Right Eye View.
            ID3D11RenderTargetView* const targets[1] = { m_deviceResources->GetBackBufferRenderTargetViewRight() };

            // Resets render targets to the screen.
            // OMSetRenderTargets binds 2 things to the device.
            // 1. Binds one render target atomically to the device.
            // 2. Binds the depth-stencil view, as returned by the GetDepthStencilView method, to the device.
            // For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-omsetrendertargets

            d3dContext->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

            // Clears the depth stencil view.
            // A depth stencil view contains the format and buffer to hold depth and stencil info.
            // For more info about depth stencil view, go to: 
            // https://docs.microsoft.com/windows/uwp/graphics-concepts/depth-stencil-view--dsv-
            // A depth buffer is used to store depth information to control which areas of 
            // polygons are rendered rather than hidden from view. To learn more about a depth buffer,
            // go to: https://docs.microsoft.com/windows/uwp/graphics-concepts/depth-buffers
            // A stencil buffer is used to mask pixels in an image, to produce special effects. 
            // The mask determines whether a pixel is drawn or not,
            // by setting the bit to a 1 or 0. To learn more about a stencil buffer,
            // go to: https://docs.microsoft.com/windows/uwp/graphics-concepts/stencil-buffers

            d3dContext->ClearDepthStencilView(m_deviceResources->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);

            // Direct2D -- discussed later
            d2dContext->SetTarget(m_deviceResources->GetD2DTargetBitmapRight());
        }
        else
        {
            // Doing the Mono or Left Eye View.
            // As compared to the right eye:
            // m_deviceResources->GetBackBufferRenderTargetView instead of GetBackBufferRenderTargetViewRight
            ID3D11RenderTargetView* const targets[1] = { m_deviceResources->GetBackBufferRenderTargetView() };

            // Same as the Right Eye View.
            d3dContext->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());
            d3dContext->ClearDepthStencilView(m_deviceResources->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);

            // d2d -- Discussed later under Adding UI
            d2dContext->SetTarget(m_deviceResources->GetD2DTargetBitmap());
        }

        const float clearColor[4] = { 0.5f, 0.5f, 0.8f, 1.0f };

        // Only need to clear the background when not rendering the full 3D scene since
        // the 3D world is a fully enclosed box and the dynamics prevents the camera from
        // moving outside this space.
        if (i > 0)
        {
            // Doing the Right Eye View.
            d3dContext->ClearRenderTargetView(m_deviceResources->GetBackBufferRenderTargetViewRight(), clearColor);
        }
        else
        {
            // Doing the Mono or Left Eye View.
            d3dContext->ClearRenderTargetView(m_deviceResources->GetBackBufferRenderTargetView(), clearColor);
        }

        // Render the scene objects
        if (m_game != nullptr && m_gameResourcesLoaded && m_levelResourcesLoaded)
        {
            // This section is only used after the game state has been initialized and all device
            // resources needed for the game have been created and associated with the game objects.
            if (stereoEnabled)
            {
                // When doing stereo, it is necessary to update the projection matrix once per rendering pass.

                auto orientation = m_deviceResources->GetOrientationTransform3D();

                ConstantBufferChangeOnResize changesOnResize;
                // Apply either a left or right eye projection, which is an offset from the middle
                XMStoreFloat4x4(
                    &changesOnResize.projection,
                    XMMatrixMultiply(
                        XMMatrixTranspose(
                            i == 0 ?
                            m_game->GameCamera().LeftEyeProjection() :
                            m_game->GameCamera().RightEyeProjection()
                            ),
                        XMMatrixTranspose(XMLoadFloat4x4(&orientation))
                        )
                    );

                d3dContext->UpdateSubresource(
                    m_constantBufferChangeOnResize.get(),
                    0,
                    nullptr,
                    &changesOnResize,
                    0,
                    0
                    );
            }

            // Update variables that change once per frame.
            ConstantBufferChangesEveryFrame constantBufferChangesEveryFrameValue;
            XMStoreFloat4x4(
                &constantBufferChangesEveryFrameValue.view,
                XMMatrixTranspose(m_game->GameCamera().View())
                );
            d3dContext->UpdateSubresource(
                m_constantBufferChangesEveryFrame.get(),
                0,
                nullptr,
                &constantBufferChangesEveryFrameValue,
                0,
                0
                );

            // Set up the graphics pipeline. This sample uses the same InputLayout and set of
            // constant buffers for all shaders, so they only need to be set once per frame.
            // For more info about the graphics or rendering pipeline, see
            // https://docs.microsoft.com/windows/win32/direct3d11/overviews-direct3d-11-graphics-pipeline

            // IASetInputLayout binds an input-layout object to the input-assembler (IA) stage. 
            // Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage.
            // Set up the Direct3D context to use this vertex layout. For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetinputlayout
            d3dContext->IASetInputLayout(m_vertexLayout.get());

            // VSSetConstantBuffers sets the constant buffers used by the vertex shader pipeline stage.
            // Set up the Direct3D context to use these constant buffers. For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-vssetconstantbuffers

            ID3D11Buffer* constantBufferNeverChanges{ m_constantBufferNeverChanges.get() };
            d3dContext->VSSetConstantBuffers(0, 1, &constantBufferNeverChanges);
            ID3D11Buffer* constantBufferChangeOnResize{ m_constantBufferChangeOnResize.get() };
            d3dContext->VSSetConstantBuffers(1, 1, &constantBufferChangeOnResize);
            ID3D11Buffer* constantBufferChangesEveryFrame{ m_constantBufferChangesEveryFrame.get() };
            d3dContext->VSSetConstantBuffers(2, 1, &constantBufferChangesEveryFrame);
            ID3D11Buffer* constantBufferChangesEveryPrim{ m_constantBufferChangesEveryPrim.get() };
            d3dContext->VSSetConstantBuffers(3, 1, &constantBufferChangesEveryPrim);

            // Sets the constant buffers used by the pixel shader pipeline stage. 
            // For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-pssetconstantbuffers

            d3dContext->PSSetConstantBuffers(2, 1, &constantBufferChangesEveryFrame);
            d3dContext->PSSetConstantBuffers(3, 1, &constantBufferChangesEveryPrim);
            ID3D11SamplerState* samplerLinear{ m_samplerLinear.get() };
            d3dContext->PSSetSamplers(0, 1, &samplerLinear);

            for (auto&& object : m_game->RenderObjects())
            {
                // The 3D object render method handles the rendering.
                // For more info, see Primitive rendering below.
                object->Render(d3dContext, m_constantBufferChangesEveryPrim.get());
            }
        }

        // Start of 2D rendering
        ...
    }
}

Primitive rendering

When rendering the scene, you loop through all the objects that need to be rendered. The steps below are repeated for each object (primitive).

  • Update the constant buffer (m_constantBufferChangesEveryPrim) with the model's world transformation matrix and material information.
  • m_constantBufferChangesEveryPrim contains parameters for each object. It includes the object-to-world transformation matrix, as well as material properties such as color and specular exponent for lighting calculations.
  • Set the Direct3D context to use the input vertex layout for the mesh object data to be streamed into the input-assembler (IA) stage of the rendering pipeline.
  • Set the Direct3D context to use an index buffer in the IA stage. Provide the primitive info: type, data order.
  • Submit a draw call to draw the indexed, non-instanced primitive. The GameObject::Render method updates the primitive constant buffer with the data specific to a given primitive. This results in a DrawIndexed call on the context to draw the geometry of each primitive. Specifically, this draw call queues commands and data to the graphics processing unit (GPU), as parameterized by the constant buffer data. Each draw call executes the vertex shader one time per vertex, and then the pixel shader one time for every pixel of each triangle in the primitive. The textures are part of the state that the pixel shader uses to do the rendering.

Here are the reasons for using multiple constant buffers.

  • The game uses multiple constant buffers, but it only needs to update these buffers one time per primitive. As mentioned earlier, constant buffers are like inputs to the shaders that run for each primitive. Some data is static (m_constantBufferNeverChanges); some data is constant over the frame (m_constantBufferChangesEveryFrame), such as the position of the camera; and some data is specific to the primitive, such as its color and textures (m_constantBufferChangesEveryPrim).
  • The game renderer separates these inputs into different constant buffers to optimize the memory bandwidth that the CPU and GPU use. This approach also helps to minimize the amount of data that the GPU needs to keep track of. The GPU has a big queue of commands, and each time the game calls Draw, that command is queued along with the data associated with it. When the game updates the primitive constant buffer and issues the next Draw command, the graphics driver adds that next command and the associated data to the queue. If the game draws 100 primitives, it could potentially have 100 copies of the constant buffer data in the queue. To minimize the amount of data the game is sending to the GPU, the game uses a separate primitive constant buffer that only contains the updates for each primitive. A sketch of how such per-frequency constant buffers might be created is shown after this list.
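
The following sketch illustrates, under assumptions, how these per-frequency constant buffers could be declared and created with ID3D11Device::CreateBuffer. The field names (projection, view, worldMatrix) match the ones used in the code in this topic; any padding, additional material fields, and the CreateConstantBuffer helper are hypothetical, not the sample's actual code.

#include <d3d11.h>
#include <DirectXMath.h>
#include <winrt/base.h>

struct ConstantBufferChangeOnResize
{
    DirectX::XMFLOAT4X4 projection;
};

struct ConstantBufferChangesEveryFrame
{
    DirectX::XMFLOAT4X4 view;
};

struct ConstantBufferChangesEveryPrim
{
    DirectX::XMFLOAT4X4 worldMatrix;
    // Material properties (color, specular exponent, and so on) would also live here in the sample.
};

// Create one GPU buffer per update frequency. Constant buffer sizes must be a
// multiple of 16 bytes; a 4x4 float matrix (64 bytes) already satisfies that.
winrt::com_ptr<ID3D11Buffer> CreateConstantBuffer(ID3D11Device* device, UINT byteWidth)
{
    CD3D11_BUFFER_DESC desc(byteWidth, D3D11_BIND_CONSTANT_BUFFER);
    winrt::com_ptr<ID3D11Buffer> buffer;
    winrt::check_hresult(device->CreateBuffer(&desc, nullptr, buffer.put()));
    return buffer;
}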

GameObject::Render method

void GameObject::Render(
    _In_ ID3D11DeviceContext* context,
    _In_ ID3D11Buffer* primitiveConstantBuffer
    )
{
    if (!m_active || (m_mesh == nullptr) || (m_normalMaterial == nullptr))
    {
        return;
    }

    ConstantBufferChangesEveryPrim constantBuffer;

    // Put the model matrix info into a constant buffer, in world matrix.
    XMStoreFloat4x4(
        &constantBuffer.worldMatrix,
        XMMatrixTranspose(ModelMatrix())
        );

    // Check to see which material to use on the object.
    // If a collision (a hit) is detected, GameObject::Render checks the current context, which 
    // indicates whether the target has been hit by an ammo sphere. If the target has been hit, 
    // this method applies a hit material, which reverses the colors of the rings of the target to 
    // indicate a successful hit to the player. Otherwise, it applies the default material 
    // with the same method. In both cases, it sets the material by calling Material::RenderSetup, 
    // which sets the appropriate constants into the constant buffer. Then, it calls 
    // ID3D11DeviceContext::PSSetShaderResources to set the corresponding texture resource for the 
    // pixel shader, and ID3D11DeviceContext::VSSetShader and ID3D11DeviceContext::PSSetShader 
    // to set the vertex shader and pixel shader objects themselves, respectively.

    if (m_hit && m_hitMaterial != nullptr)
    {
        m_hitMaterial->RenderSetup(context, &constantBuffer);
    }
    else
    {
        m_normalMaterial->RenderSetup(context, &constantBuffer);
    }

    // Update the primitive constant buffer with the object model's info.
    context->UpdateSubresource(primitiveConstantBuffer, 0, nullptr, &constantBuffer, 0, 0);

    // Render the mesh.
    // See MeshObject::Render method below.
    m_mesh->Render(context);
}

MeshObject::Render method

void MeshObject::Render(_In_ ID3D11DeviceContext* context)
{
    // PNTVertex is a struct. stride provides us the size required for all the mesh data
    // struct PNTVertex
    //{
    //  DirectX::XMFLOAT3 position;
    //  DirectX::XMFLOAT3 normal;
    //  DirectX::XMFLOAT2 textureCoordinate;
    //};
    uint32_t stride{ sizeof(PNTVertex) };
    uint32_t offset{ 0 };

    // Similar to the main render loop.
    // Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage.
    ID3D11Buffer* vertexBuffer{ m_vertexBuffer.get() };
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);

    // IASetIndexBuffer binds an index buffer to the input-assembler stage.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetindexbuffer.
    context->IASetIndexBuffer(m_indexBuffer.get(), DXGI_FORMAT_R16_UINT, 0);

    // Binds information about the primitive type, and data order that describes input data for the input assembler stage.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetprimitivetopology.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // Draw indexed, non-instanced primitives. A draw API submits work to the rendering pipeline.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-drawindexed.
    context->DrawIndexed(m_indexCount, 0, 0);
}

DeviceResources::Present method

We call the DeviceResources::Present method to display the contents we've placed in the buffers.

We use the term swap chain for a collection of buffers that are used for displaying frames to the user. Each time an application presents a new frame for display, the first buffer in the swap chain takes the place of the displayed buffer. This process is called swapping or flipping. For more information, see Swap chains.

  • The IDXGISwapChain1 interface's Present method instructs DXGI to block until vertical synchronization (VSync) takes place, putting the application to sleep until the next VSync. This ensures that you don't waste any cycles rendering frames that will never be displayed to the screen.
  • The ID3D11DeviceContext3 interface's DiscardView method discards the contents of the render target. This is a valid operation only when the existing contents will be entirely overwritten. If dirty or scroll rects are used, then this call should be removed.
  • Using the same DiscardView method, discard the contents of the depth-stencil.
  • The HandleDeviceLost method is used to manage the scenario of the device being removed. If the device was removed, either by a disconnection or a driver upgrade, then you must recreate all device resources. For more information, see Handle device removed scenarios in Direct3D 11.

Tip

To achieve a smooth frame rate, you must ensure that the amount of work to render a frame fits in the time between VSyncs.

// Present the contents of the swap chain to the screen.
void DX::DeviceResources::Present()
{
    // The first argument instructs DXGI to block until VSync, putting the application
    // to sleep until the next VSync. This ensures we don't waste any cycles rendering
    // frames that will never be displayed to the screen.
    HRESULT hr = m_swapChain->Present(1, 0);

    // Discard the contents of the render target.
    // This is a valid operation only when the existing contents will be entirely
    // overwritten. If dirty or scroll rects are used, this call should be removed.
    m_d3dContext->DiscardView(m_d3dRenderTargetView.get());

    // Discard the contents of the depth stencil.
    m_d3dContext->DiscardView(m_d3dDepthStencilView.get());

    // If the device was removed either by a disconnection or a driver upgrade, we 
    // must recreate all device resources.
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        HandleDeviceLost();
    }
    else
    {
        winrt::check_hresult(hr);
    }
}

Next steps

This topic explained how graphics are rendered on the display, and it provides a short description of some of the rendering terms used (below). Learn more about rendering in the Rendering framework II: Game rendering topic, and learn how to prepare the data needed before rendering.

Terms and concepts

Simple game scene

A simple game scene is made up of a few objects with several light sources.

An object's shape is defined by a set of X, Y, Z coordinates in space. The actual render location in the game world can be determined by applying a transformation matrix to the positional X, Y, Z coordinates. An object may also have a set of texture coordinates (U and V) which specify how a material is applied to it. This defines the surface properties of the object, and gives you the ability to see whether an object has a rough surface (like a tennis ball), or a smooth glossy surface (like a bowling ball).

Scene and object info is used by the rendering framework to recreate the scene frame by frame, making it come alive on your display monitor.

Rendering pipeline

The rendering pipeline is the process by which 3D scene info is translated into an image displayed on screen. In Direct3D 11, this pipeline is programmable. You can adapt the stages to support your rendering needs. Stages that feature common shader cores are programmable by using the HLSL programming language. It's also known as the graphics rendering pipeline, or simply the pipeline.

To help you create this pipeline, you need to be familiar with these details.

For more information, see Understand the Direct3D 11 rendering pipeline and Graphics pipeline.

HLSL

HLSL is the high-level shader language for DirectX. Using HLSL, you can create C-like programmable shaders for the Direct3D pipeline. For more information, see HLSL.

Shaders

A shader can be thought of as a set of instructions that determine how the surface of an object appears when rendered. Shaders that are programmed using HLSL are known as HLSL shaders. Source code files for HLSL shaders have the .hlsl file extension. These shaders can be compiled at build time or at runtime, and set at runtime into the appropriate pipeline stage. A compiled shader object has a .cso file extension.
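
As a minimal sketch (not the sample's loader, which is discussed under The BasicLoader class below), the following shows one way a compiled .cso file could be read from disk and turned into a vertex shader object. The file path is hypothetical.

#include <d3d11.h>
#include <filesystem>
#include <fstream>
#include <iterator>
#include <vector>
#include <winrt/base.h>

winrt::com_ptr<ID3D11VertexShader> LoadVertexShader(
    ID3D11Device* device, std::filesystem::path const& csoPath)
{
    // Read the compiled bytecode produced by the HLSL compiler at build time.
    std::ifstream file(csoPath, std::ios::binary);
    std::vector<char> bytecode((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());

    winrt::com_ptr<ID3D11VertexShader> vertexShader;
    winrt::check_hresult(device->CreateVertexShader(
        bytecode.data(), bytecode.size(), nullptr, vertexShader.put()));
    return vertexShader;
}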

Direct3D 9 shaders can be designed using shader model 1, shader model 2, and shader model 3; Direct3D 10 shaders can be designed only on shader model 4. Direct3D 11 shaders can be designed on shader model 5. Direct3D 11.3 and Direct3D 12 shaders can be designed on shader model 5.1, and Direct3D 12 shaders can also be designed on shader model 6.

Vertex shaders and pixel shaders

Data enters the graphics pipeline as a stream of primitives, and is processed by various shaders, such as vertex shaders and pixel shaders.

A vertex shader processes vertices, typically performing operations such as transformations, skinning, and lighting. A pixel shader enables rich shading techniques such as per-pixel lighting and post-processing. It combines constant variables, texture data, interpolated per-vertex values, and other data to produce per-pixel outputs.

Shader stages

A sequence of these various shaders defined to process this stream of primitives is known as the shader stages in a rendering pipeline. The actual stages depend on the version of Direct3D, but usually include the vertex, geometry, and pixel stages. There are also other stages, such as the hull and domain shaders for tessellation, and the compute shader. All these stages are completely programmable using HLSL. For more information, see Graphics pipeline.

Various shader file formats

Here are the shader code file extensions.

  • A file with the .hlsl extension holds HLSL source code.
  • A file with the .cso extension holds a compiled shader object.
  • A file with the .h extension is a header file, but in a shader code context, this header file defines a byte array that holds shader data.
  • A file with the .hlsli extension contains the format of the constant buffers. In the sample game, the file is Shaders > ConstantBuffers.hlsli.

Note

You embed a shader either by loading a .cso file at runtime, or by adding a .h file to your executable code. But you wouldn't use both for the same shader.

Deeper understanding of DirectX

Direct3D 11 is a set of APIs that help us create graphics for graphics-intensive applications such as games, where we want a good graphics card to process intensive computation. This section briefly explains the Direct3D 11 graphics programming concepts: resource, subresource, device, and device context.

Resource

You can think of resources (also known as device resources) as info about how to render an object, such as texture, position, or color. Resources provide data to the pipeline, and define what is rendered during your scene. Resources can be loaded from your game media, or created dynamically at run time.

A resource is, in fact, an area in memory that can be accessed by the Direct3D pipeline. In order for the pipeline to access memory efficiently, data that is provided to the pipeline (such as input geometry, shader resources, and textures) must be stored in a resource. There are two types of resources from which all Direct3D resources derive: a buffer or a texture. Up to 128 resources can be active for each pipeline stage. For more information, see Resources.

Subresource

The term subresource refers to a subset of a resource. Direct3D can reference an entire resource, or it can reference subsets of a resource. For more information, see Subresource.

Depth-stencil

A depth-stencil resource contains the format and buffer to hold depth and stencil information. It is created using a texture resource. For more information on how to create a depth-stencil resource, see Configuring Depth-Stencil Functionality. We access the depth-stencil resource through the depth-stencil view implemented using the ID3D11DepthStencilView interface.

Depth info tells us which areas of polygons are behind others, so that we can determine which are hidden. Stencil info tells us which pixels are masked. It can be used to produce special effects, since it determines whether a pixel is drawn or not by setting the bit to a 1 or 0.

For more information, see Depth-stencil view, depth buffer, and stencil buffer.
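
Below is a minimal sketch, under assumptions, of creating a depth-stencil texture and its view. The width, height, and format are illustrative; in the sample, DX::DeviceResources sets these from the swap chain's dimensions, and the device variable is assumed to be an already-created ID3D11Device.

// Hypothetical dimensions and format; match these to your render target in practice.
CD3D11_TEXTURE2D_DESC depthStencilDesc(
    DXGI_FORMAT_D24_UNORM_S8_UINT,   // 24-bit depth, 8-bit stencil
    1280, 720,                       // width and height of the render target
    1,                               // one texture in the array
    1,                               // one mipmap level
    D3D11_BIND_DEPTH_STENCIL);

winrt::com_ptr<ID3D11Texture2D> depthStencil;
winrt::check_hresult(device->CreateTexture2D(&depthStencilDesc, nullptr, depthStencil.put()));

CD3D11_DEPTH_STENCIL_VIEW_DESC viewDesc(D3D11_DSV_DIMENSION_TEXTURE2D);
winrt::com_ptr<ID3D11DepthStencilView> depthStencilView;
winrt::check_hresult(device->CreateDepthStencilView(
    depthStencil.get(), &viewDesc, depthStencilView.put()));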

Render target

A render target is a resource that we can write to at the end of a render pass. It is commonly created using the ID3D11Device::CreateRenderTargetView method, using the swap chain back buffer (which is also a resource) as the input parameter.

Each render target should also have a corresponding depth-stencil view, because when we use OMSetRenderTargets to set the render target before using it, a depth-stencil view is required as well. We access the render target resource through the render target view implemented using the ID3D11RenderTargetView interface.
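
A minimal sketch of this, assuming an already-created swap chain (swapChain), device, immediate context, and the depth-stencil view from the previous sketch, might look like the following.

// Retrieve the swap chain's back buffer and create a render target view for it.
winrt::com_ptr<ID3D11Texture2D> backBuffer;
winrt::check_hresult(swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), backBuffer.put_void()));

winrt::com_ptr<ID3D11RenderTargetView> renderTargetView;
winrt::check_hresult(device->CreateRenderTargetView(backBuffer.get(), nullptr, renderTargetView.put()));

// Bind the render target together with its depth-stencil view, as GameRenderer::Render does.
ID3D11RenderTargetView* targets[1] = { renderTargetView.get() };
context->OMSetRenderTargets(1, targets, depthStencilView.get());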

Device

You can imagine a device as a way to allocate and destroy objects, render primitives, and communicate with the graphics card through the graphics driver.

For a more precise explanation, a Direct3D device is the rendering component of Direct3D. A device encapsulates and stores the rendering state, performs transformations and lighting operations, and rasterizes an image to a surface. For more information, see Devices.

A device is represented by the ID3D11Device interface. In other words, the ID3D11Device interface represents a virtual display adapter, and is used to create resources that are owned by a device.

There are different versions of ID3D11Device. ID3D11Device5 is the latest version, and adds new methods to those in ID3D11Device4. For more information on how Direct3D communicates with the underlying hardware, see Windows Device Driver Model (WDDM) architecture.

Each application must have at least one device; most applications create only one. Create a device for one of the hardware drivers installed on your machine by calling D3D11CreateDevice or D3D11CreateDeviceAndSwapChain and specifying the driver type with the D3D_DRIVER_TYPE flag. Each device can use one or more device contexts, depending on the functionality desired. For more information, see the D3D11CreateDevice function.
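
A minimal sketch of creating a hardware device with D3D11CreateDevice follows. The feature-level array and creation flags here are illustrative; in the sample, DX::DeviceResources performs this step with its own settings.

D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0
};

winrt::com_ptr<ID3D11Device> device;
winrt::com_ptr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL achievedLevel{};

winrt::check_hresult(D3D11CreateDevice(
    nullptr,                          // use the default adapter
    D3D_DRIVER_TYPE_HARDWARE,         // create a device for the hardware driver
    nullptr,                          // no software rasterizer module
    D3D11_CREATE_DEVICE_BGRA_SUPPORT, // commonly required for Direct2D interop
    featureLevels,
    ARRAYSIZE(featureLevels),
    D3D11_SDK_VERSION,
    device.put(),
    &achievedLevel,                   // the highest feature level actually created
    context.put()));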

Device context

A device context is used to set pipeline state, and to generate rendering commands using the resources owned by a device.

Direct3D 11 implements two types of device contexts, one for immediate rendering and the other for deferred rendering; both contexts are represented by an ID3D11DeviceContext interface.

The ID3D11DeviceContext interface has different versions; ID3D11DeviceContext4 adds new methods to those in ID3D11DeviceContext3.

ID3D11DeviceContext4 was introduced in the Windows 10 Creators Update, and is the latest version of the ID3D11DeviceContext interface. Applications targeting Windows 10 Creators Update and later should use this interface instead of earlier versions. For more information, see ID3D11DeviceContext4.

DX::DeviceResources

The DX::DeviceResources class is in the DeviceResources.cpp/.h files, and controls all of the DirectX device resources.

Buffer

A buffer resource is a collection of fully typed data grouped into elements. You can use buffers to store a wide variety of data, including position vectors, normal vectors, texture coordinates in a vertex buffer, indexes in an index buffer, or device state. Buffer elements can include packed data values (such as R8G8B8A8 surface values), single 8-bit integers, or four 32-bit floating point values.

There are three types of buffers available: the vertex buffer, the index buffer, and the constant buffer.

Vertex buffer

Contains the vertex data used to define your geometry. Vertex data includes position coordinates, color data, texture coordinate data, normal data, and so on.

Index buffer

Contains integer offsets into vertex buffers, and is used to render primitives more efficiently. An index buffer contains a sequential set of 16-bit or 32-bit indices; each index is used to identify a vertex in a vertex buffer.

Constant buffer, or shader-constant buffer

Allows you to efficiently supply shader data to the pipeline. You can use constant buffers as inputs to the shaders that run for each primitive, and to store results of the stream-output stage of the rendering pipeline. Conceptually, a constant buffer looks just like a single-element vertex buffer.

Design and implementation of buffers

You can design buffers based on the data type; for example, in our sample game, one buffer is created for static data, another for data that's constant over the frame, and another for data that's specific to a primitive.

All buffer types are encapsulated by the ID3D11Buffer interface, and you can create a buffer resource by calling ID3D11Device::CreateBuffer. But a buffer must be bound to the pipeline before it can be accessed. Buffers can be bound to multiple pipeline stages simultaneously for reading. A buffer can also be bound to a single pipeline stage for writing; however, the same buffer cannot be bound for both reading and writing simultaneously.

You can bind buffers in these ways.

  • To the input-assembler stage, by calling ID3D11DeviceContext methods such as ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer.
  • To the stream-output stage, by calling ID3D11DeviceContext::SOSetTargets.
  • To the shader stages, by calling shader methods such as ID3D11DeviceContext::VSSetConstantBuffers.

For more information, see Introduction to buffers in Direct3D 11. A sketch of creating and binding a vertex buffer and an index buffer follows.
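
The following is a minimal sketch, under assumptions, of creating and binding a vertex buffer and an index buffer. The PNTVertex layout matches the struct shown in MeshObject::Render above; the triangle data and variable names are placeholders, and device and context are assumed to be an existing device and immediate context.

#include <d3d11.h>
#include <DirectXMath.h>
#include <winrt/base.h>

struct PNTVertex
{
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT3 normal;
    DirectX::XMFLOAT2 textureCoordinate;
};

// Placeholder geometry: a single triangle facing the camera.
PNTVertex vertices[3] =
{
    { {  0.0f,  0.5f, 0.0f }, { 0.0f, 0.0f, -1.0f }, { 0.5f, 0.0f } },
    { {  0.5f, -0.5f, 0.0f }, { 0.0f, 0.0f, -1.0f }, { 1.0f, 1.0f } },
    { { -0.5f, -0.5f, 0.0f }, { 0.0f, 0.0f, -1.0f }, { 0.0f, 1.0f } },
};
uint16_t indices[3] = { 0, 1, 2 };

CD3D11_BUFFER_DESC vertexBufferDesc(sizeof(vertices), D3D11_BIND_VERTEX_BUFFER);
D3D11_SUBRESOURCE_DATA vertexData{ vertices, 0, 0 };
winrt::com_ptr<ID3D11Buffer> vertexBuffer;
winrt::check_hresult(device->CreateBuffer(&vertexBufferDesc, &vertexData, vertexBuffer.put()));

CD3D11_BUFFER_DESC indexBufferDesc(sizeof(indices), D3D11_BIND_INDEX_BUFFER);
D3D11_SUBRESOURCE_DATA indexData{ indices, 0, 0 };
winrt::com_ptr<ID3D11Buffer> indexBuffer;
winrt::check_hresult(device->CreateBuffer(&indexBufferDesc, &indexData, indexBuffer.put()));

// Bind both to the input-assembler stage, as MeshObject::Render does.
UINT stride = sizeof(PNTVertex);
UINT offset = 0;
ID3D11Buffer* vb = vertexBuffer.get();
context->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
context->IASetIndexBuffer(indexBuffer.get(), DXGI_FORMAT_R16_UINT, 0);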

DXGI

Microsoft DirectX Graphics Infrastructure (DXGI) is a subsystem that encapsulates some of the low-level tasks that are needed by Direct3D. Special care must be taken when using DXGI in a multithreaded application to ensure that deadlocks don't occur. For more info, see Multithreading and DXGI.

Feature level

Feature level is a concept introduced in Direct3D 11 to handle the diversity of video cards in new and existing machines. A feature level is a well-defined set of graphics processing unit (GPU) functionality.

Each video card implements a certain level of DirectX functionality depending on the GPUs installed. In prior versions of Microsoft Direct3D, you could find out the version of Direct3D that the video card implemented, and then program your application accordingly.

With feature levels, when you create a device, you can attempt to create a device for the feature level that you want to request. If the device creation works, then that feature level exists; if not, the hardware does not support that feature level. You can either try to recreate a device at a lower feature level, or you can choose to exit the application. For instance, the 12_0 feature level requires Direct3D 11.3 or Direct3D 12, and shader model 5.1. For more information, see Direct3D feature levels: Overview for each feature level.

Using feature levels, you can develop an application for Direct3D 9, Microsoft Direct3D 10, or Direct3D 11, and then run it on 9, 10, or 11 hardware (with some exceptions). For more information, see Direct3D feature levels.

Stereo rendering

Stereo rendering is used to enhance the illusion of depth. It uses two images, one for the left eye and the other for the right eye, to display a scene on the display screen.

Mathematically, we apply a stereo projection matrix, which is a slight horizontal offset to the right and to the left of the regular mono projection matrix, to achieve this.

We did two rendering passes to achieve stereo rendering in this sample game.

  • Bind to the right render target, apply the right projection, then draw the primitive object.
  • Bind to the left render target, apply the left projection, then draw the primitive object.

Camera and coordinate space

The game has the code in place to update the world in its own coordinate system (sometimes called the world space or scene space). All objects, including the camera, are positioned and oriented in this space. For more information, see Coordinate systems.

A vertex shader does the heavy lifting of converting from the model coordinates to device coordinates with the following algorithm (where V is a vector and M is a matrix).

V(device) = V(model) x M(model-to-world) x M(world-to-view) x M(view-to-device)

  • M(model-to-world) is a transformation matrix from model coordinates to world coordinates, also known as the world transform matrix. This is provided by the primitive.
  • M(world-to-view) is a transformation matrix from world coordinates to view coordinates, also known as the view transform matrix.
    • This is provided by the view matrix of the camera. It's defined by the camera's position along with the look vectors (the look-at vector that points directly into the scene from the camera, and the look-up vector that is upwards, perpendicular to it).
    • In the sample game, m_viewMatrix is the view transformation matrix, and is calculated using Camera::SetViewParams.
  • M(view-to-device) is a transformation matrix from view coordinates to device coordinates, also known as the projection transform matrix.
    • This is provided by the projection of the camera. It provides info about how much of that space is actually visible in the final scene. The field of view (FoV), aspect ratio, and clipping planes define the projection transform matrix.
    • In the sample game, m_projectionMatrix defines the transformation to the projection coordinates, calculated using Camera::SetProjParams (for stereo projection, you use two projection matrices, one for each eye's view).

The shader code in VertexShader.hlsl is loaded with these vectors and matrices from the constant buffers, and performs this transformation for every vertex.
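
For illustration only, here is the same chain of transforms evaluated on the CPU with DirectXMath; this is not the sample's shader, and the matrix arguments are placeholders for the world, view, and projection matrices described above.

#include <DirectXMath.h>
using namespace DirectX;

XMVECTOR TransformModelToDevice(
    FXMVECTOR modelPosition,
    FXMMATRIX world,       // M(model-to-world), supplied per primitive
    CXMMATRIX view,        // M(world-to-view), from the camera
    CXMMATRIX projection)  // M(view-to-device), from the camera
{
    // Combine the three transforms, then apply the result to the model-space position.
    XMMATRIX modelToDevice = XMMatrixMultiply(XMMatrixMultiply(world, view), projection);
    return XMVector3Transform(modelPosition, modelToDevice);
}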

Coordinate transformation

Direct3D uses three transformations to change your 3D model coordinates into pixel coordinates (screen space). These transformations are the world transform, the view transform, and the projection transform. For more info, see Transform overview.

World transform matrix

A world transform changes coordinates from model space, where vertices are defined relative to a model's local origin, to world space, where vertices are defined relative to an origin common to all the objects in a scene. In essence, the world transform places a model into the world; hence its name. For more information, see World transform.

View transform matrix

The view transform locates the viewer in world space, transforming vertices into camera space. In camera space, the camera, or viewer, is at the origin, looking in the positive z-direction. For more info, go to View transform.

Projection transform matrix

The projection transform converts the viewing frustum to a cuboid shape. A viewing frustum is a 3D volume in a scene positioned relative to the viewport's camera. A viewport is a 2D rectangle into which a 3D scene is projected. For more information, see Viewports and clipping.

Because the near end of the viewing frustum is smaller than the far end, this has the effect of expanding objects that are near to the camera; this is how perspective is applied to the scene. So objects that are closer to the player appear larger, and objects that are further away appear smaller.

Mathematically, the projection transform is a matrix that is typically both a scale and a perspective projection. It functions like the lens of a camera. For more information, see Projection transform.

Sampler state

Sampler state determines how texture data is sampled, using texture addressing modes, filtering, and level of detail. Sampling is done each time a texture pixel (or texel) is read from a texture.

A texture contains an array of texels. The position of each texel is denoted by (u,v), where u is the width and v is the height, and is mapped between 0 and 1 based on the texture width and height. The resulting texture coordinates are used to address a texel when sampling a texture.

When texture coordinates are below 0 or above 1, the texture address mode defines how the texture coordinate addresses a texel location. For example, when using TextureAddressMode.Clamp, any coordinate outside the 0-1 range is clamped to a maximum value of 1, and a minimum value of 0, before sampling.

If the texture is too large or too small for the polygon, then the texture is filtered to fit the space. A magnification filter enlarges a texture; a minification filter reduces the texture to fit into a smaller area. Texture magnification repeats the sample texel for one or more addresses, which yields a blurrier image. Texture minification is more complicated, because it requires combining more than one texel value into a single value. This can cause aliasing or jagged edges, depending on the texture data. The most popular approach to minification is to use a mipmap. A mipmap is a multi-level texture. The size of each level is a power of 2 smaller than the previous level, down to a 1x1 texture. When minification is used, a game chooses the mipmap level closest to the size that is needed at render time.
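
Below is a minimal sketch, under assumptions, of creating a linear sampler state comparable to the m_samplerLinear object bound with PSSetSamplers in GameRenderer::Render. The exact filter and address modes used by the sample are assumptions; device and context are assumed to be an existing device and immediate context.

D3D11_SAMPLER_DESC samplerDesc{};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // linear minification, magnification, and mip filtering
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;    // tile texture coordinates outside the 0-1 range
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MinLOD = 0.0f;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;               // allow all mipmap levels

winrt::com_ptr<ID3D11SamplerState> samplerLinear;
winrt::check_hresult(device->CreateSamplerState(&samplerDesc, samplerLinear.put()));

// Bind it to the pixel shader stage, as the renderer does once per frame.
ID3D11SamplerState* sampler = samplerLinear.get();
context->PSSetSamplers(0, 1, &sampler);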

The BasicLoader class

BasicLoader is a simple loader class that provides support for loading shaders, textures, and meshes from files on disk. It provides both synchronous and asynchronous methods. In this sample game, the BasicLoader.h/.cpp files are found in the Utilities folder.

For more information, see Basic Loader.