Rendering framework I: Intro to rendering


This topic is part of the Create a simple Universal Windows Platform (UWP) game with DirectX tutorial series. The topic at that link sets the context for the series.

So far we've covered how to structure a Universal Windows Platform (UWP) game, and how to define a state machine to handle the flow of the game. Now it's time to learn how to develop the rendering framework. Let's look at how the sample game renders the game scene using Direct3D 11.

Direct3D 11 contains a set of APIs that provide access to the advanced features of high-performance graphics hardware, which can be used to create 3D graphics for graphics-intensive applications such as games.

Rendering game graphics on-screen basically means rendering a sequence of frames on-screen. In each frame, you have to render the objects that are visible in the scene, based on the view.

In order to render a frame, you have to pass the required scene information to the hardware so that it can be displayed on the screen. If you want to have anything displayed on screen, you need to start rendering as soon as the game starts running.


Setting up a basic rendering framework to display the graphics output for a UWP DirectX game can be loosely broken down into these three steps.

  1. Establish a connection to the graphics interface.
  2. Create the resources needed to draw the graphics.
  3. Display the graphics by rendering the frame.

This topic explains how graphics are rendered, covering steps 1 and 3.

Rendering framework II: Game rendering covers step 2: how to set up the rendering framework, and how data is prepared before rendering can happen.

Get started

It's a good idea to familiarize yourself with basic graphics and rendering concepts. If you're new to Direct3D and rendering, see Terms and concepts below for brief descriptions of the graphics and rendering terms used in this topic.

For this game, the GameRenderer class represents the renderer for this sample game. It's responsible for creating and maintaining all the Direct3D 11 and Direct2D objects used to generate the game visuals. It also maintains a reference to the Simple3DGame object, which is used to retrieve the list of objects to render, as well as the status of the game for the heads-up display (HUD).

In this part of the tutorial, we'll focus on rendering 3D objects in the game.

Establish a connection to the graphics interface

For info about accessing the hardware used for rendering, see the Define the game's UWP app framework topic.

The App::Initialize method

The std::make_shared function, as shown below, is used to create a shared_ptr to DX::DeviceResources, which also provides access to the device.

In Direct3D 11, a device is used to allocate and destroy objects, render primitives, and communicate with the graphics card through the graphics driver.

void Initialize(CoreApplicationView const& applicationView)
{
    ...

    // At this point we have access to the device. 
    // We can create the device-dependent resources.
    m_deviceResources = std::make_shared<DX::DeviceResources>();
}

Display the graphics by rendering the frame

The game scene needs to render when the game is launched. The instructions for rendering start in the GameMain::Run method, as shown below.

The simple flow is this.

  1. Update
  2. Render
  3. Present
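Stripped of the Direct3D and window-management details, that Update/Render/Present ordering can be sketched as plain, testable C++. Everything here is a hypothetical stand-in (the `FrameLoop` type and its counters are not the sample's classes); it only illustrates that each frame advances state before drawing, and draws before presenting.

```cpp
#include <cassert>

// Minimal sketch of the per-frame flow: Update, then Render, then Present.
// All names are illustrative placeholders, not the sample's types.
struct FrameLoop
{
    int updates = 0;
    int renders = 0;
    int presents = 0;

    void Update()  { ++updates; }                               // advance the game state
    void Render()  { assert(updates > renders); ++renders; }    // draw the scene for that state
    void Present() { assert(renders > presents); ++presents; }  // flip the swap chain

    // One iteration of the main loop while the window is visible.
    void Tick()
    {
        Update();
        Render();
        Present();
    }
};
```

The asserts inside Render and Present encode the ordering constraint: a frame is never presented before it has been rendered, and never rendered before the state it depicts has been updated.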

GameMain::Run method

void GameMain::Run()
{
    while (!m_windowClosed)
    {
        if (m_visible) // if the window is visible
        {
            switch (m_updateState)
            { /* ... Update, Render, and Present, depending on the game state. */ }
            m_renderNeeded = false;
        }
    }
    m_game->OnSuspending();  // Exiting due to window close, so save state.
}


See the Game flow management topic for more information about how game states are updated in the GameMain::Update method.


Rendering is implemented by calling the GameRenderer::Render method from GameMain::Run.

If stereo rendering is enabled, then there are two rendering passes: one for the left eye and one for the right. In each rendering pass, we bind the render target and the depth-stencil view to the device. We then clear the depth-stencil view.


Stereo rendering can also be achieved using other methods, such as single-pass stereo using vertex instancing or geometry shaders. The two-rendering-passes method is slower, but more convenient for achieving stereo rendering.
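The pass-selection logic itself is simple enough to model without any Direct3D calls. The sketch below (illustrative only; the sample binds real render targets where this just records a pass name) shows the one-pass-versus-two-passes decision and the pass-index convention used in the sample, where pass 0 is the mono/left-eye view and pass 1 is the right eye:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the two-pass stereo flow: one pass per eye when stereo is
// enabled, and a single mono pass otherwise.
std::vector<std::string> RenderPasses(bool stereoEnabled)
{
    int renderingPasses = stereoEnabled ? 2 : 1;

    std::vector<std::string> targetsBound;
    for (int i = 0; i < renderingPasses; i++)
    {
        // Pass 0 is the mono/left-eye view; pass 1 (stereo only) is the right eye.
        targetsBound.push_back(i == 0 ? "left" : "right");
    }
    return targetsBound;
}
```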

Once the game is running and resources are loaded, we update the projection matrix once per rendering pass. The projection is slightly different for each view. Next, we set up the graphics rendering pipeline.


See Create and load DirectX graphics resources for more information on how resources are loaded.

In this sample game, the renderer is designed to use a standard vertex layout across all objects. This simplifies the shader design, and allows for easy changes between shaders, independent of the objects' geometry.

GameRenderer::Render method

We set the Direct3D context to use an input vertex layout. Input-layout objects describe how vertex buffer data is streamed into the rendering pipeline.

Next, we set the Direct3D context to use the constant buffers defined earlier, which are used by the vertex shader pipeline stage and the pixel shader pipeline stage.


See Rendering framework II: Game rendering for more information about the definition of the constant buffers.

Because the same input layout and set of constant buffers is used for all shaders that are in the pipeline, they're set up once per frame.

void GameRenderer::Render()
{
    bool stereoEnabled{ m_deviceResources->GetStereoState() };

    auto d3dContext{ m_deviceResources->GetD3DDeviceContext() };
    auto d2dContext{ m_deviceResources->GetD2DDeviceContext() };

    int renderingPasses = 1;
    if (stereoEnabled)
    {
        renderingPasses = 2;
    }

    for (int i = 0; i < renderingPasses; i++)
    {
        // Iterate through the number of rendering passes to be completed.
        // 2 rendering passes if stereo is enabled.
        if (i > 0)
        {
            // Doing the Right Eye View.
            ID3D11RenderTargetView* const targets[1] = { m_deviceResources->GetBackBufferRenderTargetViewRight() };

            // Resets render targets to the screen.
            // OMSetRenderTargets binds 2 things to the device.
            // 1. Binds one render target atomically to the device.
            // 2. Binds the depth-stencil view, as returned by the GetDepthStencilView method, to the device.
            // For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-omsetrendertargets
            d3dContext->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

            // Clears the depth stencil view.
            // A depth stencil view contains the format and buffer to hold depth and stencil info.
            // For more info about depth stencil view, go to: 
            // https://docs.microsoft.com/windows/uwp/graphics-concepts/depth-stencil-view--dsv-
            // A depth buffer is used to store depth information to control which areas of 
            // polygons are rendered rather than hidden from view. To learn more about a depth buffer,
            // go to: https://docs.microsoft.com/windows/uwp/graphics-concepts/depth-buffers
            // A stencil buffer is used to mask pixels in an image, to produce special effects. 
            // The mask determines whether a pixel is drawn or not,
            // by setting the bit to a 1 or 0. To learn more about a stencil buffer,
            // go to: https://docs.microsoft.com/windows/uwp/graphics-concepts/stencil-buffers
            d3dContext->ClearDepthStencilView(m_deviceResources->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);

            // Direct2D -- discussed later.
        }
        else
        {
            // Doing the Mono or Left Eye View.
            // As compared to the right eye:
            // m_deviceResources->GetBackBufferRenderTargetView instead of GetBackBufferRenderTargetViewRight.
            ID3D11RenderTargetView* const targets[1] = { m_deviceResources->GetBackBufferRenderTargetView() };

            // Same as the Right Eye View.
            d3dContext->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());
            d3dContext->ClearDepthStencilView(m_deviceResources->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);

            // d2d -- Discussed later under Adding UI.
        }

        const float clearColor[4] = { 0.5f, 0.5f, 0.8f, 1.0f };

        // Only need to clear the background when not rendering the full 3D scene since
        // the 3D world is a fully enclosed box, and the dynamics prevent the camera from
        // moving outside this space.
        if (i > 0)
        {
            // Doing the Right Eye View.
            d3dContext->ClearRenderTargetView(m_deviceResources->GetBackBufferRenderTargetViewRight(), clearColor);
        }
        else
        {
            // Doing the Mono or Left Eye View.
            d3dContext->ClearRenderTargetView(m_deviceResources->GetBackBufferRenderTargetView(), clearColor);
        }

        // Render the scene objects.
        if (m_game != nullptr && m_gameResourcesLoaded && m_levelResourcesLoaded)
        {
            // This section is only used after the game state has been initialized and all device
            // resources needed for the game have been created and associated with the game objects.
            if (stereoEnabled)
            {
                // When doing stereo, it is necessary to update the projection matrix once per rendering pass.
                auto orientation = m_deviceResources->GetOrientationTransform3D();

                ConstantBufferChangeOnResize changesOnResize;
                // Apply either a left or right eye projection, which is an offset from the middle.
                XMStoreFloat4x4(
                    &changesOnResize.projection,
                    XMMatrixMultiply(
                        XMMatrixTranspose(
                            i == 0 ?
                            m_game->GameCamera().LeftEyeProjection() :
                            m_game->GameCamera().RightEyeProjection()),
                        XMMatrixTranspose(XMLoadFloat4x4(&orientation))));
                d3dContext->UpdateSubresource(m_constantBufferChangeOnResize.get(), 0, nullptr, &changesOnResize, 0, 0);
            }

            // Update variables that change once per frame.
            ConstantBufferChangesEveryFrame constantBufferChangesEveryFrameValue;
            XMStoreFloat4x4(
                &constantBufferChangesEveryFrameValue.view,
                XMMatrixTranspose(m_game->GameCamera().View()));
            d3dContext->UpdateSubresource(m_constantBufferChangesEveryFrame.get(), 0, nullptr, &constantBufferChangesEveryFrameValue, 0, 0);

            // Set up the graphics pipeline. This sample uses the same InputLayout and set of
            // constant buffers for all shaders, so they only need to be set once per frame.
            // For more info about the graphics or rendering pipeline, see
            // https://docs.microsoft.com/windows/win32/direct3d11/overviews-direct3d-11-graphics-pipeline

            // IASetInputLayout binds an input-layout object to the input-assembler (IA) stage. 
            // Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage.
            // Set up the Direct3D context to use this vertex layout. For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetinputlayout
            d3dContext->IASetInputLayout(m_vertexLayout.get());

            // VSSetConstantBuffers sets the constant buffers used by the vertex shader pipeline stage.
            // Set up the Direct3D context to use these constant buffers. For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-vssetconstantbuffers
            ID3D11Buffer* constantBufferNeverChanges{ m_constantBufferNeverChanges.get() };
            d3dContext->VSSetConstantBuffers(0, 1, &constantBufferNeverChanges);
            ID3D11Buffer* constantBufferChangeOnResize{ m_constantBufferChangeOnResize.get() };
            d3dContext->VSSetConstantBuffers(1, 1, &constantBufferChangeOnResize);
            ID3D11Buffer* constantBufferChangesEveryFrame{ m_constantBufferChangesEveryFrame.get() };
            d3dContext->VSSetConstantBuffers(2, 1, &constantBufferChangesEveryFrame);
            ID3D11Buffer* constantBufferChangesEveryPrim{ m_constantBufferChangesEveryPrim.get() };
            d3dContext->VSSetConstantBuffers(3, 1, &constantBufferChangesEveryPrim);

            // Sets the constant buffers used by the pixel shader pipeline stage. 
            // For more info, see
            // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-pssetconstantbuffers
            d3dContext->PSSetConstantBuffers(2, 1, &constantBufferChangesEveryFrame);
            d3dContext->PSSetConstantBuffers(3, 1, &constantBufferChangesEveryPrim);
            ID3D11SamplerState* samplerLinear{ m_samplerLinear.get() };
            d3dContext->PSSetSamplers(0, 1, &samplerLinear);

            for (auto&& object : m_game->RenderObjects())
            {
                // The 3D object render method handles the rendering.
                // For more info, see Primitive rendering below.
                object->Render(d3dContext, m_constantBufferChangesEveryPrim.get());
            }
        }

        // Start of 2D rendering.
        ...
    }
}

Primitive rendering

When rendering the scene, you loop through all the objects that need to be rendered. The steps below are repeated for each object (primitive).

  • Update the constant buffer (m_constantBufferChangesEveryPrim) with the model's world transformation matrix and material information.
  • The m_constantBufferChangesEveryPrim buffer contains parameters for each object. It includes the object-to-world transformation matrix, as well as material properties such as color and specular exponent for lighting calculations.
  • Set the Direct3D context to use the input vertex layout for the mesh object data to be streamed into the input-assembler (IA) stage of the rendering pipeline.
  • Set the Direct3D context to use an index buffer in the IA stage. Provide the primitive info: type, data order.
  • Submit a draw call to draw the indexed, non-instanced primitive. The GameObject::Render method updates the primitive constant buffer with the data specific to a given primitive. This results in a DrawIndexed call on the context to draw the geometry of each primitive. Specifically, this draw call queues commands and data to the graphics processing unit (GPU), as parameterized by the constant buffer data. Each draw call executes the vertex shader one time per vertex, and then the pixel shader one time for every pixel of each triangle in the primitive. The textures are part of the state that the pixel shader uses to do the rendering.

Here are the reasons for using multiple constant buffers.

  • The game uses multiple constant buffers, but it only needs to update these buffers one time per primitive. As mentioned earlier, constant buffers are like inputs to the shaders that run for each primitive. Some data is static (m_constantBufferNeverChanges); some data is constant over the frame (m_constantBufferChangesEveryFrame), such as the position of the camera; and some data is specific to the primitive, such as its color and textures (m_constantBufferChangesEveryPrim).
  • The game renderer separates these inputs into different constant buffers to optimize the memory bandwidth that the CPU and GPU use. This approach also helps to minimize the amount of data that the GPU needs to keep track of. The GPU has a big queue of commands, and each time the game calls Draw, that command is queued along with the data associated with it. When the game updates the primitive constant buffer and issues the next Draw command, the graphics driver adds this next command and the associated data to the queue. If the game draws 100 primitives, it could potentially have 100 copies of the constant buffer data in the queue. To minimize the amount of data the game is sending to the GPU, the game uses a separate primitive constant buffer that only contains the updates for each primitive.
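The payoff of splitting buffers by update frequency is easy to see by counting uploads. The sketch below (counters standing in for actual UpdateSubresource calls; the struct and function are illustrative, not sample code) shows that for a frame with many primitives, only the small per-primitive buffer is touched repeatedly:

```cpp
#include <cassert>

// Sketch of why constant buffers are split by update frequency.
// Each counter stands in for one GPU buffer upload (an UpdateSubresource call).
struct UpdateCounts
{
    int neverChanges = 0;
    int changesEveryFrame = 0;
    int changesEveryPrim = 0;
};

UpdateCounts SimulateFrames(int frames, int primitivesPerFrame)
{
    UpdateCounts counts;
    counts.neverChanges = 1; // static data: uploaded once, at load time
    for (int f = 0; f < frames; ++f)
    {
        ++counts.changesEveryFrame; // e.g. the camera's view matrix: once per frame
        for (int p = 0; p < primitivesPerFrame; ++p)
        {
            ++counts.changesEveryPrim; // world matrix and material: once per primitive
        }
    }
    return counts;
}
```

With one combined buffer, every per-primitive update would also re-send the frame-constant and static data, multiplying the bandwidth for no benefit.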

GameObject::Render method

void GameObject::Render(
    _In_ ID3D11DeviceContext* context,
    _In_ ID3D11Buffer* primitiveConstantBuffer
    )
{
    if (!m_active || (m_mesh == nullptr) || (m_normalMaterial == nullptr))
    {
        return;
    }

    ConstantBufferChangesEveryPrim constantBuffer;

    // Put the model matrix info into a constant buffer, in world matrix.
    XMStoreFloat4x4(&constantBuffer.worldMatrix, XMMatrixTranspose(ModelMatrix()));

    // Check to see which material to use on the object.
    // If a collision (a hit) is detected, GameObject::Render checks the current context, which 
    // indicates whether the target has been hit by an ammo sphere. If the target has been hit, 
    // this method applies a hit material, which reverses the colors of the rings of the target to 
    // indicate a successful hit to the player. Otherwise, it applies the default material 
    // with the same method. In both cases, it sets the material by calling Material::RenderSetup, 
    // which sets the appropriate constants into the constant buffer. Then, it calls 
    // ID3D11DeviceContext::PSSetShaderResources to set the corresponding texture resource for the 
    // pixel shader, and ID3D11DeviceContext::VSSetShader and ID3D11DeviceContext::PSSetShader 
    // to set the vertex shader and pixel shader objects themselves, respectively.
    if (m_hit && m_hitMaterial != nullptr)
    {
        m_hitMaterial->RenderSetup(context, &constantBuffer);
    }
    else
    {
        m_normalMaterial->RenderSetup(context, &constantBuffer);
    }

    // Update the primitive constant buffer with the object model's info.
    context->UpdateSubresource(primitiveConstantBuffer, 0, nullptr, &constantBuffer, 0, 0);

    // Render the mesh.
    // See MeshObject::Render method below.
    m_mesh->Render(context);
}

MeshObject::Render method

void MeshObject::Render(_In_ ID3D11DeviceContext* context)
{
    // PNTVertex is a struct. stride provides us the size required for all the mesh data
    // struct PNTVertex
    // {
    //     DirectX::XMFLOAT3 position;
    //     DirectX::XMFLOAT3 normal;
    //     DirectX::XMFLOAT2 textureCoordinate;
    // };
    uint32_t stride{ sizeof(PNTVertex) };
    uint32_t offset{ 0 };

    // Similar to the main render loop.
    // Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage.
    ID3D11Buffer* vertexBuffer{ m_vertexBuffer.get() };
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);

    // IASetIndexBuffer binds an index buffer to the input-assembler stage.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetindexbuffer.
    context->IASetIndexBuffer(m_indexBuffer.get(), DXGI_FORMAT_R16_UINT, 0);

    // Binds information about the primitive type, and data order that describes input data for the input assembler stage.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-iasetprimitivetopology.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // Draw indexed, non-instanced primitives. A draw API submits work to the rendering pipeline.
    // For more info, see
    // https://docs.microsoft.com/windows/win32/api/d3d11/nf-d3d11-id3d11devicecontext-drawindexed.
    context->DrawIndexed(m_indexCount, 0, 0);
}
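What the index buffer buys you can be shown with a CPU-side sketch: the 16-bit indices (the DXGI_FORMAT_R16_UINT data above) select vertices from the vertex buffer, so corners shared by several triangles are stored once. This is illustrative code only, not D3D API usage; the `Vertex` type and `ExpandIndexed` function are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// A minimal vertex; the sample's PNTVertex also carries a normal and UVs.
struct Vertex { float x, y, z; };

// Expand an indexed triangle list into the flat vertex stream that the
// input-assembler stage would feed to the vertex shader.
std::vector<Vertex> ExpandIndexed(
    const std::vector<Vertex>& vertexBuffer,
    const std::vector<uint16_t>& indexBuffer)
{
    std::vector<Vertex> stream;
    for (uint16_t i : indexBuffer)
    {
        stream.push_back(vertexBuffer[i]); // each index fetches one vertex
    }
    return stream;
}
```

A quad drawn as two triangles needs 6 vertex-shader inputs, but only 4 stored vertices; the savings grow quickly for dense meshes.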

DeviceResources::Present method

We call the DeviceResources::Present method to display the contents we've placed in the buffers.

We use the term swap chain for a collection of buffers that are used for displaying frames to the user. Each time an application presents a new frame for display, the first buffer in the swap chain takes the place of the displayed buffer. This process is called swapping or flipping. For more information, see Swap chains.
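The swap itself can be modeled as a toy double-buffered chain: you always draw into the back buffer, and Present flips it with the displayed (front) buffer. This is purely illustrative; in reality DXGI owns the buffers and performs the flip.

```cpp
#include <string>

// Toy model of a double-buffered swap chain. Strings stand in for the
// pixel contents of the two buffers.
struct SwapChain
{
    std::string buffers[2] = { "empty", "empty" };
    int front = 0; // index of the buffer currently displayed

    std::string& BackBuffer() { return buffers[1 - front]; }       // render here
    const std::string& Displayed() const { return buffers[front]; } // what the user sees

    void Present() { front = 1 - front; } // flip: back buffer becomes the front
};
```

Note that right after a flip, the new back buffer still holds the frame from two presents ago, which is why the sample can skip clearing the color buffer only because its scene covers every pixel.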

  • The IDXGISwapChain1 interface's Present method instructs DXGI to block until vertical synchronization (VSync) takes place, putting the application to sleep until the next VSync. This ensures that you don't waste any cycles rendering frames that will never be displayed to the screen.
  • The ID3D11DeviceContext3 interface's DiscardView method discards the contents of the render target. This is a valid operation only when the existing contents will be entirely overwritten. If dirty or scroll rects are used, then this call should be removed.
  • Using the same DiscardView method, discard the contents of the depth-stencil.
  • The HandleDeviceLost method is used to manage the scenario of the device being removed. If the device was removed, either by a disconnection or a driver upgrade, then you must recreate all device resources. For more information, see Handle device removed scenarios in Direct3D 11.


To achieve a smooth frame rate, you must ensure that the amount of work to render a frame fits in the time between VSyncs.

// Present the contents of the swap chain to the screen.
void DX::DeviceResources::Present()
{
    // The first argument instructs DXGI to block until VSync, putting the application
    // to sleep until the next VSync. This ensures we don't waste any cycles rendering
    // frames that will never be displayed to the screen.
    HRESULT hr = m_swapChain->Present(1, 0);

    // Discard the contents of the render target.
    // This is a valid operation only when the existing contents will be entirely
    // overwritten. If dirty or scroll rects are used, this call should be removed.
    m_d3dContext->DiscardView(m_d3dRenderTargetView.get());

    // Discard the contents of the depth stencil.
    m_d3dContext->DiscardView(m_d3dDepthStencilView.get());

    // If the device was removed either by a disconnection or a driver upgrade, we 
    // must recreate all device resources.
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        HandleDeviceLost();
    }
    else
    {
        winrt::check_hresult(hr);
    }
}

Next steps

This topic explained how graphics are rendered on the display, and it provided short descriptions of some of the rendering terms used (below). Learn more about rendering in the Rendering framework II: Game rendering topic, and learn how to prepare the data needed before rendering.

Terms and concepts

Simple game scene

A simple game scene is made up of a few objects with several light sources.

An object's shape is defined by a set of X, Y, Z coordinates in space. The actual render location in the game world can be determined by applying a transformation matrix to the positional X, Y, Z coordinates. The object may also have a set of texture coordinates (U and V) which specify how a material is applied to it. This defines the surface properties of the object, and gives you the ability to see whether an object has a rough surface (like a tennis ball), or a smooth glossy surface (like a bowling ball).
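Applying a transformation matrix to an X, Y, Z position is just a 4x4 matrix times an (x, y, z, 1) point. The sketch below uses a plain translation so the arithmetic is easy to check by hand; it's illustrative C++ rather than DirectXMath, and the `Float3`/`Transform` names are hypothetical.

```cpp
// Sketch of applying a world transformation to an object-space position:
// a row-major 4x4 matrix multiplied with the homogeneous point (x, y, z, 1).
struct Float3 { float x, y, z; };

Float3 Transform(const float m[4][4], Float3 p)
{
    return {
        m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z + m[0][3],
        m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.z + m[1][3],
        m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.z + m[2][3],
    };
}
```

Rotation and scale would fill the upper-left 3x3 block; the sample composes such matrices per object into the world matrix stored in m_constantBufferChangesEveryPrim.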

Scene and object info is used by the rendering framework to recreate the scene frame by frame, making it come alive on your display monitor.

Rendering pipeline

The rendering pipeline is the process by which 3D scene info is translated to an image displayed on screen. In Direct3D 11, this pipeline is programmable. You can adapt the stages to support your rendering needs. Stages that feature common shader cores are programmable by using the HLSL programming language. It's also known as the graphics rendering pipeline, or simply the pipeline.

To help you create this pipeline, you need to be familiar with these details.

For more information, see Understand the Direct3D 11 rendering pipeline and Graphics pipeline.


HLSL

HLSL is the high-level shader language for DirectX. Using HLSL, you can create C-like programmable shaders for the Direct3D pipeline. For more information, see HLSL.


Shaders

A shader can be thought of as a set of instructions that determine how the surface of an object appears when rendered. Shaders that are programmed using HLSL are known as HLSL shaders. Source code files for HLSL shaders have the .hlsl file extension. These shaders can be compiled at build-time or at runtime, and set at runtime into the appropriate pipeline stage. A compiled shader object has a .cso file extension.

Direct3D 9 shaders can be designed using shader model 1, shader model 2, and shader model 3; Direct3D 10 shaders can be designed only on shader model 4. Direct3D 11 shaders can be designed on shader model 5. Direct3D 11.3 and Direct3D 12 shaders can be designed on shader model 5.1, and Direct3D 12 shaders can also be designed on shader model 6.

Vertex shaders and pixel shaders

Data enters the graphics pipeline as a stream of primitives, and is processed by various shaders, such as the vertex shaders and pixel shaders.

Vertex shaders process vertices, typically performing operations such as transformations, skinning, and lighting. Pixel shaders enable rich shading techniques such as per-pixel lighting and post-processing. A pixel shader combines constant variables, texture data, interpolated per-vertex values, and other data to produce per-pixel outputs.
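A flavor of the math a pixel shader runs for per-pixel lighting is Lambert diffuse: the dot product of the surface normal and the direction to the light, clamped at zero. The sketch below is a C++ stand-in for what would be a few lines of HLSL; the `Vec3` and `LambertDiffuse` names are illustrative, and both vectors are assumed already normalized.

```cpp
#include <algorithm>

// C++ stand-in for a per-pixel Lambert diffuse term, as a pixel shader
// might compute it from an interpolated normal and a light direction.
struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Both inputs are assumed to be unit vectors.
float LambertDiffuse(Vec3 normal, Vec3 toLight)
{
    // Surfaces facing away from the light get zero, not negative light.
    return std::max(0.0f, Dot(normal, toLight));
}
```

The same dot product evaluated once per vertex (and interpolated) gives cheaper per-vertex lighting; running it per pixel is what "per-pixel lighting" refers to above.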

Shader stages

A sequence of these various shaders defined to process this stream of primitives is known as the shader stages in a rendering pipeline. The actual stages depend on the version of Direct3D, but usually include the vertex, geometry, and pixel stages. There are also other stages, such as the hull and domain shaders for tessellation, and the compute shader. All these stages are completely programmable using HLSL. For more information, see Graphics pipeline.

Various shader file formats

Here are the shader code file extensions.

  • A file with the .hlsl extension holds HLSL source code.
  • A file with the .cso extension holds a compiled shader object.
  • A file with the .h extension is a header file, but in a shader code context, this header file defines a byte array that holds shader data.
  • A file with the .hlsli extension contains the format of the constant buffers. In the sample game, the file is Shaders > ConstantBuffers.hlsli.


You embed a shader either by loading a .cso file at runtime, or by adding a .h file in your executable code. But you wouldn't use both for the same shader.
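The two options can be sketched side by side. Both the byte array and the loader function below are hypothetical illustrations (the array's bytes are made up, not a real compiled shader); either way, what you end up with is a blob of compiled shader bytes to hand to the device's shader-creation methods.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

// Option 1: embed the compiled shader as a byte array in a build-generated
// .h file. These bytes are a made-up placeholder, not real shader code.
static const uint8_t g_vertexShaderBytes[] = { 0x44, 0x58, 0x42, 0x43 };

// Option 2: read a .cso file, produced by the HLSL compiler, at runtime.
std::vector<uint8_t> LoadShaderFile(const char* path)
{
    std::ifstream file(path, std::ios::binary);
    return std::vector<uint8_t>(
        std::istreambuf_iterator<char>(file),
        std::istreambuf_iterator<char>());
}
```

Embedding avoids a file dependency at runtime; loading the .cso keeps the executable smaller and lets shaders be updated without recompiling the app.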

Deeper understanding of DirectX

Direct3D 11 is a set of APIs that can help us to create graphics for graphics-intensive applications such as games, where we want a good graphics card to process intensive computation. This section briefly explains the Direct3D 11 graphics programming concepts: resource, subresource, device, and device context.


Resource

You can think of resources (also known as device resources) as info about how to render an object, such as texture, position, or color. Resources provide data to the pipeline, and define what is rendered during your scene. Resources can be loaded from your game media, or created dynamically at run time.

A resource is, in fact, an area in memory that can be accessed by the Direct3D pipeline. In order for the pipeline to access memory efficiently, data that is provided to the pipeline (such as input geometry, shader resources, and textures) must be stored in a resource. There are two types of resources from which all Direct3D resources derive: a buffer or a texture. Up to 128 resources can be active for each pipeline stage. For more information, see Resources.


Subresource

The term subresource refers to a subset of a resource. Direct3D can reference an entire resource, or it can reference subsets of a resource. For more information, see Subresource.


Depth-stencil

A depth-stencil resource contains the format and buffer to hold depth and stencil information. It is created using a texture resource. For more information on how to create a depth-stencil resource, see Configuring Depth-Stencil Functionality. We access the depth-stencil resource through the depth-stencil view implemented using the ID3D11DepthStencilView interface.

Depth info tells us which areas of polygons are behind others, so that we can determine which are hidden. Stencil info tells us which pixels are masked. It can be used to produce special effects, since it determines whether a pixel is drawn or not by setting the bit to a 1 or 0.

For more information, see Depth-stencil view, depth buffer, and stencil buffer.

Render target

A render target is a resource that we can write to at the end of a render pass. It is commonly created using the ID3D11Device::CreateRenderTargetView method, with the swap chain back buffer (which is also a resource) as the input parameter.

Each render target should also have a corresponding depth-stencil view, because OMSetRenderTargets requires a depth-stencil view when it sets the render target before use. We access the render target resource through a render target view implemented using the ID3D11RenderTargetView interface.


Device

You can imagine a device as a way to allocate and destroy objects, render primitives, and communicate with the graphics card through the graphics driver.

For a more precise explanation, a Direct3D device is the rendering component of Direct3D. A device encapsulates and stores the rendering state, performs transformations and lighting operations, and rasterizes an image to a surface. For more information, see Devices.

A device is represented by the ID3D11Device interface. In other words, the ID3D11Device interface represents a virtual display adapter, and is used to create resources that are owned by a device.

There are different versions of ID3D11Device. ID3D11Device5 is the latest version, and adds new methods to those in ID3D11Device4. For more information on how Direct3D communicates with the underlying hardware, see Windows Device Driver Model (WDDM) architecture.

Each application must have at least one device; most applications create only one. Create a device for one of the hardware drivers installed on your machine by calling D3D11CreateDevice or D3D11CreateDeviceAndSwapChain, specifying the driver type with the D3D_DRIVER_TYPE flag. Each device can use one or more device contexts, depending on the functionality desired. For more information, see the D3D11CreateDevice function.

Device context

A device context is used to set pipeline state, and to generate rendering commands using the resources owned by a device.

Direct3D 11 implements two types of device contexts, one for immediate rendering and the other for deferred rendering; both contexts are represented by the ID3D11DeviceContext interface.

The ID3D11DeviceContext interface has different versions; ID3D11DeviceContext4 adds new methods to those in ID3D11DeviceContext3.

ID3D11DeviceContext4 was introduced in the Windows 10 Creators Update, and is the latest version of the ID3D11DeviceContext interface. Applications targeting Windows 10 Creators Update and later should use this interface instead of earlier versions. For more information, see ID3D11DeviceContext4.


The DX::DeviceResources class is in the DeviceResources.cpp/.h files, and controls all of the DirectX device resources.


Buffers

A buffer resource is a collection of fully typed data grouped into elements. You can use buffers to store a wide variety of data, including position vectors, normal vectors, texture coordinates in a vertex buffer, indexes in an index buffer, or device state. Buffer elements can include packed data values (such as R8G8B8A8 surface values), single 8-bit integers, or four 32-bit floating-point values.

There are three types of buffers available: the vertex buffer, index buffer, and constant buffer.

Vertex buffer

Contains the vertex data used to define your geometry. Vertex data includes position coordinates, color data, texture coordinate data, normal data, and so on.

Index buffer

Contains integer offsets into vertex buffers, and is used to render primitives more efficiently. An index buffer contains a sequential set of 16-bit or 32-bit indices; each index is used to identify a vertex in a vertex buffer.

Constant buffer, or shader-constant buffer

Allows you to efficiently supply shader data to the pipeline. You can use constant buffers as inputs to the shaders that run for each primitive, and to store results of the stream-output stage of the rendering pipeline. Conceptually, a constant buffer looks just like a single-element vertex buffer.

Design and implementation of buffers

You can design buffers based on the type of data they hold; for example, in our sample game, one buffer is created for static data, another for data that's constant over the frame, and another for data that's specific to a primitive.

All buffer types are encapsulated by the ID3D11Buffer interface, and you can create a buffer resource by calling ID3D11Device::CreateBuffer. But a buffer must be bound to the pipeline before it can be accessed. Buffers can be bound to multiple pipeline stages simultaneously for reading. A buffer can also be bound to a single pipeline stage for writing; however, the same buffer cannot be bound for both reading and writing simultaneously.

You can bind buffers in these ways.

  • To the input-assembler stage, by calling ID3D11DeviceContext methods such as ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer.
  • To the stream-output stage, by calling ID3D11DeviceContext::SOSetTargets.
  • To a shader stage, by calling shader methods such as ID3D11DeviceContext::VSSetConstantBuffers.

For more information, see Introduction to buffers in Direct3D 11.


DXGI

Microsoft DirectX Graphics Infrastructure (DXGI) is a subsystem that encapsulates some of the low-level tasks that are needed by Direct3D. Special care must be taken when using DXGI in a multithreaded application, to ensure that deadlocks don't occur. For more information, see Multithreading and DXGI.

Feature level

Feature level is a concept introduced in Direct3D 11 to handle the diversity of video cards in new and existing machines. A feature level is a well-defined set of graphics processing unit (GPU) functionality.

Each video card implements a certain level of DirectX functionality depending on the GPU installed. In prior versions of Microsoft Direct3D, you could find out the version of Direct3D the video card implemented, and then program your application accordingly.

With feature levels, when you create a device, you can attempt to create a device for the feature level that you want to request. If the device creation works, that feature level exists; if not, the hardware does not support that feature level. You can either try to recreate the device at a lower feature level, or you can choose to exit the application. For instance, the 12_0 feature level requires Direct3D 11.3 or Direct3D 12, and shader model 5.1. For more information, see Direct3D feature levels: Overview for each feature level.

Using feature levels, you can develop an application for Direct3D 9, Microsoft Direct3D 10, or Direct3D 11, and then run it on 9, 10, or 11 hardware (with some exceptions). For more information, see Direct3D feature levels.

Stereo rendering

Stereo rendering is used to enhance the illusion of depth. It uses two images, one for the left eye and the other for the right eye, to display a scene on the display screen.

Mathematically, we achieve this by applying a stereo projection matrix, which is the regular mono projection matrix with a slight horizontal offset to the right and to the left.

We do two rendering passes to achieve stereo rendering in this sample game.

  • Bind to the right render target, apply the right projection, then draw the primitive object.
  • Bind to the left render target, apply the left projection, then draw the primitive object.

Camera and coordinate space

The game has the code in place to update the world in its own coordinate system (sometimes called the world space or scene space). All objects, including the camera, are positioned and oriented in this space. For more information, see Coordinate systems.

A vertex shader does the heavy lifting of converting from model coordinates to device coordinates with the following algorithm (where V is a vector and M is a matrix).

V(device) = V(model) x M(model-to-world) x M(world-to-view) x M(view-to-device)

  • M(model-to-world) is a transformation matrix from model coordinates to world coordinates, also known as the world transform matrix. This is provided by the primitive.
  • M(world-to-view) is a transformation matrix from world coordinates to view coordinates, also known as the view transform matrix.
    • This is provided by the view matrix of the camera. It's defined by the camera's position along with the look vectors (the look-at vector, which points directly into the scene from the camera, and the look-up vector, which is upwards perpendicular to it).
    • In the sample game, m_viewMatrix is the view transformation matrix, and is calculated using Camera::SetViewParams.
  • M(view-to-device) is a transformation matrix from view coordinates to device coordinates, also known as the projection transform matrix.
    • This is provided by the projection of the camera. It provides information about how much of that space is actually visible in the final scene. The field of view (FoV), aspect ratio, and clipping planes define the projection transform matrix.
    • In the sample game, m_projectionMatrix defines the transformation to projection coordinates, calculated using Camera::SetProjParams (for stereo projection, you use two projection matrices, one for each eye's view).

The shader code in VertexShader.hlsl is loaded with these vectors and matrices from the constant buffers, and performs this transformation for every vertex.

Coordinate transformation

Direct3D uses three transformations to change your 3D model coordinates into pixel coordinates (screen space). These transformations are the world transform, view transform, and projection transform. For more information, see Transform overview.

World transform matrix

A world transform changes coordinates from model space, where vertices are defined relative to a model's local origin, to world space, where vertices are defined relative to an origin common to all the objects in a scene. In essence, the world transform places a model into the world; hence its name. For more information, see World transform.

View transform matrix

The view transform locates the viewer in world space, transforming vertices into camera space. In camera space, the camera, or viewer, is at the origin, looking in the positive z-direction. For more information, see View transform.

Projection transform matrix

The projection transform converts the viewing frustum to a cuboid shape. A viewing frustum is a 3D volume in a scene positioned relative to the viewport's camera. A viewport is a 2D rectangle into which a 3D scene is projected. For more information, see Viewports and clipping.

Because the near end of the viewing frustum is smaller than the far end, this has the effect of expanding objects that are near to the camera; this is how perspective is applied to the scene. Objects that are closer to the player appear larger, and objects that are further away appear smaller.

Mathematically, the projection transform is a matrix that is typically both a scale and a perspective projection. It functions like the lens of a camera. For more information, see Projection transform.

Sampler state

Sampler state determines how texture data is sampled, using texture addressing modes, filtering, and level of detail. Sampling is done each time a texture pixel (or texel) is read from a texture.

A texture contains an array of texels. The position of each texel is denoted by (u, v), where u is the width and v is the height, and is mapped between 0 and 1 based on the texture width and height. The resulting texture coordinates are used to address a texel when sampling a texture.

When texture coordinates are below 0 or above 1, the texture address mode defines how the texture coordinate addresses a texel location. For example, when using TextureAddressMode.Clamp, any coordinate outside the 0-1 range is clamped to a maximum value of 1 and a minimum value of 0 before sampling.

If the texture is too large or too small for the polygon, then the texture is filtered to fit the space. A magnification filter enlarges a texture; a minification filter reduces the texture to fit into a smaller area. Texture magnification repeats the sample texel for one or more addresses, which yields a blurrier image. Texture minification is more complicated, because it requires combining more than one texel value into a single value. This can cause aliasing or jagged edges, depending on the texture data. The most popular approach for minification is to use a mipmap. A mipmap is a multi-level texture. The size of each level is a power of 2 smaller than the previous level, down to a 1x1 texture. When minification is used, a game chooses the mipmap level closest to the size that is needed at render time.

The BasicLoader class

BasicLoader is a simple loader class that provides support for loading shaders, textures, and meshes from files on disk. It provides both synchronous and asynchronous methods. In this sample game, the BasicLoader.h/.cpp files are found in the Utilities folder.

For more information, see Basic Loader.