Screen capture to video

This article describes how to encode frames captured from the screen with the Windows.Graphics.Capture APIs to a video file. For information on capturing still images from the screen, see Screen capture. For a simple end-to-end sample app that utilizes the concepts and techniques shown in this article, see SimpleRecorder.

Overview of the video capture process

This article provides a walkthrough of an example app that records the contents of a window to a video file. While it may seem like there is a lot of code required to implement this scenario, the high-level structure of a screen recorder app is fairly simple. The screen capture process uses three primary UWP features:

  • The Windows.Graphics.Capture APIs, which capture frames from the screen.
  • The MediaStreamSource class, which produces a video stream from the captured frames.
  • The MediaTranscoder class, which encodes the video stream into a file.

The example code shown in this article can be categorized into a few different tasks:

  • Initialization - This includes configuring the UWP classes described above, initializing the graphics device interfaces, picking a window to capture, and setting up the encoding parameters such as resolution and frame rate.
  • Event handlers and threading - The primary driver of the main capture loop is the MediaStreamSource, which requests frames periodically through the SampleRequested event. This example uses events to coordinate the requests for new frames between the different components of the example. Synchronization is important to allow frames to be captured and encoded simultaneously.
  • Copying frames - Frames are copied from the capture frame buffer into a separate Direct3D surface that can be passed to the MediaStreamSource so that the resource isn't overwritten while being encoded. Direct3D APIs are used to perform this copy operation quickly.

About the Direct3D APIs

As stated above, the copying of each captured frame is probably the most complex part of the implementation shown in this article. At a low level, this operation is done using Direct3D. For this example, we are using the SharpDX library to perform the Direct3D operations from C#. This library is no longer officially supported, but it was chosen because its performance at low-level copy operations is well-suited for this scenario. We have tried to keep the Direct3D operations as discrete as possible to make it easier for you to substitute your own code or other libraries for these tasks.

Setting up your project

The example code in this walkthrough was created using the Blank App (Universal Windows) C# project template in Visual Studio 2019. In order to use the Windows.Graphics.Capture APIs in your app, you must include the Graphics Capture capability in the Package.appxmanifest file for your project. This example saves generated video files to the Videos Library on the device. To access this folder you must include the Videos Library capability.

To install the SharpDX Nuget package, in Visual Studio select Manage Nuget Packages. In the Browse tab, search for the "SharpDX.Direct3D11" package and click Install.
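
For reference, the capability entries in Package.appxmanifest might look like the following sketch. The namespace prefixes are assumptions based on typical manifests; they must match the namespaces declared on your project's Package element.

```xml
<!-- Sketch of the relevant capability entries (prefixes are illustrative) -->
<Capabilities>
  <!-- Required for the Windows.Graphics.Capture APIs -->
  <uap6:Capability Name="graphicsCapture" />
  <!-- Required to save files to the Videos Library -->
  <uap:Capability Name="videosLibrary" />
</Capabilities>
```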

Note that in order to reduce the size of the code listings in this article, the code in the walkthrough below omits explicit namespace references and the declaration of MainPage class member variables, which are named with a leading underscore, "_".
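
For context, the omitted member variables can be inferred from the listings below. The following is a reconstruction of those declarations, not the sample's exact source:

```csharp
// Capture and encoding state for MainPage (reconstructed from the listings below)
private IDirect3DDevice _device;
private SharpDX.Direct3D11.Device _sharpDxD3dDevice;
private GraphicsCaptureItem _captureItem;
private Direct3D11CaptureFramePool _framePool;
private GraphicsCaptureSession _session;
private Direct3D11CaptureFrame _currentFrame;
private SharpDX.Direct3D11.Texture2D _composeTexture;
private SharpDX.Direct3D11.RenderTargetView _composeRenderTargetView;
private SharpDX.Direct3D11.Multithread _multithread;
private MediaEncodingProfile _encodingProfile;
private VideoStreamDescriptor _videoDescriptor;
private MediaStreamSource _mediaStreamSource;
private MediaTranscoder _transcoder;
private ManualResetEvent _frameEvent;
private ManualResetEvent _closedEvent;
private ManualResetEvent[] _events;
private bool _isRecording;
private bool _closed;
```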

Setup for encoding

The SetupEncoding method described in this section initializes some of the main objects that will be used to capture and encode video frames and sets up the encoding parameters for the captured video. This method could be called programmatically or in response to a user interaction like a button click. The code listing for SetupEncoding is shown below, after the descriptions of the initialization steps.

  • Check for capture support. Before beginning the capture process, you need to call GraphicsCaptureSession.IsSupported to make sure that the screen capture feature is supported on the current device.

  • Initialize Direct3D interfaces. This sample uses Direct3D to copy the pixels captured from the screen into a texture that is encoded as a video frame. The helper methods used to initialize the Direct3D interfaces, CreateD3DDevice and CreateSharpDXDevice, are shown later in this article.

  • Initialize a GraphicsCaptureItem. A GraphicsCaptureItem represents an item on the screen that is going to be captured, either a window or the entire screen. Allow the user to pick an item to capture by creating a GraphicsCapturePicker and calling PickSingleItemAsync.

  • Create a composition texture. Create a texture resource and an associated render target view that will be used to copy each video frame. This texture can't be created until the GraphicsCaptureItem has been created and we know its dimensions. See the description of WaitForNewFrame to see how this composition texture is used. The helper method for creating this texture is also shown later in this article.

  • Create a MediaEncodingProfile and VideoStreamDescriptor. An instance of the MediaStreamSource class will take images captured from the screen and encode them into a video stream. Then, the video stream will be transcoded into a video file by the MediaTranscoder class. A VideoStreamDescriptor provides encoding parameters, such as resolution and frame rate, for the MediaStreamSource. The video file encoding parameters for the MediaTranscoder are specified with a MediaEncodingProfile. Note that the size used for video encoding doesn't have to be the same as the size of the window being captured, but to keep this example simple, the encoding settings are hard-coded to use the capture item's actual dimensions.

  • Create the MediaStreamSource and MediaTranscoder objects. As mentioned above, the MediaStreamSource object encodes individual frames into a video stream. Call the constructor for this class, passing in the VideoStreamDescriptor created in the previous step. Set the buffer time to zero and register handlers for the Starting and SampleRequested events, which will be shown later in this article. Next, construct a new instance of the MediaTranscoder class and enable hardware acceleration.

  • Create an output file. The final step in this method is to create a file to which the video will be transcoded. For this example, we will just create a uniquely named file in the Videos Library folder on the device. Note that in order to access this folder, your app must specify the "Videos Library" capability in the app manifest. Once the file has been created, open it for read and write, and pass the resulting stream into the EncodeAsync method, which will be shown next.

private async Task SetupEncoding()
{
    if (!GraphicsCaptureSession.IsSupported())
    {
        // Show message to user that screen capture is unsupported
        return;
    }

    // Create the D3D device and SharpDX device
    if (_device == null)
    {
        _device = Direct3D11Helpers.CreateD3DDevice();
    }
    if (_sharpDxD3dDevice == null)
    {
        _sharpDxD3dDevice = Direct3D11Helpers.CreateSharpDXDevice(_device);
    }

    try
    {
        // Let the user pick an item to capture
        var picker = new GraphicsCapturePicker();
        _captureItem = await picker.PickSingleItemAsync();
        if (_captureItem == null)
        {
            return;
        }

        // Initialize a blank texture and render target view for copying frames, using the same size as the capture item
        _composeTexture = Direct3D11Helpers.InitializeComposeTexture(_sharpDxD3dDevice, _captureItem.Size);
        _composeRenderTargetView = new SharpDX.Direct3D11.RenderTargetView(_sharpDxD3dDevice, _composeTexture);

        // This example encodes video using the item's actual size.
        var width = (uint)_captureItem.Size.Width; 
        var height = (uint)_captureItem.Size.Height;

        // Make sure the dimensions are even. Required by some encoders.
        width = (width % 2 == 0) ? width : width + 1;
        height = (height % 2 == 0) ? height : height + 1;


        var temp = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD1080p);
        var bitrate = temp.Video.Bitrate;
        uint framerate = 30;

        _encodingProfile = new MediaEncodingProfile();
        _encodingProfile.Container.Subtype = "MPEG4";
        _encodingProfile.Video.Subtype = "H264";
        _encodingProfile.Video.Width = width;
        _encodingProfile.Video.Height = height;
        _encodingProfile.Video.Bitrate = bitrate;
        _encodingProfile.Video.FrameRate.Numerator = framerate;
        _encodingProfile.Video.FrameRate.Denominator = 1;
        _encodingProfile.Video.PixelAspectRatio.Numerator = 1;
        _encodingProfile.Video.PixelAspectRatio.Denominator = 1;

        var videoProperties = VideoEncodingProperties.CreateUncompressed(MediaEncodingSubtypes.Bgra8, width, height);
        _videoDescriptor = new VideoStreamDescriptor(videoProperties);

        // Create our MediaStreamSource
        _mediaStreamSource = new MediaStreamSource(_videoDescriptor);
        _mediaStreamSource.BufferTime = TimeSpan.FromSeconds(0);
        _mediaStreamSource.Starting += OnMediaStreamSourceStarting;
        _mediaStreamSource.SampleRequested += OnMediaStreamSourceSampleRequested;

        // Create our transcoder
        _transcoder = new MediaTranscoder();
        _transcoder.HardwareAccelerationEnabled = true;


        // Create a destination file - Access to the VideosLibrary requires the "Videos Library" capability
        var folder = KnownFolders.VideosLibrary;
        var name = DateTime.Now.ToString("yyyyMMdd-HHmm-ss");
        var file = await folder.CreateFileAsync($"{name}.mp4");
        
        using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
        {
            await EncodeAsync(stream);
        }
    }
    catch (Exception ex)
    {
        // Notify the user that setup failed, for example by showing ex.Message
        return;
    }
}

Start encoding

Now that the main objects have been initialized, the EncodeAsync method is implemented to kick off the capture operation. This method first checks to make sure we aren't already recording, and if not, it calls the helper method StartCapture to begin capturing frames from the screen. This method is shown later in this article. Next, PrepareMediaStreamSourceTranscodeAsync is called to get the MediaTranscoder ready to transcode the video stream produced by the MediaStreamSource object to the output file stream, using the encoding profile we created in the previous section. Once the transcoder has been prepared, call TranscodeAsync to start transcoding. For more information on using the MediaTranscoder, see Transcode media files.


private async Task EncodeAsync(IRandomAccessStream stream)
{
    if (!_isRecording)
    {
        _isRecording = true;

        StartCapture();

        var transcode = await _transcoder.PrepareMediaStreamSourceTranscodeAsync(_mediaStreamSource, stream, _encodingProfile);

        await transcode.TranscodeAsync();
    }
}

Handle MediaStreamSource events

The MediaStreamSource object takes frames that we capture from the screen and transforms them into a video stream that can be saved to a file using the MediaTranscoder. We pass the frames to the MediaStreamSource via handlers for the object's events.

The SampleRequested event is raised when the MediaStreamSource is ready for a new video frame. After making sure we are currently recording, the helper method WaitForNewFrame is called to get a new frame captured from the screen. This method, shown later in this article, returns an object containing the captured frame as an IDirect3DSurface. For this example, we wrap the IDirect3DSurface interface in a helper class that also stores the system time at which the frame was captured. Both the frame and the system time are passed into the MediaStreamSample.CreateFromDirect3D11Surface factory method, and the resulting MediaStreamSample is set to the MediaStreamSourceSampleRequest.Sample property of the MediaStreamSourceSampleRequestedEventArgs. This is how the captured frame is provided to the MediaStreamSource.

private void OnMediaStreamSourceSampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
{
    if (_isRecording && !_closed)
    {
        try
        {
            using (var frame = WaitForNewFrame())
            {
                if (frame == null)
                {
                    args.Request.Sample = null;
                    Dispose();
                    return;
                }

                var timeStamp = frame.SystemRelativeTime;

                var sample = MediaStreamSample.CreateFromDirect3D11Surface(frame.Surface, timeStamp);
                args.Request.Sample = sample;
            }
        }
        catch (Exception e)
        {
            Debug.WriteLine(e.Message);
            Debug.WriteLine(e.StackTrace);
            Debug.WriteLine(e);
            args.Request.Sample = null;
            Stop();
            Cleanup();
        }
    }
    else
    {
        args.Request.Sample = null;
        Stop();
        Cleanup();
    }
}

In the handler for the Starting event, we call WaitForNewFrame, but only pass the system time at which the frame was captured to the MediaStreamSourceStartingRequest.SetActualStartPosition method, which the MediaStreamSource uses to properly encode the timing of the subsequent frames.

private void OnMediaStreamSourceStarting(MediaStreamSource sender, MediaStreamSourceStartingEventArgs args)
{
    using (var frame = WaitForNewFrame())
    {
        args.Request.SetActualStartPosition(frame.SystemRelativeTime);
    }
}

Start capturing

The StartCapture method shown in this step is called from the EncodeAsync helper method shown in a previous step. First, this method initializes a set of event objects that are used to control the flow of the capture operation.

  • _multithread is a helper class wrapping the SharpDX library's Multithread object that will be used to make sure that no other threads access the SharpDX texture while it's being copied.
  • _frameEvent is used to signal that a new frame has been captured and can be passed to the MediaStreamSource.
  • _closedEvent signals that recording has stopped and that we shouldn't wait for any new frames.

The frame event and closed event are added to an array so we can wait for either one of them in the capture loop.

The rest of the StartCapture method sets up the Windows.Graphics.Capture APIs that will do the actual screen capturing. First, an event handler is registered for the CaptureItem.Closed event. Next, a Direct3D11CaptureFramePool is created, which allows multiple captured frames to be buffered at a time. The CreateFreeThreaded method is used to create the frame pool so that the FrameArrived event is called on the pool's own worker thread rather than on the app's main thread. Next, a handler is registered for the FrameArrived event. Finally, a GraphicsCaptureSession is created for the selected CaptureItem and the capture of frames is initiated by calling StartCapture.

public void StartCapture()
{

    _multithread = _sharpDxD3dDevice.QueryInterface<SharpDX.Direct3D11.Multithread>();
    _multithread.SetMultithreadProtected(true);
    _frameEvent = new ManualResetEvent(false);
    _closedEvent = new ManualResetEvent(false);
    _events = new[] { _closedEvent, _frameEvent };

    _captureItem.Closed += OnClosed;
    _framePool = Direct3D11CaptureFramePool.CreateFreeThreaded(
        _device,
        DirectXPixelFormat.B8G8R8A8UIntNormalized,
        1,
        _captureItem.Size);
    _framePool.FrameArrived += OnFrameArrived;
    _session = _framePool.CreateCaptureSession(_captureItem);
    _session.StartCapture();
}

Handle graphics capture events

In the previous step we registered two handlers for graphics capture events and set up some events to help manage the flow of the capture loop.

The FrameArrived event is raised when the Direct3D11CaptureFramePool has a new captured frame available. In the handler for this event, call TryGetNextFrame on the sender to get the next captured frame. After the frame is retrieved, we set the _frameEvent so that our capture loop knows there is a new frame available.

private void OnFrameArrived(Direct3D11CaptureFramePool sender, object args)
{
    _currentFrame = sender.TryGetNextFrame();
    _frameEvent.Set();
}

In the Closed event handler, we signal the _closedEvent so that the capture loop will know when to stop.

private void OnClosed(GraphicsCaptureItem sender, object args)
{
    _closedEvent.Set();
}

Wait for new frames

The WaitForNewFrame helper method described in this section is where the heavy lifting of the capture loop occurs. Remember, this method is called from the OnMediaStreamSourceSampleRequested event handler whenever the MediaStreamSource is ready for a new frame to be added to the video stream. At a high level, this function simply copies each screen-captured video frame from one Direct3D surface to another so that it can be passed into the MediaStreamSource for encoding while a new frame is being captured. This example uses the SharpDX library to perform the actual copy operation.

Before waiting for a new frame, the method disposes of any previous frame stored in the class variable _currentFrame and resets the _frameEvent. Then the method waits for either the _frameEvent or the _closedEvent to be signaled. If the closed event is set, then the app calls a helper method to clean up the capture resources. This method is shown later in this article.

If the frame event is set, then we know that the FrameArrived event handler defined in the previous step has been called, and we begin the process of copying the captured frame data into a Direct3D 11 surface that will be passed to the MediaStreamSource.

This example uses a helper class, SurfaceWithInfo, which simply allows us to pass the video frame and the system time of the frame - both required by the MediaStreamSource - as a single object. The first step of the frame copy process is to instantiate this class and set the system time.

The next steps are the part of this example that relies specifically on the SharpDX library. The helper functions used here are defined at the end of this article. First we use the MultithreadLock to make sure no other threads access the video frame buffer while we are making the copy. Next, we call the helper method CreateSharpDXTexture2D to create a SharpDX Texture2D object from the video frame. This will be the source texture for the copy operation.

Next, we copy from the Texture2D object created in the previous step into the composition texture we created earlier in the process. This composition texture acts as a swap buffer so that the encoding process can operate on the pixels while the next frame is being captured. To perform the copy, we clear the render target view associated with the composition texture, then we define the region within the texture we want to copy - the entire texture in this case - and then we call CopySubresourceRegion to actually copy the pixels to the composition texture.

We create a copy of the texture description to use when we create our target texture, but the description is modified, setting the BindFlags to RenderTarget so that the new texture has write access. Setting the CpuAccessFlags to None allows the system to optimize the copy operation. The texture description is used to create a new texture resource, and the composition texture resource is copied into this new resource with a call to CopyResource. Finally, CreateDirect3DSurfaceFromSharpDXTexture is called to create the IDirect3DSurface object that is returned from this method.

public SurfaceWithInfo WaitForNewFrame()
{
    // Let's get a fresh one.
    _currentFrame?.Dispose();
    _frameEvent.Reset();

    var signaledEvent = _events[WaitHandle.WaitAny(_events)];
    if (signaledEvent == _closedEvent)
    {
        Cleanup();
        return null;
    }

    var result = new SurfaceWithInfo();
    result.SystemRelativeTime = _currentFrame.SystemRelativeTime;
    using (var multithreadLock = new MultithreadLock(_multithread))
    using (var sourceTexture = Direct3D11Helpers.CreateSharpDXTexture2D(_currentFrame.Surface))
    {

        _sharpDxD3dDevice.ImmediateContext.ClearRenderTargetView(_composeRenderTargetView, new SharpDX.Mathematics.Interop.RawColor4(0, 0, 0, 1));

        var width = Math.Clamp(_currentFrame.ContentSize.Width, 0, _currentFrame.Surface.Description.Width);
        var height = Math.Clamp(_currentFrame.ContentSize.Height, 0, _currentFrame.Surface.Description.Height);
        var region = new SharpDX.Direct3D11.ResourceRegion(0, 0, 0, width, height, 1);
        _sharpDxD3dDevice.ImmediateContext.CopySubresourceRegion(sourceTexture, 0, region, _composeTexture, 0);

        var description = sourceTexture.Description;
        description.Usage = SharpDX.Direct3D11.ResourceUsage.Default;
        description.BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource | SharpDX.Direct3D11.BindFlags.RenderTarget;
        description.CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None;
        description.OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None;

        using (var copyTexture = new SharpDX.Direct3D11.Texture2D(_sharpDxD3dDevice, description))
        {
            _sharpDxD3dDevice.ImmediateContext.CopyResource(_composeTexture, copyTexture);
            result.Surface = Direct3D11Helpers.CreateDirect3DSurfaceFromSharpDXTexture(copyTexture);
        }
    }

    return result;
}

Stop capture and clean up resources

The Stop method provides a way to stop the capture operation. Your app may call this programmatically or in response to a user interaction, like a button click. This method simply sets the _closedEvent. The WaitForNewFrame method defined in the previous steps looks for this event and, if it is set, shuts down the capture operation.

private void Stop()
{
    _closedEvent.Set();
}
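
To tie the pieces together, recording could be started and stopped from UI event handlers. The button handler names below are hypothetical and not part of the sample:

```csharp
// Hypothetical XAML button handlers wiring up the methods shown above
private async void RecordButton_Click(object sender, RoutedEventArgs e)
{
    // SetupEncoding lets the user pick a capture target and then calls EncodeAsync
    await SetupEncoding();
}

private void StopButton_Click(object sender, RoutedEventArgs e)
{
    // Signals _closedEvent; the capture loop shuts down on the next frame request
    Stop();
}
```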

The Cleanup method is used to properly dispose of the resources that were created during the capture operation. This includes:

  • The Direct3D11CaptureFramePool object used by the capture session
  • The GraphicsCaptureSession and GraphicsCaptureItem
  • The Direct3D and SharpDX devices
  • The SharpDX texture and render target view used in the copy operation
  • The Direct3D11CaptureFrame used for storing the current frame

private void Cleanup()
{
    _framePool?.Dispose();
    _session?.Dispose();
    if (_captureItem != null)
    {
        _captureItem.Closed -= OnClosed;
    }
    _captureItem = null;
    _device = null;
    _sharpDxD3dDevice = null;
    _composeTexture?.Dispose();
    _composeTexture = null;
    _composeRenderTargetView?.Dispose();
    _composeRenderTargetView = null;
    _currentFrame?.Dispose();
}

Helper wrapper classes

The following helper classes were defined to help with the example code in this article.

The MultithreadLock helper class wraps the SharpDX Multithread class to make sure that other threads don't access the texture resources while they are being copied.

class MultithreadLock : IDisposable
{
    public MultithreadLock(SharpDX.Direct3D11.Multithread multithread)
    {
        _multithread = multithread;
        _multithread?.Enter();
    }

    public void Dispose()
    {
        _multithread?.Leave();
        _multithread = null;
    }

    private SharpDX.Direct3D11.Multithread _multithread;
}

SurfaceWithInfo is used to associate an IDirect3DSurface representing a captured frame with a SystemRelativeTime representing the time at which the frame was captured.

public sealed class SurfaceWithInfo : IDisposable
{
    public IDirect3DSurface Surface { get; internal set; }
    public TimeSpan SystemRelativeTime { get; internal set; }

    public void Dispose()
    {
        Surface?.Dispose();
        Surface = null;
    }
}

Direct3D and SharpDX helper APIs

The following helper APIs are defined to abstract out the creation of Direct3D and SharpDX resources. A detailed explanation of these technologies is outside the scope of this article, but the code is provided here to allow you to implement the example code shown in the walkthrough.

[ComImport]
[Guid("A9B3D012-3DF2-4EE3-B8D1-8695F457D3C1")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[ComVisible(true)]
interface IDirect3DDxgiInterfaceAccess
{
    IntPtr GetInterface([In] ref Guid iid);
};

public static class Direct3D11Helpers
{
    internal static Guid IInspectable = new Guid("AF86E2E0-B12D-4c6a-9C5A-D7AA65101E90");
    internal static Guid ID3D11Resource = new Guid("dc8e63f3-d12b-4952-b47b-5e45026a862d");
    internal static Guid IDXGIAdapter3 = new Guid("645967A4-1392-4310-A798-8053CE3E93FD");
    internal static Guid ID3D11Device = new Guid("db6f6ddb-ac77-4e88-8253-819df9bbf140");
    internal static Guid ID3D11Texture2D = new Guid("6f15aaf2-d208-4e89-9ab4-489535d34f9c");

    [DllImport(
        "d3d11.dll",
        EntryPoint = "CreateDirect3D11DeviceFromDXGIDevice",
        SetLastError = true,
        CharSet = CharSet.Unicode,
        ExactSpelling = true,
        CallingConvention = CallingConvention.StdCall
        )]
    internal static extern UInt32 CreateDirect3D11DeviceFromDXGIDevice(IntPtr dxgiDevice, out IntPtr graphicsDevice);

    [DllImport(
        "d3d11.dll",
        EntryPoint = "CreateDirect3D11SurfaceFromDXGISurface",
        SetLastError = true,
        CharSet = CharSet.Unicode,
        ExactSpelling = true,
        CallingConvention = CallingConvention.StdCall
        )]
    internal static extern UInt32 CreateDirect3D11SurfaceFromDXGISurface(IntPtr dxgiSurface, out IntPtr graphicsSurface);

    public static IDirect3DDevice CreateD3DDevice()
    {
        return CreateD3DDevice(false);
    }

    public static IDirect3DDevice CreateD3DDevice(bool useWARP)
    {
        var d3dDevice = new SharpDX.Direct3D11.Device(
            useWARP ? SharpDX.Direct3D.DriverType.Software : SharpDX.Direct3D.DriverType.Hardware,
            SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport);
        IDirect3DDevice device = null;

        // Acquire the DXGI interface for the Direct3D device.
        using (var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device3>())
        {
            // Wrap the native device using a WinRT interop object.
            uint hr = CreateDirect3D11DeviceFromDXGIDevice(dxgiDevice.NativePointer, out IntPtr pUnknown);

            if (hr == 0)
            {
                device = Marshal.GetObjectForIUnknown(pUnknown) as IDirect3DDevice;
                Marshal.Release(pUnknown);
            }
        }

        return device;
    }


    internal static IDirect3DSurface CreateDirect3DSurfaceFromSharpDXTexture(SharpDX.Direct3D11.Texture2D texture)
    {
        IDirect3DSurface surface = null;

        // Acquire the DXGI interface for the Direct3D surface.
        using (var dxgiSurface = texture.QueryInterface<SharpDX.DXGI.Surface>())
        {
            // Wrap the native device using a WinRT interop object.
            uint hr = CreateDirect3D11SurfaceFromDXGISurface(dxgiSurface.NativePointer, out IntPtr pUnknown);

            if (hr == 0)
            {
                surface = Marshal.GetObjectForIUnknown(pUnknown) as IDirect3DSurface;
                Marshal.Release(pUnknown);
            }
        }

        return surface;
    }



    internal static SharpDX.Direct3D11.Device CreateSharpDXDevice(IDirect3DDevice device)
    {
        var access = (IDirect3DDxgiInterfaceAccess)device;
        var d3dPointer = access.GetInterface(ID3D11Device);
        var d3dDevice = new SharpDX.Direct3D11.Device(d3dPointer);
        return d3dDevice;
    }

    internal static SharpDX.Direct3D11.Texture2D CreateSharpDXTexture2D(IDirect3DSurface surface)
    {
        var access = (IDirect3DDxgiInterfaceAccess)surface;
        var d3dPointer = access.GetInterface(ID3D11Texture2D);
        var d3dSurface = new SharpDX.Direct3D11.Texture2D(d3dPointer);
        return d3dSurface;
    }


    public static SharpDX.Direct3D11.Texture2D InitializeComposeTexture(
        SharpDX.Direct3D11.Device sharpDxD3dDevice,
        SizeInt32 size)
    {
        var description = new SharpDX.Direct3D11.Texture2DDescription
        {
            Width = size.Width,
            Height = size.Height,
            MipLevels = 1,
            ArraySize = 1,
            Format = SharpDX.DXGI.Format.B8G8R8A8_UNorm,
            SampleDescription = new SharpDX.DXGI.SampleDescription()
            {
                Count = 1,
                Quality = 0
            },
            Usage = SharpDX.Direct3D11.ResourceUsage.Default,
            BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource | SharpDX.Direct3D11.BindFlags.RenderTarget,
            CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
            OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None
        };
        var composeTexture = new SharpDX.Direct3D11.Texture2D(sharpDxD3dDevice, description);
       

        using (var renderTargetView = new SharpDX.Direct3D11.RenderTargetView(sharpDxD3dDevice, composeTexture))
        {
            sharpDxD3dDevice.ImmediateContext.ClearRenderTargetView(renderTargetView, new SharpDX.Mathematics.Interop.RawColor4(0, 0, 0, 1));
        }

        return composeTexture;
    }
}

See also