Custom video effects

This article describes how to create a Windows Runtime component that implements the IBasicVideoEffect interface to create custom effects for video streams. Custom effects can be used with several different Windows Runtime APIs, including MediaCapture, which provides access to a device's camera, and MediaComposition, which allows you to create complex compositions out of media clips.

Add a custom effect to your app

A custom video effect is defined in a class that implements the IBasicVideoEffect interface. This class can't be included directly in your app's project. Instead, you must use a Windows Runtime component to host your video effect class.

Add a Windows Runtime component for your video effect

  1. In Microsoft Visual Studio, with your solution open, go to the File menu and select Add > New Project.
  2. Select the Windows Runtime Component (Universal Windows) project type.
  3. For this example, name the project VideoEffectComponent. This name will be referenced in code later.
  4. Click OK.
  5. The project template creates a class called Class1.cs. In Solution Explorer, right-click the icon for Class1.cs and select Rename.
  6. Rename the file to ExampleVideoEffect.cs. Visual Studio will show a prompt asking if you want to update all references to the new name. Click Yes.
  7. Open ExampleVideoEffect.cs and update the class definition to implement the IBasicVideoEffect interface.
public sealed class ExampleVideoEffect : IBasicVideoEffect

You need to include the following namespaces in your effect class file in order to access all of the types used in the examples in this article.

using Windows.Media.Effects;
using Windows.Media.MediaProperties;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Graphics.Imaging;

Implement the IBasicVideoEffect interface using software processing

Your video effect must implement all of the methods and properties of the IBasicVideoEffect interface. This section walks you through a simple implementation of this interface that uses software processing.

Close method

The system calls the Close method on your class when the effect should shut down. You should use this method to dispose of any resources you have created. The argument to the method is a MediaEffectClosedReason that lets you know whether the effect was closed normally, whether an error occurred, or whether the effect does not support the required encoding format.

public void Close(MediaEffectClosedReason reason)
{
    // Dispose of effect resources
}

DiscardQueuedFrames method

The DiscardQueuedFrames method is called when your effect should reset. A typical scenario is an effect that stores previously processed frames to use in processing the current frame. When this method is called, you should dispose of the set of previous frames you saved. This method can be used to reset any state related to previous frames, not only accumulated video frames.

private int frameCount;
public void DiscardQueuedFrames()
{
    frameCount = 0;
}

IsReadOnly property

The IsReadOnly property lets the system know whether your effect will write to the output of the effect. If your app does not modify the video frames (for example, an effect that only performs analysis of the video frames), you should set this property to true, which will cause the system to efficiently copy the frame input to the frame output for you.

Tip

IsReadOnly 屬性設定為 true 時,系統會在呼叫 ProcessFrame 之前將輸入畫面複製到輸出畫面。When the IsReadOnly property is set to true, the system copies the input frame to the output frame before ProcessFrame is called. IsReadOnly 屬性設定為 true 並不會限制您在 ProcessFrame 中寫入至效果的輸出畫面。Setting the IsReadOnly property to true does not restrict you from writing to the effect's output frames in ProcessFrame.

public bool IsReadOnly { get { return false; } }

SetEncodingProperties method

The system calls SetEncodingProperties on your effect to let you know the encoding properties for the video stream upon which the effect is operating. This method also provides a reference to the Direct3D device used for hardware rendering. The usage of this device is shown in the hardware processing example later in this article.

private VideoEncodingProperties encodingProperties;
public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    this.encodingProperties = encodingProperties;
}

SupportedEncodingProperties property

The system checks the SupportedEncodingProperties property to determine which encoding properties are supported by your effect. Note that if the consumer of your effect can't encode video using the properties you specify, it will call Close on your effect and will remove your effect from the video pipeline.

public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{            
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };

        // If the list is empty, the encoding type will be ARGB32.
        // return new List<VideoEncodingProperties>();
    }
}

Note

If you return an empty list of VideoEncodingProperties objects from SupportedEncodingProperties, the system will default to ARGB32 encoding.


SupportedMemoryTypes property

The system checks the SupportedMemoryTypes property to determine whether your effect will access video frames in software memory or in hardware (GPU) memory. If you return MediaMemoryTypes.Cpu, your effect will be passed input and output frames that contain image data in SoftwareBitmap objects. If you return MediaMemoryTypes.Gpu, your effect will be passed input and output frames that contain image data in IDirect3DSurface objects.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Cpu; } }

Note

If you specify MediaMemoryTypes.GpuAndCpu, the system will use either GPU or system memory, whichever is more efficient for the pipeline. When using this value, you must check in the ProcessFrame method to see whether the SoftwareBitmap or IDirect3DSurface passed into the method contains data, and then process the frame accordingly.
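For an effect that declares MediaMemoryTypes.GpuAndCpu, the branching described above might be sketched as follows. This is only an illustrative outline, not part of the original sample; the ProcessSoftwareBitmapFrame and ProcessDirect3DSurfaceFrame helpers are hypothetical placeholders for your own CPU and GPU processing code.

```csharp
public void ProcessFrame(ProcessVideoFrameContext context)
{
    // With MediaMemoryTypes.GpuAndCpu, only one of these properties is
    // populated for a given frame, so check which one contains data.
    if (context.InputFrame.SoftwareBitmap != null)
    {
        // Frame data is in CPU memory; process it as a SoftwareBitmap.
        ProcessSoftwareBitmapFrame(context.InputFrame.SoftwareBitmap,
                                   context.OutputFrame.SoftwareBitmap);
    }
    else if (context.InputFrame.Direct3DSurface != null)
    {
        // Frame data is in GPU memory; process it as an IDirect3DSurface.
        ProcessDirect3DSurfaceFrame(context.InputFrame.Direct3DSurface,
                                    context.OutputFrame.Direct3DSurface);
    }
}
```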


TimeIndependent property

The TimeIndependent property lets the system know whether your effect requires uniform timing. When set to true, the system can use optimizations that enhance effect performance.

public bool TimeIndependent { get { return true; } }

SetProperties method

The SetProperties method allows the app that is using your effect to adjust effect parameters. Properties are passed as an IPropertySet map of property names and values.

private IPropertySet configuration;
public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}

This simple example will dim the pixels in each video frame according to a specified value. A property is declared and TryGetValue is used to get the value set by the calling app. If no value was set, a default value of .5 is used.

public double FadeValue
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("FadeValue", out val))
        {
            return (double)val;
        }
        return .5;
    }
}

ProcessFrame method

The ProcessFrame method is where your effect modifies the image data of the video. The method is called once per frame and is passed a ProcessVideoFrameContext object. This object contains an input VideoFrame object that contains the incoming frame to be processed and an output VideoFrame object to which you write image data that will be passed on to the rest of the video pipeline. Each of these VideoFrame objects has a SoftwareBitmap property and a Direct3DSurface property, but which of these can be used is determined by the value you returned from the SupportedMemoryTypes property.

This example shows a simple implementation of the ProcessFrame method using software processing. For more information about working with SoftwareBitmap objects, see Imaging. An example ProcessFrame implementation using hardware processing is shown later in this article.

Accessing the data buffer of a SoftwareBitmap requires COM interop, so you should include the System.Runtime.InteropServices namespace in your effect class file.

using System.Runtime.InteropServices;

Add the following code inside the namespace for your effect to import the interface for accessing the image buffer.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

Note

Because this technique accesses a native, unmanaged image buffer, you will need to configure your project to allow unsafe code.

  1. In Solution Explorer, right-click the VideoEffectComponent project and select Properties.
  2. Select the Build tab.
  3. Select the Allow unsafe code check box.


Now you can add the ProcessFrame method implementation. First, this method obtains a BitmapBuffer object from both the input and output software bitmaps. Note that the output frame is opened for writing and the input for reading. Next, an IMemoryBufferReference is obtained for each buffer by calling CreateReference. Then, the actual data buffer is obtained by casting the IMemoryBufferReference objects to the COM interop interface defined above, IMemoryBufferByteAccess, and then calling GetBuffer.

Now that the data buffers have been obtained, you can read from the input buffer and write to the output buffer. The layout of the buffer is obtained by calling GetPlaneDescription, which provides information on the width, stride, and initial offset of the buffer. The bits per pixel is determined by the encoding properties set previously with the SetEncodingProperties method. The buffer format information is used to find the index into the buffer for each pixel. The pixel value from the source buffer is copied into the target buffer, with the color values multiplied by the FadeValue property defined for this effect to dim them by the specified amount.

public unsafe void ProcessFrame(ProcessVideoFrameContext context)
{
    using (BitmapBuffer buffer = context.InputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
    using (BitmapBuffer targetBuffer = context.OutputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Write))
    {
        using (var reference = buffer.CreateReference())
        using (var targetReference = targetBuffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacity);

            byte* targetDataInBytes;
            uint targetCapacity;
            ((IMemoryBufferByteAccess)targetReference).GetBuffer(out targetDataInBytes, out targetCapacity);

            var fadeValue = FadeValue;

            // Fill-in the BGRA plane
            BitmapPlaneDescription bufferLayout = buffer.GetPlaneDescription(0);
            for (int i = 0; i < bufferLayout.Height; i++)
            {
                for (int j = 0; j < bufferLayout.Width; j++)
                {


                    int bytesPerPixel = 4; 
                    if (encodingProperties.Subtype != "ARGB32")
                    {
                        // If you support other encodings, adjust index into the buffer accordingly
                    }
                    

                    int idx = bufferLayout.StartIndex + bufferLayout.Stride * i + bytesPerPixel * j;

                    targetDataInBytes[idx + 0] = (byte)(fadeValue * (float)dataInBytes[idx + 0]);
                    targetDataInBytes[idx + 1] = (byte)(fadeValue * (float)dataInBytes[idx + 1]);
                    targetDataInBytes[idx + 2] = (byte)(fadeValue * (float)dataInBytes[idx + 2]);
                    targetDataInBytes[idx + 3] = dataInBytes[idx + 3];
                }
            }
        }
    }
}

Implement the IBasicVideoEffect interface using hardware processing

Creating a custom video effect by using hardware (GPU) processing is almost identical to using software processing as described above. This section will show the few differences in an effect that uses hardware processing. This example uses the Win2D Windows Runtime API. For more information about using Win2D, see the Win2D documentation.

Use the following steps to add the Win2D NuGet package to the project you created as described in the Add a custom effect to your app section at the beginning of this article.

To add the Win2D NuGet package to your effect project

  1. In Solution Explorer, right-click the VideoEffectComponent project and select Manage NuGet Packages.
  2. At the top of the window, select the Browse tab.
  3. In the search box, enter Win2D.
  4. Select Win2D.uwp, and then select Install in the right pane.
  5. The Review Changes dialog shows you the package to be installed. Click OK.
  6. Accept the package license.

In addition to the namespaces included in the basic project setup, you will need to include the following namespaces provided by Win2D.

using Microsoft.Graphics.Canvas.Effects;
using Microsoft.Graphics.Canvas;

Because this effect will use GPU memory for operating on the image data, you should return MediaMemoryTypes.Gpu from the SupportedMemoryTypes property.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Gpu; } }

Set the encoding properties that your effect will support with the SupportedEncodingProperties property. When working with Win2D, you must use ARGB32 encoding.

public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties {
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };
    }
}

Use the SetEncodingProperties method to create a new Win2D CanvasDevice object from the IDirect3DDevice passed into the method.

private CanvasDevice canvasDevice;
public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    canvasDevice = CanvasDevice.CreateFromDirect3D11Device(device);
}

The SetProperties implementation is identical to the previous software processing example. This example uses a BlurAmount property to configure a Win2D blur effect.

private IPropertySet configuration;
public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}
public double BlurAmount
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("BlurAmount", out val))
        {
            return (double)val;
        }
        return 3;
    }
}

The last step is to implement the ProcessFrame method that actually processes the image data.

Using Win2D APIs, a CanvasBitmap is created from the input frame's Direct3DSurface property. A CanvasRenderTarget is created from the output frame's Direct3DSurface, and a CanvasDrawingSession is created from this render target. A new Win2D GaussianBlurEffect is initialized, using the BlurAmount property our effect exposes via SetProperties. Finally, the CanvasDrawingSession.DrawImage method is called to draw the input bitmap to the render target using the blur effect.

public void ProcessFrame(ProcessVideoFrameContext context)
{

    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {


        var gaussianBlurEffect = new GaussianBlurEffect
        {
            Source = inputBitmap,
            BlurAmount = (float)BlurAmount,
            Optimization = EffectOptimization.Speed
        };

        ds.DrawImage(gaussianBlurEffect);

    }
}

Adding your custom effect to your app

To use your video effect from your app, you must add a reference to the effect project to your app.

  1. In Solution Explorer, under your app project, right-click References and select Add reference.
  2. Expand the Projects tab, select Solution, and then select the check box for your effect project name. For this example, the name is VideoEffectComponent.
  3. Click OK.

Add your custom effect to a camera video stream

You can set up a simple preview stream from the camera by following the steps in the article Simple camera preview access. Following those steps will provide you with an initialized MediaCapture object that is used to access the camera's video stream.

To add your custom video effect to a camera stream, first create a new VideoEffectDefinition object, passing in the namespace and class name for your effect. Next, call the MediaCapture object's AddVideoEffectAsync method to add your effect to the specified stream. This example uses the MediaStreamType.VideoPreview value to specify that the effect should be added to the preview stream. If your app supports video capture, you could also use MediaStreamType.VideoRecord to add the effect to the capture stream. AddVideoEffectAsync returns an IMediaExtension object representing your custom effect. You can use the SetProperties method to set the configuration for your effect.

After the effect has been added, StartPreviewAsync is called to start the preview stream.

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect");

IMediaExtension videoEffect =
   await mediaCapture.AddVideoEffectAsync(videoEffectDefinition, MediaStreamType.VideoPreview);

videoEffect.SetProperties(new PropertySet() { { "FadeValue", .25 } });

await mediaCapture.StartPreviewAsync();

Add your custom effect to a clip in a MediaComposition

For general guidance on creating media compositions from video clips, see Media compositions and editing. The following code snippet shows the creation of a simple media composition that uses a custom video effect. A MediaClip object is created by calling CreateFromFileAsync, passing in a video file that was selected by the user with a FileOpenPicker, and the clip is added to a new MediaComposition. Next, a new VideoEffectDefinition object is created, passing the namespace and class name for your effect to the constructor. Finally, the effect definition is added to the VideoEffectDefinitions collection of the MediaClip object.

MediaComposition composition = new MediaComposition();
var clip = await MediaClip.CreateFromFileAsync(pickedFile);
composition.Clips.Add(clip);

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect", new PropertySet() { { "FadeValue", .5 } });

clip.VideoEffectDefinitions.Add(videoEffectDefinition);