How Decoders Use IAMVideoAccelerator

Microsoft DirectShow 9.0

The IAMVideoAccelerator interface enables generic video acceleration operations, of which DirectX Video Acceleration (DirectX VA) is one instance. For acceleration schemes other than DirectX VA, the decoder and the video driver must both adhere to a mutually understood protocol.

This section describes the general order of operations that any decoder should follow when using this interface. Further information specific to DirectX VA-based decoders can be found in Mapping DirectX Video Acceleration to IAMVideoAccelerator.

  • **Note**   This interface is available in Microsoft Windows® 2000 and later.

The IAMVideoAccelerator interface is exposed on the input pin of the Overlay Mixer or Video Mixing Renderer (VMR). The IAMVideoAcceleratorNotify interface is exposed on the decoder's output pin. The sequence of events for connecting the filter pins is as follows:

  1. The filter graph manager calls the decoder output pin's IPin::Connect method. An AM_MEDIA_TYPE structure is an optional parameter.
    • AM_MEDIA_TYPE is a data structure that describes a type of media. It contains a majortype GUID (which in our case should be MEDIATYPE_Video), a subtype GUID (which in our case should be a video accelerator GUID), and a variety of other things. Among those other things are a format type GUID and a corresponding format block containing information about the media, including in our case the width and height of an uncompressed video picture, most likely in an MPEG1VIDEOINFO, VIDEOINFOHEADER, MPEG2VIDEOINFO, or VIDEOINFOHEADER2 structure.
    • The AM_MEDIA_TYPE structure, if present, tells the decoder to operate using the specified media type, which may be fully specified or partially specified. If it is fully specified, the decoder would normally simply attempt to operate with that media type. If it is partially specified, the decoder attempts to find a fully specified, compatible mode of operation that it can use to connect in a manner consistent with the partially specified media type.
    • The ordinary way to find a fully specified media type for a connection is to run through the list of every fully specified media type the output pin supports that is compatible with the partially specified media type, attempting to connect with each one until an attempt succeeds. The process is similar when no AM_MEDIA_TYPE is supplied in the IPin::Connect call, except that the output pin must check all of its media types.
  2. If the decoder wants to check whether a specific AM_MEDIA_TYPE (including a video accelerator GUID) is supported by the downstream input pin, it can call that pin's IPin::QueryAccept (with the video accelerator GUID as the subtype of the AM_MEDIA_TYPE) or it can simply attempt to connect to that pin as described in item 5 below.
  3. If the decoder does not know which video accelerator GUIDs the downstream input pin supports, and does not wish to propose a particular candidate video accelerator GUID by calling the downstream input pin's IPin::QueryAccept, the decoder can call IAMVideoAccelerator::GetVideoAcceleratorGUIDs to get a list of the video accelerator GUIDs that the pin supports. (The first sketch following this list illustrates this negotiation.)
  4. For any particular video accelerator GUID of interest, the decoder can call the downstream input pin's IAMVideoAccelerator::GetUncompFormatsSupported to get a list of the DDPIXELFORMAT pixel formats that can be used to render with that GUID. The returned list should be considered to be in decreasing preference order (that is, with the most preferred format listed first).
  5. The decoder calls the downstream input pin's IPin::ReceiveConnection, passing an AM_MEDIA_TYPE with the proper video accelerator GUID as the subtype of the media type. This sets up the connection for operation, including the creation of the uncompressed output surfaces. (The surfaces are allocated using the width and height found in the AM_MEDIA_TYPE, the number of surfaces obtained by the call described below, and whatever other information the video accelerator has available and wishes to use for that purpose, such as the video accelerator GUID itself.) If the downstream input pin rejects the video accelerator GUID or some other aspect of the connection, IPin::ReceiveConnection can fail; the failure is indicated in the returned HRESULT, and the decoder can try the call again, for example with a different video accelerator GUID in the AM_MEDIA_TYPE structure.
    • **Note**   This is another way, and the most definitive way, for the decoder to determine what the downstream input pin supports: simply call IPin::ReceiveConnection to attempt the connection, and then check whether the attempt succeeded.
    • During the IPin::ReceiveConnection call, the renderer calls the decoder's IAMVideoAcceleratorNotify::GetUncompSurfacesInfo, passing it the video accelerator GUID, in order to figure out how many uncompressed surfaces to allocate. The decoder fills in and returns an AMVAUncompBufferInfo structure, which contains the minimum and maximum number of surfaces to be allocated of the particular type, and a DDPIXELFORMAT structure describing the pixel format of the surfaces to be allocated. (The second sketch following this list shows a decoder-side implementation of these callbacks.)
    • Minor note: Other than the video accelerator GUID, nothing is actually passed in to the decoder in the call to IAMVideoAcceleratorNotify::GetUncompSurfacesInfo; the AMVAUncompBufferInfo structure is an output filled in by the decoder.
  6. The renderer calls the decoder's IAMVideoAcceleratorNotify::SetUncompSurfacesInfo, passing to the decoder the actual number of uncompressed surfaces that were allocated.
  7. The renderer calls the decoder's IAMVideoAcceleratorNotify::GetCreateVideoAcceleratorData to get any data needed to initialize the video accelerator.
  8. The decoder calls IAMVideoAccelerator::GetCompBufferInfo, passing it a video accelerator GUID, an AMVAUncompDataInfo structure, and the number of compressed buffer types, to get in return a set of AMVACompBufferInfo data structures, one corresponding to each type of compressed data buffer used by the video accelerator GUID.
    • The AMVAUncompDataInfo structure contains the width and height of the decoded uncompressed data (in pixels) and the DDPIXELFORMAT of the uncompressed picture.

    • The AMVACompBufferInfo data structures returned each contain:

      • The number of compressed buffers needed of the specific type.

      • The width and height of the surface to create (fields which may or may not have any actual meaning).

        • **Note**   The DirectDraw surface allocation operation for the compressed buffers does not currently provide for the width or height of these surfaces to be greater than or equal to 2^15 (32,768), although the surface allocation call may not overtly fail if this limit is violated. Therefore, the driver should structure its requests for compressed buffer memory to avoid such extreme sizes. For example, rather than requesting a buffer with a width of 1 and a height of 65,536, the driver should request a buffer with a width of 1,024 and a height of 64.

      • The total number of bytes to be used by the surface.

      • A DDSCAPS2 structure describing the capabilities to use when creating the DirectDrawSurface objects that store compressed data.

      • A DDPIXELFORMAT structure describing the pixel format used to create surfaces to store compressed data (a field which may or may not have any actual meaning).
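
The following C++ fragment sketches the decoder side of this negotiation (steps 3, 4, 5, and 8 above). It is a minimal sketch, not production code: error handling is abbreviated, the media type is built with a bare VIDEOINFOHEADER2 on the stack, a real decoder would examine the returned pixel formats rather than merely counting them, and the count-then-fetch calling pattern and exact method signatures should be checked against the DirectShow headers (videoacc.h).

```cpp
#include <dshow.h>
#include <ddraw.h>
#include <videoacc.h>   // IAMVideoAccelerator, AMVAUncompDataInfo, AMVACompBufferInfo

// Sketch of steps 3, 4, 5, and 8: enumerate accelerator GUIDs, probe pixel
// formats, attempt the connection, then query the compressed buffer types.
HRESULT NegotiateAcceleratorConnection(IPin* pDecoderOutputPin,
                                       IPin* pRendererInputPin,
                                       DWORD dwWidth, DWORD dwHeight)
{
    IAMVideoAccelerator* pAccel = NULL;   // exposed by the renderer's input pin
    HRESULT hr = pRendererInputPin->QueryInterface(IID_IAMVideoAccelerator,
                                                   (void**)&pAccel);
    if (FAILED(hr))
        return hr;

    // Step 3: ask which video accelerator GUIDs the pin supports.
    DWORD cGuids = 0;
    hr = pAccel->GetVideoAcceleratorGUIDs(&cGuids, NULL);        // query the count
    if (FAILED(hr) || cGuids == 0)
    {
        pAccel->Release();
        return FAILED(hr) ? hr : E_FAIL;
    }
    GUID* pGuids = new GUID[cGuids];
    hr = pAccel->GetVideoAcceleratorGUIDs(&cGuids, pGuids);      // query the GUIDs

    HRESULT hrConnect = E_FAIL;
    for (DWORD i = 0; SUCCEEDED(hr) && FAILED(hrConnect) && i < cGuids; i++)
    {
        // Step 4: check that at least one pixel format can be rendered with
        // this GUID (a real decoder would fetch the formats and pick one).
        DWORD cFormats = 0;
        if (FAILED(pAccel->GetUncompFormatsSupported(&pGuids[i], &cFormats, NULL))
            || cFormats == 0)
            continue;

        // Step 5: propose the GUID to the renderer by attempting the connection.
        VIDEOINFOHEADER2 vih = {0};                 // minimal format block
        vih.bmiHeader.biSize   = sizeof(BITMAPINFOHEADER);
        vih.bmiHeader.biWidth  = (LONG)dwWidth;
        vih.bmiHeader.biHeight = (LONG)dwHeight;

        AM_MEDIA_TYPE mt = {0};
        mt.majortype  = MEDIATYPE_Video;
        mt.subtype    = pGuids[i];                  // the video accelerator GUID
        mt.formattype = FORMAT_VideoInfo2;
        mt.cbFormat   = sizeof(vih);
        mt.pbFormat   = (BYTE*)&vih;                // real code: CoTaskMemAlloc a copy

        hrConnect = pRendererInputPin->ReceiveConnection(pDecoderOutputPin, &mt);
        if (SUCCEEDED(hrConnect))
        {
            // Step 8: query the compressed buffer types used with this GUID.
            AMVAUncompDataInfo uncompInfo = {0};
            uncompInfo.dwUncompWidth  = dwWidth;
            uncompInfo.dwUncompHeight = dwHeight;
            // uncompInfo.ddUncompPixelFormat would be the format chosen in step 4.

            DWORD cTypes = 0;
            hr = pAccel->GetCompBufferInfo(&pGuids[i], &uncompInfo, &cTypes, NULL);
            if (SUCCEEDED(hr) && cTypes > 0)
            {
                AMVACompBufferInfo* pCompInfo = new AMVACompBufferInfo[cTypes];
                hr = pAccel->GetCompBufferInfo(&pGuids[i], &uncompInfo,
                                               &cTypes, pCompInfo);
                // ... the decoder keeps pCompInfo[] for later GetBuffer/Execute calls ...
                delete [] pCompInfo;
            }
        }
    }

    delete [] pGuids;
    pAccel->Release();
    return FAILED(hrConnect) ? hrConnect : hr;
}
```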

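During IPin::ReceiveConnection, calls also flow in the other direction: the renderer calls back into the decoder output pin's IAMVideoAcceleratorNotify interface (steps 5 through 7 above). The following is a minimal sketch of those three callbacks. The class name, member variables, surface counts, and DECODER_MISC_DATA contents are all illustrative, the IUnknown and pin base-class plumbing are omitted, and the assumption that ppMiscData is allocated with CoTaskMemAlloc and freed by the caller should be verified against the SDK documentation.

```cpp
#include <dshow.h>
#include <ddraw.h>
#include <videoacc.h>   // IAMVideoAcceleratorNotify, AMVAUncompBufferInfo
#include <string.h>

// Illustrative creation data; the real contents depend on the acceleration GUID.
struct DECODER_MISC_DATA { DWORD dwReserved; };

// Sketch of the decoder output pin's side of steps 5 through 7.
class CDecoderOutputPin : public IAMVideoAcceleratorNotify
{
public:
    // Step 5 (callback): report how many uncompressed surfaces we would like
    // and in what pixel format. Only the GUID is really an input here.
    STDMETHODIMP GetUncompSurfacesInfo(const GUID* pGuid,
                                       AMVAUncompBufferInfo* pInfo)
    {
        if (!IsEqualGUID(*pGuid, m_guidAccel))
            return E_FAIL;
        pInfo->dwMinNumSurfaces    = 4;   // illustrative: references plus display
        pInfo->dwMaxNumSurfaces    = 8;   // illustrative: allows deeper queueing
        pInfo->ddUncompPixelFormat = m_ddUncompFormat;
        return S_OK;
    }

    // Step 6 (callback): the renderer reports how many surfaces it allocated.
    STDMETHODIMP SetUncompSurfacesInfo(DWORD dwActualUncompSurfacesAllocated)
    {
        m_dwSurfacesAllocated = dwActualUncompSurfacesAllocated;
        return S_OK;
    }

    // Step 7 (callback): return any data the driver needs to create the
    // video accelerator object.
    STDMETHODIMP GetCreateVideoAcceleratorData(const GUID* pGuid,
                                               DWORD* pdwSizeMiscData,
                                               void** ppMiscData)
    {
        void* pData = CoTaskMemAlloc(sizeof(m_miscData));  // assumed: caller frees
        if (pData == NULL)
            return E_OUTOFMEMORY;
        memcpy(pData, &m_miscData, sizeof(m_miscData));
        *pdwSizeMiscData = sizeof(m_miscData);
        *ppMiscData      = pData;
        return S_OK;
    }

private:
    GUID              m_guidAccel;           // accelerator GUID chosen at connect time
    DDPIXELFORMAT     m_ddUncompFormat;      // chosen uncompressed pixel format
    DWORD             m_dwSurfacesAllocated;
    DECODER_MISC_DATA m_miscData;            // accelerator-specific creation data
};
```
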
The following is a description of IAMVideoAccelerator use during operation after initialization:

  1. For each uncompressed surface, the decoder calls IAMVideoAccelerator::BeginFrame to begin the processing to create the output picture. When it does this, the decoder passes an AMVABeginFrameInfo structure.
    • The AMVABeginFrameInfo structure contains an index for a destination buffer, a pointer to some data to send downstream, and a pointer to a place where the accelerator can put some data for the decoder to read.

    • NOTE 1: The accelerator does not actually receive the destination buffer index, as it is translated by the renderer before going downstream.

    • NOTE 2: IAMVideoAccelerator::BeginFrame can be called more than once between calls to IAMVideoAccelerator::EndFrame.

    • NOTE 3: There is no assumption within the interface operation that IAMVideoAccelerator::BeginFrame and IAMVideoAccelerator::EndFrame need to be called for the processing of every individual picture in the bitstream.

      What IAMVideoAccelerator::BeginFrame does, as far as the interface is concerned, is create an association within the renderer between an index and an uncompressed surface. It also provides a means to call a specific function in a device driver (with support of a means of passing arbitrary data back and forth between the decoder and the device driver).

      (However, in DirectX VA operation there is a requirement described below that IAMVideoAccelerator::BeginFrame and IAMVideoAccelerator::EndFrame do need to be called for the processing of every individual picture in the bitstream.)

  2. For sending compressed data to the accelerator, the decoder calls the following methods (a condensed sketch of these calls, together with BeginFrame, EndFrame, and DisplayFrame, appears at the end of this section):
    • IAMVideoAccelerator::QueryRenderStatus to determine whether a buffer is safe for reading from or writing to.
    • IAMVideoAccelerator::GetBuffer to lock and obtain access to a specified buffer (if it has not previously called this to get that access). GetBuffer can also be used to get a copy of the contents of the last uncompressed output picture for which IAMVideoAccelerator::BeginFrame was called, provided that IAMVideoAccelerator::EndFrame has not been called for that destination buffer index. If the DDI returns a render status of DDERR_WASSTILLDRAWING for the requested buffer, GetBuffer operates a sleep loop internally until this condition is cleared. In order to call GetBuffer, the decoder needs some information from an AMVACompBufferInfo data structure, obtained by calling IAMVideoAccelerator::GetCompBufferInfo.
    • IAMVideoAccelerator::Execute to indicate that the data in a set of compressed buffers, as indicated in an array of AMVABUFFERINFO data structures, should be processed. A function code dwFunction is passed to the driver in this call, along with an lpPrivateInputData pointer to some data to send downstream and an lpPrivateOutputData pointer to a place where the downstream process can put some data for the decoder to read.
    • IAMVideoAccelerator::ReleaseBuffer to indicate that the decoder has completed its use of a specified buffer for the moment and no longer needs locked access to it. (If the decoder wishes to continue using the buffer, it can simply refrain from calling IAMVideoAccelerator::ReleaseBuffer for the moment, avoiding the need to call IAMVideoAccelerator::GetBuffer again until it is truly finished with the buffer.) The decoder should not write into the buffer after Execute is called until QueryRenderStatus indicates that the buffer is safe for writing.
  3. To complete output processing for a destination buffer, the decoder calls IAMVideoAccelerator::EndFrame. It can pass some arbitrary data downstream with this call, and that is essentially all the call does. The call does not carry a destination buffer index, so the decoder cannot indicate to the accelerator precisely which destination buffer is complete unless that indication is contained in the arbitrary data that is passed.
  4. To display a frame, the decoder calls IAMVideoAccelerator::DisplayFrame with the index of the frame to display and an IMediaSample interface pointer carrying start and stop time stamps and relevant flags such as dwTypeSpecificFlags (in the AM_SAMPLE2_PROPERTIES structure) and dwInterlaceFlags (in the VIDEOINFOHEADER2 structure). The decoder must verify that all decompression operations that affect the content of the frame have completed before calling DisplayFrame.
  5. Finally, upon completion of all processing, the decoder should indicate completion of any output frames that were begun but not yet ended by calling IAMVideoAccelerator::EndFrame, and should release all of its locked buffers by calling IAMVideoAccelerator::ReleaseBuffer for each unreleased buffer.
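
The following C++ fragment condenses one pass through this loop (steps 1 through 4 above) for a decoder that processes a picture with a single compressed buffer. It is a sketch under several assumptions: the function code DECODE_FUNCTION, the buffer indices, and the empty private-data blocks are placeholders whose meaning depends on the acceleration GUID; the bitstream is assumed to fit in the compressed buffer; a real decoder would react to DDERR_WASSTILLDRAWING from QueryRenderStatus; and the structure member names should be verified against the DirectShow headers (videoacc.h).

```cpp
#include <dshow.h>
#include <ddraw.h>
#include <videoacc.h>   // IAMVideoAccelerator and the AMVA* structures
#include <string.h>

// One pass through the per-picture loop: BeginFrame, fill and Execute a
// compressed buffer, EndFrame, DisplayFrame.
HRESULT DecodeOnePicture(IAMVideoAccelerator* pAccel,
                         DWORD dwDestSurface,     // uncompressed destination index
                         DWORD dwTypeIndex,       // compressed buffer type
                         DWORD dwBufferIndex,     // compressed buffer within the type
                         IMediaSample* pSample,   // carries time stamps and flags
                         const void* pBitstream, DWORD cbBitstream)
{
    // Step 1: associate dwDestSurface with the output picture being built.
    AMVABeginFrameInfo beginInfo = {0};
    beginInfo.dwDestSurfaceIndex = dwDestSurface;
    beginInfo.pInputData         = NULL;   // optional data to send to the driver
    beginInfo.dwSizeInputData    = 0;
    beginInfo.pOutputData        = NULL;   // optional data returned by the driver
    beginInfo.dwSizeOutputData   = 0;
    HRESULT hr = pAccel->BeginFrame(&beginInfo);
    if (FAILED(hr))
        return hr;

    // Step 2: confirm the compressed buffer is safe to write, then lock it.
    hr = pAccel->QueryRenderStatus(dwTypeIndex, dwBufferIndex, 0);
    // (a real decoder would wait, or pick another buffer, on DDERR_WASSTILLDRAWING)

    void* pBuffer = NULL;
    LONG  lStride = 0;
    hr = pAccel->GetBuffer(dwTypeIndex, dwBufferIndex, FALSE, &pBuffer, &lStride);
    if (FAILED(hr))
        return hr;

    memcpy(pBuffer, pBitstream, cbBitstream);   // assumes cbBitstream fits the buffer

    // Describe the buffer to process and hand it to the accelerator.
    AMVABUFFERINFO bufInfo = {0};
    bufInfo.dwTypeIndex   = dwTypeIndex;
    bufInfo.dwBufferIndex = dwBufferIndex;
    bufInfo.dwDataOffset  = 0;
    bufInfo.dwDataSize    = cbBitstream;

    const DWORD DECODE_FUNCTION = 1;            // placeholder function code
    hr = pAccel->Execute(DECODE_FUNCTION,
                         NULL, 0,               // lpPrivateInputData, size
                         NULL, 0,               // lpPrivateOutputData, size
                         1, &bufInfo);

    // Unlock the buffer; do not write to it again until QueryRenderStatus
    // reports that it is safe for writing.
    pAccel->ReleaseBuffer(dwTypeIndex, dwBufferIndex);

    // Step 3: output processing for this destination buffer is complete.
    AMVAEndFrameInfo endInfo = {0};
    endInfo.dwSizeMiscData = 0;                 // arbitrary data for the driver
    endInfo.pMiscData      = NULL;
    hr = pAccel->EndFrame(&endInfo);

    // Step 4: hand the finished picture to the renderer for display.
    return pAccel->DisplayFrame(dwDestSurface, pSample);
}
```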