Azure Media Services v3 release notes
To get notified when this page is updated, copy and paste this URL into your RSS feed reader:
https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+v3+release+notes%22&locale=en-us
To stay up to date with the most recent developments, this article provides information about:
- The latest releases
- Known issues
- Bug fixes
- Deprecated functionality
You can use the Azure portal to manage v3 live events, view v3 assets and jobs, get info about accessing APIs, and encrypt content. For all other management tasks (for example, managing transforms and jobs), use the REST API, CLI, or one of the supported SDKs.
For details, see Azure portal limitations for Media Services v3.
Basic Audio Analysis
The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file that contains only the keywords, transcription, and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of supported languages.
Customers using Indexer v1 and Indexer v2 should migrate to the Basic Audio Analysis preset.
For more information about the Basic Audio Analyzer mode, see Analyzing Video and Audio files. To learn to use the Basic Audio Analyzer mode with the REST API, see How to Create a Basic Audio Transform.
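As a sketch of what a Basic audio transform looks like on the wire, the JSON below follows the public AudioAnalyzerPreset schema; the description and language value are illustrative assumptions, and the "mode": "Basic" setting is what selects the new tier:

```python
import json

# Sketch of a Transform request body for the Basic Audio Analyzer mode.
# The "@odata.type" and "mode" values follow the AudioAnalyzerPreset REST
# schema; the description and audioLanguage are illustrative assumptions.
transform_body = {
    "properties": {
        "description": "Basic audio analysis (speech-to-text + VTT captions)",
        "outputs": [
            {
                "preset": {
                    "@odata.type": "#Microsoft.Media.AudioAnalyzerPreset",
                    "audioLanguage": "en-US",  # omit to let the service decide
                    "mode": "Basic"            # "Standard" adds diarization etc.
                }
            }
        ]
    }
}

print(json.dumps(transform_body, indent=2))
```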
Updates to most properties are now allowed when live events are stopped. In addition, users are allowed to specify a prefix for the static hostname for the live event's input and preview URLs. VanityUrl is now called useStaticHostName to better reflect the intent of the property.
Live events now have a StandBy state. See Live Events and Live Outputs in Media Services.
A live event supports receiving various input aspect ratios. Stretch mode allows customers to specify the stretching behavior for the output.
Live encoding now adds the capability of outputting fragments with a fixed key frame interval between 0.5 and 20 seconds.
If you create a Media Services account with the 2020-05-01 API version, it won't work with REST v2.
Support for the legacy PlayReady Protected Interoperable File Format (PIFF 1.1) encryption is now available in the Dynamic Packager. This provides support for legacy Smart TV sets from Samsung and LG that implemented the early drafts of the Common Encryption standard (CENC) published by Microsoft. The PIFF 1.1 format is also known as the encryption format that was previously supported by the Silverlight client library. Today, the only use case scenario for this encryption format is to target the legacy Smart TV market where there remains a non-trivial number of Smart TVs in some regions that only support Smooth Streaming with PIFF 1.1 encryption.
To use the new PIFF 1.1 encryption support, change the encryption value to 'piff' in the URL path of the Streaming Locator. For more details, see the Content Protection overview.
PIFF 1.1 support is provided as a backwards compatible solution for Smart TV (Samsung, LG) that implemented the early "Silverlight" version of Common Encryption. It is recommended to only use the PIFF format where needed for support of legacy Samsung or LG Smart TVs shipped between 2009-2015 that supported the PIFF 1.1 version of PlayReady encryption.
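For illustration, switching a streaming URL to PIFF 1.1 is just a change of the encryption token in the manifest path. The host, locator GUID, and manifest name below are made-up placeholders; the (encryption=...) token in the path is the part the release note says to change:

```python
# Hypothetical Smooth Streaming URL using the common CENC encryption token.
# Host, locator GUID, and manifest name are made-up placeholders.
cenc_url = (
    "https://example-amsaccount.streaming.media.azure.net/"
    "00000000-0000-0000-0000-000000000000/movie.ism/manifest(encryption=cenc)"
)

# Per the release note, legacy PIFF 1.1 devices are targeted by swapping the
# encryption value in the URL path to 'piff'.
piff_url = cenc_url.replace("(encryption=cenc)", "(encryption=piff)")

print(piff_url)
```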
Live transcription now supports 19 languages in 8 regions.
Protecting your content with Media Services and Azure AD
We published a tutorial called End-to-End content protection using Azure AD.
Live Video Analytics on IoT Edge preview release
The preview of Live Video Analytics on IoT Edge went public. For more information, see release notes.
Live Video Analytics on IoT Edge is an expansion to the Media Service family. It enables you to analyze live video with AI models of your choice on your own edge devices, and optionally capture and record that video. You can now build apps with real-time video analytics at the edge without worrying about the complexity of building and operating a live video pipeline.
Azure Media Services is now generally available in the following regions: "Germany North", "Germany West Central", "Switzerland North", and "Switzerland West". Customers can deploy Media Services to these regions using the Azure portal.
Improvements in documentation
Azure Media Player docs were migrated to the Azure documentation.
Improvements in media processors
- Improved support for interlaced sources in Video Analysis – such content is now de-interlaced correctly before being sent to inference engines.
- When generating thumbnails with the “Best” mode, the encoder now searches beyond 30 seconds to select a frame that is not monochromatic.
Azure Government cloud updates
Media Services is now generally available in the following Azure Government regions: USGov Arizona and USGov Texas.
Added CDN support for Origin-Assist Prefetch headers for both live and video-on-demand streaming. This feature is available to customers who have a direct contract with Akamai CDN. The Origin-Assist CDN-Prefetch feature involves the following HTTP header exchanges between Akamai CDN and the Azure Media Services origin:
|HTTP header|Values|Sender|Receiver|Purpose|
|---|---|---|---|---|
|CDN-Origin-Assist-Prefetch-Enabled|1 (default) or 0|CDN|Origin|To indicate CDN is prefetch enabled|
|CDN-Origin-Assist-Prefetch-Path|A relative path|Origin|CDN|To provide prefetch path to CDN|
|CDN-Origin-Assist-Prefetch-Request|1 (prefetch request) or 0 (regular request)|CDN|Origin|To indicate the request from CDN is a prefetch|
To see part of the header exchange in action, you can try the following steps:
- Use Postman or curl to issue a request to Media Services origin for an audio or video segment or fragment. Make sure to add the header CDN-Origin-Assist-Prefetch-Enabled: 1 in the request.
- In the response, you should see the header CDN-Origin-Assist-Prefetch-Path with a relative path as its value.
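The steps above can be simulated locally. The mock origin below is purely illustrative: it mimics a prefetch-enabled origin by answering a request that carries CDN-Origin-Assist-Prefetch-Enabled: 1 with a CDN-Origin-Assist-Prefetch-Path header pointing at a made-up next segment (the fragment paths are placeholders, not real Media Services output):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockOrigin(BaseHTTPRequestHandler):
    """Toy stand-in for a streaming origin that supports Origin-Assist."""

    def do_GET(self):
        self.send_response(200)
        # Only advertise a prefetch path when the client said it can prefetch.
        if self.headers.get("CDN-Origin-Assist-Prefetch-Enabled") == "1":
            # Relative path of the *next* segment; the value is a made-up example.
            self.send_header("CDN-Origin-Assist-Prefetch-Path",
                             "QualityLevels(128000)/Fragments(audio=20000000)")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), MockOrigin)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = ("http://127.0.0.1:%d/video.ism/QualityLevels(128000)/Fragments(audio=0)"
       % server.server_port)
req = urllib.request.Request(
    url, headers={"CDN-Origin-Assist-Prefetch-Enabled": "1"})
with urllib.request.urlopen(req) as resp:
    prefetch_path = resp.headers.get("CDN-Origin-Assist-Prefetch-Path")

server.shutdown()
print(prefetch_path)
```

Against a real origin, the same request/response pattern applies; only the host and segment paths differ.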
Live transcription Preview
Live transcription is now in public preview and available for use in the West US 2 region.
Live transcription is designed to work in conjunction with live events as an add-on capability. It is supported on both pass-through and Standard or Premium encoding live events. When this feature is enabled, the service uses the Speech-To-Text feature of Cognitive Services to transcribe the spoken words in the incoming audio into text. This text is then made available for delivery along with video and audio in MPEG-DASH and HLS protocols. Billing is based on a new add-on meter that is an additional cost to the live event when it is in the "Running" state. For details on live transcription and billing, see Live transcription.
Currently, live transcription is only available as a preview feature in the West US 2 region. It supports transcription of spoken words in English (en-us) only at this time.
The Token Replay Prevention feature released in limited regions back in September is now available in all regions. Media Services customers can now set a limit on the number of times the same token can be used to request a key or a license. For more information, see Token Replay Prevention.
New recommended live encoder partners
Added support for the following new recommended partner encoders for RTMP live streaming:
File Encoding enhancements
- A new Content Aware Encoding preset is now available. It produces a set of GOP-aligned MP4s by using content-aware encoding. Given any input content, the service performs an initial lightweight analysis of the input content. It uses those results to determine the optimal number of layers, appropriate bit rate, and resolution settings for delivery by adaptive streaming. This preset is particularly effective for low-complexity and medium-complexity videos, where the output files are at lower bit rates but at a quality that still delivers a good experience to viewers. The output will contain MP4 files with video and audio interleaved. For more information, see the open API specs.
- Improved performance and multi-threading for the resizer in Standard Encoder. Under specific conditions, customers should see a performance boost of 5-40% in VOD encoding. Low-complexity content encoded into multiple bit rates will see the highest performance increases.
- Standard encoding now maintains a regular GOP cadence for variable frame rate (VFR) content during VOD encoding when using the time-based GOP setting. This means that customers submitting mixed frame rate content that varies between 15-30 fps, for example, should now see regular GOP distances calculated on output to adaptive bitrate streaming MP4 files. This will improve the ability to switch seamlessly between tracks when delivering over HLS or DASH.
- Improved AV sync for variable frame rate (VFR) source content
Video Indexer, Video analytics
- Keyframes extracted using the VideoAnalyzer preset are now in the original resolution of the video instead of being resized. High-resolution keyframe extraction gives you original quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video.
Media Services v3
Live linear encoding of live events
Media Services v3 is announcing the preview of 24-hours-a-day, 365-days-a-year live linear encoding of live events.
Media Services v2
Deprecation of media processors
We are announcing deprecation of Azure Media Indexer and Azure Media Indexer 2 Preview. For the retirement dates, see the legacy components article. Azure Media Services Video Indexer replaces these legacy media processors.
For more information, see Migrate from Azure Media Indexer and Azure Media Indexer 2 to Azure Media Services Video Indexer.
Media Services v3
South Africa regional pair is open for Media Services
Media Services is now available in South Africa North and South Africa West regions.
For more information, see Clouds and regions in which Media Services v3 exists.
Media Services v2
Deprecation of media processors
We are announcing deprecation of the Windows Azure Media Encoder (WAME) and Azure Media Encoder (AME) media processors, which are being retired. For the retirement dates, see this legacy components article.
When streaming content protected with token restriction, end users need to obtain a token that is sent as part of the key delivery request. The Token Replay Prevention feature allows Media Services customers to set a limit on how many times the same token can be used to request a key or a license. For more information, see Token Replay Prevention.
As of July, the preview feature was only available in US Central and US West Central.
You can now trim or subclip a video when encoding it using a Job.
Azure Monitor support for Media Services diagnostic logs and metrics
You can now use Azure Monitor to view telemetry data emitted by Media Services.
- Use the Azure Monitor diagnostic logs to monitor requests sent by the Media Services Key Delivery endpoint.
- Monitor metrics emitted by Media Services Streaming Endpoints.
For details, see Monitor Media Services metrics and diagnostic logs.
Multi-audio track support in Dynamic Packaging
Dynamic Packaging now supports multiple audio tracks in the HLS output (version 4 or above) when streaming Assets that have multiple audio tracks with multiple codecs and languages.
Korea regional pair is open for Media Services
Media Services is now available in Korea Central and Korea South regions.
For more information, see Clouds and regions in which Media Services v3 exists.
Added updates that include Media Services performance improvements.
- The maximum file size supported for processing was updated. See Quotas and limits.
- Encoding speed improvements.
- FaceDetectorPreset was added to the built-in analyzer presets.
- ContentAwareEncodingExperimental was added to the built-in encoder presets. For more information, see Content-aware encoding.
Dynamic Packaging now supports Dolby Atmos. For more information, see Audio codecs supported by dynamic packaging.
You can now specify a list of asset or account filters, which would apply to your Streaming Locator. For more information, see Associate filters with Streaming Locator.
Media Services v3 is now supported in Azure national clouds. Not all features are available in all clouds yet. For details, see Clouds and regions in which Azure Media Services v3 exists.
Microsoft.Media.JobOutputProgress event was added to the Azure Event Grid schemas for Media Services.
Media Encoder Standard and MPI files
When encoding with Media Encoder Standard to produce MP4 file(s), a new .mpi file is generated and added to the output Asset. This MPI file is intended to improve performance for dynamic packaging and streaming scenarios.
You should not modify or remove the MPI file, or take any dependency in your service on the existence (or not) of such a file.
Updates from the GA release of the V3 API include:
- The PresentationTimeRange properties are no longer 'required' for Asset Filters and Account Filters.
- The $top and $skip query options for Jobs and Transforms have been removed and $orderby was added. As part of adding the new ordering functionality, it was discovered that the $top and $skip options had accidentally been exposed previously even though they are not implemented.
- Enumeration extensibility was re-enabled. This feature was enabled in the preview versions of the SDK and got accidentally disabled in the GA version.
- Two predefined streaming policies have been renamed. SecureStreaming is now MultiDrmCencStreaming. SecureStreamingWithFairPlay is now Predefined_MultiDrmStreaming.
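To illustrate the $orderby change above, a Jobs list request now carries the ordering in the query string instead of the removed $top/$skip options. The ARM path segments and API version below are placeholder assumptions; only the $orderby query option is the point:

```python
from urllib.parse import urlencode

# Placeholder ARM identifiers; substitute your own subscription, resource
# group, account, and transform names.
base = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.Media/mediaServices/{account}"
    "/transforms/{transform}/jobs"
)
query = urlencode({
    "api-version": "2018-07-01",            # assumption: a v3 GA API version
    "$orderby": "properties/created desc",  # newest jobs first
})

list_url = base + "?" + query
print(list_url)
```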
The CLI 2.0 module is now available for Azure Media Services v3 GA (v2.0.50).
- az ams account
- az ams account-filter
- az ams asset
- az ams asset-filter
- az ams content-key-policy
- az ams job
- az ams live-event
- az ams live-output
- az ams streaming-endpoint
- az ams streaming-locator
- az ams account mru - enables you to manage Media Reserved Units. For more information, see Scale Media Reserved Units.
New features and breaking changes
- Default values for expiry time (Now+23h) and permissions (Read) added in the az ams asset get-sas-url command.
- The --output-assets argument now accepts a space-separated list of assets in 'assetName=label' format. An asset without a label can be sent like this: 'assetName='.
Streaming Locator commands
- az ams streaming locator base command replaced with az ams streaming-locator.
- --alternative-media-id argument added.
- --content-keys argument updated.
Streaming Policy commands
- az ams streaming policy base command replaced with az ams streaming-policy.
- Encryption parameters support added in az ams streaming-policy create.
Transform commands
- --preset-names argument replaced with --preset. Now you can only set one output/preset at a time (to add more, run az ams transform output add). Also, you can set a custom StandardEncoderPreset by passing the path to your custom JSON.
- az ams transform output remove can be performed by passing the output index to remove.
- --relative-priority, --on-error, --audio-language, and --insights-to-extract arguments added in the az ams transform create and az ams transform output add commands.
October 2018 - GA
This section describes Azure Media Services (AMS) October updates.
REST v3 GA release
The REST v3 GA release includes more APIs for Live, Account/Asset level manifest filters, and DRM support.
Azure Resource Management
Support for Azure Resource Management enables unified management and operations API (now everything in one place).
Starting with this release, you can use Resource Manager templates to create Live Events.
Improvement of Asset operations
The following improvements were introduced:
- Ingest from HTTP(s) URLs or Azure Blob Storage SAS URLs.
- Specify your own container names for Assets.
- Easier output support to create custom workflows with Azure Functions.
New Transform object
The new Transform object simplifies the Encoding model. The new object makes it easy to create and share encoding Resource Manager templates and presets.
Azure Active Directory authentication and Azure RBAC
Azure AD Authentication and Azure role-based access control (Azure RBAC) enable secure Transforms, LiveEvents, Content Key Policies, or Assets by Role or Users in Azure AD.
Languages supported in Media Services v3: .NET Core, Java, Node.js, Ruby, TypeScript, Python, Go.
Live encoding updates
The following live encoding updates are introduced:
- New low-latency mode for live (10 seconds end-to-end).
- Improved RTMP support (increased stability and more source encoder support).
- RTMPS secure ingest.
- When you create a Live Event, you now get 4 ingest URLs. The 4 ingest URLs are almost identical: they have the same streaming token (AppId), and only the port number part is different. Two of the URLs are primary and backup for RTMPS.
- 24-hour transcoding support.
- Improved ad-signaling support in RTMP via SCTE35.
Improved Event Grid support
You can see the following Event Grid support improvements:
- Azure Event Grid integration for easier development with Logic Apps and Azure Functions.
- Subscribe for events on Encoding, Live Channels, and more.
CMAF and 'cbcs' encryption support for Apple HLS (iOS 11+) and MPEG-DASH players that support CMAF.
Video Indexer GA release was announced in August. For new information about currently supported features, see What is Video Indexer.
Plans for changes
Azure CLI 2.0
The Azure CLI 2.0 module that includes operations on all features (including Live, Content Key Policies, Account/Asset Filters, Streaming Policies) is coming soon.
Only customers who used the preview API for Asset or Account Filters are affected by the following issue.
If you created Asset or Account Filters between 09/28 and 10/12 with the Media Services v3 CLI or APIs, you need to remove all Asset and Account Filters and re-create them due to a version conflict.
May 2018 - Preview
The following features are present in the .NET SDK:
- Transforms and Jobs to encode or analyze media content. For examples, see Stream files and Analyze.
- Streaming Locators for publishing and streaming content to end-user devices
- Streaming Policies and Content Key Policies to configure key delivery and content protection (DRM) when delivering content.
- Live Events and Live Outputs to configure the ingest and archiving of live streaming content.
- Assets to store and publish media content in Azure Storage.
- Streaming Endpoints to configure and scale dynamic packaging, encryption, and streaming for both live and on-demand media content.
- When submitting a job, you can specify to ingest your source video using HTTPS URLs, SAS URLs, or paths to files located in Azure Blob storage. Currently, Media Services v3 does not support chunked transfer encoding over HTTPS URLs.
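As a sketch of the HTTPS-URL ingest option above, the JSON below follows the public v3 Job schema (JobInputHttp and JobOutputAsset); the source URL and output asset name are invented placeholders:

```python
import json

# Illustrative Job request body: ingest directly from an HTTPS (or SAS) URL
# instead of an existing input Asset. URL and asset name are placeholders.
job_body = {
    "properties": {
        "input": {
            "@odata.type": "#Microsoft.Media.JobInputHttp",
            "files": ["https://example.com/media/ignite-short.mp4"],
        },
        "outputs": [
            {
                "@odata.type": "#Microsoft.Media.JobOutputAsset",
                "assetName": "output-asset",
            }
        ],
    }
}

print(json.dumps(job_body, indent=2))
```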
Ask questions, give feedback, get updates
Check out the Azure Media Services community article to see different ways you can ask questions, give feedback, and get updates about Media Services.