Overview of the Expression Encoder SDK

The Microsoft Expression Encoder SDK gives developers access to all the tools they need to create encoded video and audio that meets their streaming or broadcasting needs. This topic provides an overview of the options available to users of the SDK.

Creating offline jobs

Offline jobs are used for encoding batches of media files and performing minor video editing. This mode can only use files; it doesn’t support capturing from a source or the screen.


The following are objects available for offline jobs.

  • Job   This object handles the actual encoding of the media. It handles multithreading and creating streams and determines the output directory for all encoded items. Presets applied at the job level are applied in turn to all MediaItems currently attached to the job. Jobs can also be saved and loaded, preserving all settings.

  • MediaItem   This is one of the primary objects. Video editing, output type, overlays, captions, and so on are all handled at the MediaItem level.

  • ScriptCommands   These are captions that can be entered one at a time and set to display at different time periods during playback.

  • CaptionFiles   Use this item to import files that contain captions and the time for those captions. It is very similar to importing a series of script commands from a file.

  • Markers   Use markers to set locations in the media file that can be quickly accessed after encoding the file. You can add an optional text comment to the marker point to give more details.

  • OverlayFileName   Sets the name of the media to be used as an overlay. Expression Encoder supports images, video, and audio files of varying types. For more information on supported file types, see Expression Encoder Help. You can customize the time and duration of the overlay. If the overlay is an image or video, you can also set the size and transparency (OverlayOpacity).

  • Sources   This is a sub-object of a MediaItem that always starts with a single source at index 0. To add another media file as a header or trailer to that media item, insert another source before or after the main source file.

  • Clips   Sources can be broken down into clips for editing. Each source starts with one clip. Breaking a clip into multiple parts and changing the start and stop time allows for more precise editing of media sources.

  • Thumbnails   You can set the video thumbnail to the frame at a specific time in the video. You can also extract thumbnails as images that depict the frames displayed at marker positions. The frames are saved as separate image files that some players can use as chapter markers to facilitate seeking during playback.
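Taken together, the objects above support a very short encode path. The following minimal sketch, which assumes the SDK's default preset and hypothetical file paths, creates a job, attaches a single MediaItem, and encodes it:

```csharp
using Microsoft.Expression.Encoder;

// Minimal offline workflow: one file in, one encoded file out.
// The input and output paths below are assumptions for illustration.
using (Job job = new Job())
{
    job.MediaItems.Add(new MediaItem(@"c:\video.wmv"));  // source file to encode
    job.OutputDirectory = @"c:\encoded";                 // all encoded items land here
    job.Encode();                                        // blocks until encoding finishes
}
```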

Sample Code

This code demonstrates encoding multiple media files. It also demonstrates overlays, trimming files, inserting media into other files, markers, script commands, and caption files.

using (Job job = new Job())
{
    // Create media items
    MediaItem media1 = new MediaItem(@"c:\video.wmv");
    MediaItem media2 = new MediaItem(@"c:\video2.mp4");

    // Add an overlay and set its duration mode
    media1.OverlayFileName = @"c:\overlay.avi";
    media1.OverlayLayoutMode = OverlayLayoutMode.WholeSequence;

    // Add markers and script commands
    TimeSpan midpoint = TimeSpan.FromMilliseconds(media1.FileDuration.TotalMilliseconds * 0.5);
    media1.Markers.Add(new Marker(midpoint, "Midpoint"));
    media1.Markers[0].GenerateThumbnail = true;
    media1.ScriptCommands.Add(new ScriptCommand(midpoint, "caption", "Half way through!"));

    // Add a trailer to media1, then shorten the original clip's end time
    // so the total file is the same duration as the original
    media1.Sources.Add(new Source(@"c:\trailer.wmv"));
    media1.Sources[0].Clips[0].EndTime -= media1.Sources[1].MediaFile.Duration;

    // Add a caption file
    media2.Sources[0].CaptionFiles.Add(new CaptionFile(@"c:\captionfile.dfxp"));

    // Set a custom thumbnail at the three-quarter point
    media2.ThumbnailMode = ThumbnailMode.Custom;
    media2.ThumbnailTime = TimeSpan.FromMinutes(media2.FileDuration.TotalMinutes * 0.75);

    // Add the media items to the job and encode
    job.MediaItems.Add(media1);
    job.MediaItems.Add(media2);
    job.OutputDirectory = @"c:\encoded";
    job.Encode();
}

Relevant SDK samples

The following samples show how some of the SDK objects can be used to edit and encode media.

Working with Live Broadcasting projects

Live broadcasting refers to the functionality of capturing audio and video from a live source, a file-based source, or the computer screen, and then broadcasting the final encoded media from a port or to a server. You can also save your broadcast directly to a file. This mode doesn’t support many of the editing options that are available in offline mode.


The following are objects available for Live Broadcasting projects.

  • LiveJob   Unlike its offline counterpart, this object has much more importance in a Live Broadcasting project. It contains references to all media and devices to be encoded, the output format, and the publishing method, for example, streaming to a server, broadcasting from a port, or archiving to disk storage.

  • LiveSources   These objects are either LiveFileSource or LiveDeviceSource objects and hold a reference to the job. Activating a source sets it to be the initial source that is encoded or, if encoding is already occurring, sets it to be the next source that is encoded. You can also set how a LiveFileSource behaves once the source completes playback: you can have the source hold on the last frame, loop, or jump to another source.
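These two objects are enough for a minimal live workflow. The following sketch, which assumes at least one video and one audio device are present and uses an arbitrary port number, captures the first available devices and serves a pull stream:

```csharp
using System;
using System.Collections.ObjectModel;
using Microsoft.Expression.Encoder.Devices;
using Microsoft.Expression.Encoder.Live;

// Minimal live broadcast sketch: capture the first video and audio devices
// and serve a pull stream on port 8080 (the port number is arbitrary).
using (LiveJob job = new LiveJob())
{
    Collection<EncoderDevice> video = EncoderDevices.FindDevices(EncoderDeviceType.Video);
    Collection<EncoderDevice> audio = EncoderDevices.FindDevices(EncoderDeviceType.Audio);
    LiveDeviceSource source = job.AddDeviceSource(video[0], audio[0]);
    job.ActivateSource(source);  // make this the source that is encoded
    job.PublishFormats.Add(new PullBroadcastPublishFormat() { BroadcastPort = 8080 });
    job.StartEncoding();
    Console.ReadKey();           // broadcast until a key is pressed
    job.StopEncoding();
}
```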

Sample Code

This code demonstrates getting device lists, setting devices, adding file sources, setting file source behavior, setting output format and publishing types.

using (LiveJob job = new LiveJob())
{
    // Get lists of all video and audio devices that can be used
    Collection<EncoderDevice> videoDevs = EncoderDevices.FindDevices(EncoderDeviceType.Video);
    Collection<EncoderDevice> audioDevs = EncoderDevices.FindDevices(EncoderDeviceType.Audio);

    // Use the desired devices to create a live device source.
    // If audio-only or video-only is desired, pass null for the other value.
    LiveDeviceSource deviceSource = job.AddDeviceSource(videoDevs.Count > 0 ? videoDevs[0] : null, audioDevs.Count > 0 ? audioDevs[0] : null);

    // Create a live file source and set its end-of-file behavior
    LiveFileSource fileSource = job.AddFileSource(@"c:\video.avi");
    fileSource.JumpTo = deviceSource;

    // Set the source to be used when streaming starts.
    // This can be changed to other sources during the encode.
    job.ActivateSource(fileSource);

    // Set the output format for the encode. In this case, IIS Smooth Streaming Main profile H.264 with HE-AAC audio
    job.OutputFormat = new MP4OutputFormat()
    {
        VideoProfile = new MainH264VideoProfile() { SmoothStreaming = true },
        AudioProfile = new AacAudioProfile() { Level = AacLevel.AacHELevel2 }
    };

    // Set the publish formats on the job
    PushBroadcastPublishFormat push = new PushBroadcastPublishFormat()
    {
        PublishingPoint = new Uri("http://publishpoint.com/test.isml")
    };
    PullBroadcastPublishFormat pull = new PullBroadcastPublishFormat() { BroadcastPort = 9090, MaximumNumberOfConnections = 5 };
    FileArchivePublishFormat archive = new FileArchivePublishFormat() { OutputFileName = @"c:\output.ismv" };
    job.PublishFormats.Add(push);
    job.PublishFormats.Add(pull);
    job.PublishFormats.Add(archive);

    // Start encoding, send a script command mid-broadcast, then stop on a key press
    job.StartEncoding();
    job.SendScriptCommand(new Microsoft.Expression.Encoder.Live.ScriptCommand("caption", "Streaming now!"));
    Console.ReadKey();
    job.StopEncoding();
}

Relevant SDK samples

The following samples demonstrate how some of the SDK objects can be used to work with Live Broadcasting projects.

Working with Screen Capture

Use Screen Capture to record all your on-screen actions. You can capture anything from a still image of a dialog box, to a complete motion tutorial depicting all the actions that you can perform in a particular application. You can also simultaneously capture both the video feed from your webcam and the audio from a microphone.


The following object is available for Screen Capture.

  • ScreenCaptureJob   This is the main object for this type of encoding. The output directory, capture size, duration, and any additional audio or video devices must be set through the screen capture job.
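Because ScreenCaptureJob owns all the capture settings, the shortest possible capture needs only an output path and explicit start and stop calls. A minimal sketch, assuming a hypothetical output directory:

```csharp
using System.Threading;
using Microsoft.Expression.Encoder.ScreenCapture;

// Minimal screen capture: record the full screen for five seconds.
// The output directory below is an assumption for illustration.
using (ScreenCaptureJob job = new ScreenCaptureJob())
{
    job.OutputPath = @"c:\captures";  // file names are generated with date and time stamps
    job.Start();                      // capture rectangle defaults to full screen
    Thread.Sleep(5000);               // record for five seconds
    job.Stop();                       // finalize the capture file
}
```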

Sample Code

This code sets up a basic screen capture and simultaneously captures from a video device. It also sets the audio input and the output directory.

using (ScreenCaptureJob job = new ScreenCaptureJob())
{
    // Set the output directory for the job. File names are generated with date and time stamps
    job.OutputPath = @"c:\outputDirectory";

    // Set the capture rectangle. This defaults to full screen
    job.CaptureRectangle = new Rectangle(100, 200, 250, 150);

    // Set the duration of the capture. Alternatively, call job.Stop() to end the capture manually
    job.Duration = TimeSpan.FromMinutes(10);

    // Get all video devices and select a Microsoft webcam
    Collection<EncoderDevice> videoDevices = EncoderDevices.FindDevices(EncoderDeviceType.Video);
    foreach (EncoderDevice device in videoDevices)
    {
        if (device.Name.Contains("Microsoft"))
        {
            job.VideoDeviceSource = device;
        }
    }

    // Iterate through all audio devices and select the internal audio
    foreach (EncoderDevice device in job.AudioDeviceSources)
    {
        if (device.Name.Contains("Speakers"))
        {
            job.AddAudioDeviceSource(device);
        }
    }

    // Start the capture
    job.Start();
}

Relevant SDK samples

The following sample demonstrates how some of the SDK objects can be used to work with Screen Capture.

See also


Getting started
What's new in version 4
What's new in version 4 SP1

   © 2011 Microsoft Corporation. All rights reserved.