Getting started

With Microsoft Expression Encoder, you can work with audio and video files by using the Expression Encoder object model (OM), which is built on the Microsoft .NET Framework. To work with the Expression Encoder OM, you must have Expression Encoder installed, and we recommend that you use Microsoft Visual Studio 2010 for coding.

Note

You can use an integrated development environment (IDE) or text editor with a compiler to create applications that use the Expression Encoder OM.

You can access most Expression Encoder features through the OM. The OM is designed to make the functionality of Expression Encoder available without requiring much code. The OM supports three job types for encoding your media: Transcoding, Live Broadcasting, and Screen Capture.

To create a project in Visual Studio

  1. In Visual Studio, click File, point to New, and then click Project.

  2. For this example, in the New Project dialog box, under Visual C# in the Project Types list, click Windows. Under Templates, click Console Application.

  3. In the Name text box, type a name for your project. For this example, use MyEncoderApplication.

  4. Click OK.

Before you can use the Expression Encoder OM in Visual Studio, you must add references to the Expression Encoder assemblies.

To add the Expression Encoder assemblies in Visual Studio

  1. In Visual Studio, click Project, and then click Add Reference.

  2. In the Add Reference dialog box, click the .NET tab at the top.

  3. Press and hold the CTRL key, and then click Microsoft.Expression.Encoder, Microsoft.Expression.Encoder.Api2, Microsoft.Expression.Encoder.Types, and Microsoft.Expression.Encoder.Utilities.

  4. Click OK.

Now that you have added the references to the Expression Encoder assemblies, you are ready to start coding. The following C# code sample uses the Expression Encoder OM to create a job, add a video file to the job, and then encode the video.

Running the Simple example

This example creates a job, imports a media item, encodes that item with default presets, and saves the job to a local folder. The application displays its progress on the screen as it encodes.

If you follow the comments in the code, you can see the outline of the steps for encoding a video by using C# and the Expression Encoder OM. The following code has six steps:

  1. Identify the media sources that you want to process.

  2. Create a job to process the media sources, and then add the media sources.

  3. Identify the location for the output.

  4. Optionally, add a progress callback function to view the encoding progress.

  5. Encode the job.

  6. Clean up the job.

using System;
using Microsoft.Expression.Encoder;

namespace MyEncoderApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            // Creates a media item for the video to be encoded
            MediaItem mediaItem = new MediaItem(@"C:\videoInput\video.wmv");

            // Creates a job and adds the media item to it
            Job job = new Job();
            job.MediaItems.Add(mediaItem);

            // Sets the output directory
            job.OutputDirectory = @"C:\videoOutput";

            // Sets up the progress callback function
            job.EncodeProgress += new EventHandler<EncodeProgressEventArgs>(OnProgress);

            // Encodes
            Console.WriteLine("Encoding…");
            job.Encode();
            Console.WriteLine("Finished encoding.");
            job.Dispose();
        }

        static void OnProgress(object sender, EncodeProgressEventArgs e)
        {
            Console.Write("\b\b\b\b\b\b\b");
            Console.Write("{0:F2}%", e.Progress);
        }
    }
}

Explaining the Simple example

In Visual Studio, use the using statement to declare that the Expression Encoder namespace is being used. At the top of the file that you just created, locate the other using statements, and then type using Microsoft.Expression.Encoder;. The code at the top of the file should now resemble the following.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Expression.Encoder;

Next, create an instance of the MediaItem class. You pass the file name to the MediaItem constructor, and you can then use the resulting object to locate a video or audio file, extract relevant information about the file, and encode it.

Locate the following code.

namespace MyEncoderApplication
{
    class Program
    {
        static void Main(string[] args)
        {

After the last opening curly brace, add the following declaration, replacing the placeholder path with the path of the video that you want to encode.

MediaItem mediaItem = new MediaItem(@"c:\[file path]\[file name]");

Note

Now that you have declared the MediaItem class, you can access the properties and methods associated with that item. Type mediaItem followed by a period (.) to access the IntelliSense list of properties and methods associated with the file, such as FileDuration, OriginalAspectRatio, and OriginalVideoSize.
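For example, the following minimal sketch writes a few of these properties to the console after the declaration (the exact output formatting is illustrative only).

Console.WriteLine("Duration: {0}", mediaItem.FileDuration);
Console.WriteLine("Original size: {0}", mediaItem.OriginalVideoSize);
Console.WriteLine("Aspect ratio: {0}", mediaItem.OriginalAspectRatio);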

Now that you have declared the MediaItem class, your next steps are to create a job and then to add the media file to the job.

After the declaration, add the following code.

Job job = new Job();
job.MediaItems.Add(mediaItem);

If you want to encode multiple items, declare multiple MediaItem instances and then repeat job.MediaItems.Add(mediaItem); until you have added all the files that you want to encode. Each MediaItem variable must have a unique name, such as mediaItem1 or mediaItem2.

Before you begin encoding the files, set the directory in which you want to save the encoded files by setting the OutputDirectory property. In your code, after the media items that you added to your job, add the following code.

job.OutputDirectory = @"C:\[file path]";

By default, Expression Encoder creates subfolders in the output directory and stores the encoded files in them. You can stop Expression Encoder from creating subfolders (and instead save the files directly in the output directory) by changing the CreateSubfolder property. To change the CreateSubfolder property, type the following:

job.CreateSubfolder = false;

For this example, use the default setting, "true."

If you want to show the progress of your encoding job, you can add a progress event handler function. Define the function by adding the following code after the closing curly brace of the Main method.

static void OnProgress(object sender, EncodeProgressEventArgs e) 
{ 
    Console.Write("\b\b\b\b\b\b\b"); 
    Console.Write("{0:F2}%", e.Progress); 
}

Scroll up several lines in the code and subscribe this handler to the job's EncodeProgress event after the job is created. Add the following line after you set the output directory and before the call to encode.

job.EncodeProgress += new EventHandler<EncodeProgressEventArgs>(OnProgress);

Now you're ready to encode the file. Add the following code after the OutputDirectory declaration (and the CreateSubfolder declaration, if you chose to include it). After the file is encoded, clean up the job object to release resources.

job.Encode();
job.Dispose();

Running the Live Broadcasting example

This example sets up a LiveJob, adds a media source to it, and streams that source through a broadcast port until the user stops the broadcast.

If you follow the comments in the code, you can see the outline of the steps for broadcasting a video by using C# and the Expression Encoder OM. The following code has seven steps:

  1. Create a job to process the media sources.

  2. Create and add a source for encoding.

  3. Set the playback mode.

  4. Activate the source.

  5. Set the publishing format.

  6. Encode.

  7. Stop encoding on user prompt.

using System;
using Microsoft.Expression.Encoder;
using Microsoft.Expression.Encoder.Live;

namespace Live
{
    class Program
    {
        static void Main(string[] args)
        {
            // Creates a new LiveJob. LiveJobs are IDisposable objects. With the using statement, 
            // the clean-up is handled automatically.  
            using (LiveJob job = new LiveJob())
            {
                // Creates file source for encoding
                LiveFileSource fileSource = job.AddFileSource(@"C:\videoInput\video.wmv");

                // Sets playback to loop on reaching the end of the file
                fileSource.PlaybackMode = FileSourcePlaybackMode.Loop;

                // Sets this source as the current active one
                job.ActivateSource(fileSource);

                // Creates the publishing format for the job
                PullBroadcastPublishFormat format = new PullBroadcastPublishFormat();
                format.BroadcastPort = 8080;

                // Adds the publishing format to the job
                job.PublishFormats.Add(format);

                // Starts encoding
                job.StartEncoding();
                Console.Write("Press 'x' to stop streaming…");
                while (Console.ReadKey(true).Key != ConsoleKey.X) 
                    ;

                Console.WriteLine("Streaming stopped.");
                job.StopEncoding();
            }
        }
    }
}

Explaining the Live Broadcasting example

Just as in the previous example, you have to add using statements to declare that the Expression Encoder and Live namespaces are being used. At the top of the file that you just created, locate the other using statements, and then add using Microsoft.Expression.Encoder and using Microsoft.Expression.Encoder.Live. The code at the top of the file should now resemble the following.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Expression.Encoder;
using Microsoft.Expression.Encoder.Live;

Next, you have to create a new LiveJob instance. In this case, you are enclosing this in the using statement, which will handle cleaning up the job afterward. You could instead have declared a job and disposed of it as in the first example. In a LiveJob, you must create the job before you can create the source. Unlike with Transcoding jobs, all sources must be registered through the job, in part because the job determines the publishing format for all items in that job.

Add these lines inside the curly braces under the Main(string[] args) method.

using(LiveJob job = new LiveJob())
{
    LiveFileSource fileSource = job.AddFileSource(@"C:\videoInput\video.wmv");

You have now created a job and added a source to it. The LiveFileSource variable fileSource gives you access to information about that media item and lets you set how the source behaves. Just as with a Transcoding job, each LiveFileSource variable should have a unique name.

Note

In addition to files, you can use devices, such as microphones and cameras, that are attached to the computer. This is unique to the Live experience. For more information, see the LiveSourceSample in the SDK folder, along with its documentation.
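The following is a rough sketch only; it assumes the device-enumeration types in the Microsoft.Expression.Encoder.Devices namespace, so verify the exact names in IntelliSense before using it. It replaces the file source with the first available camera and microphone.

// Requires an additional using directive:
// using Microsoft.Expression.Encoder.Devices;

// Picks the first video and audio capture devices found on the computer
EncoderDevice videoDevice = EncoderDevices.FindDevices(EncoderDeviceType.Video)[0];
EncoderDevice audioDevice = EncoderDevices.FindDevices(EncoderDeviceType.Audio)[0];

// Adds a live device source to the job and makes it the active source
LiveDeviceSource deviceSource = job.AddDeviceSource(videoDevice, audioDevice);
job.ActivateSource(deviceSource);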

Now that you have added a source, you can set what action to take after the media file completes playback.

fileSource.PlaybackMode = FileSourcePlaybackMode.Loop;

The default action that a source takes when playback of the file finishes is to pause on the last frame. The other options are to have the source loop continuously or jump to another file or device. In this example, you set the source to loop until you end your broadcast. However, if you want to display additional file-based or device-based sources during your broadcast, you could also set a source to begin playing when another file finishes playing, as shown in the following example.

fileSource.PlaybackMode = FileSourcePlaybackMode.JumpTo;
fileSource.JumpTo = otherFileSource;

In the next step, you set the active source, which is the source that the job encodes first.

job.ActivateSource(fileSource);

The last thing to do before you start encoding is to determine what type of output you want. There are three different types of publishing formats available in Live: Pull, FileArchive, and Push.

Broadcast (Pull)

PullBroadcastPublishFormat format = new PullBroadcastPublishFormat();
format.BroadcastPort = 8080;

job.PublishFormats.Add(format);

This is the publishing type selected for the sample. Set the port that you are broadcasting from and the maximum number of other computers that can connect. The default maximum number of connections is ten.
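If you also want to lower the connection limit in code, a sketch might look like the following; the maximum-connections property name is an assumption, so confirm it in IntelliSense before relying on it.

PullBroadcastPublishFormat format = new PullBroadcastPublishFormat();
format.BroadcastPort = 8080;
format.MaximumNumberOfConnections = 5;   // assumed property name; verify against your SDK version

job.PublishFormats.Add(format);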

Archive (FileArchive)

FileArchivePublishFormat format = new FileArchivePublishFormat();
format.OutputFileName = "output.wmv";

job.PublishFormats.Add(format);

Archiving saves the encoded media to a physical disk on your computer or network. Remember to include the output extension, and make sure that it matches the type of encoding that you are performing. If you are encoding using a VC-1 codec, use the file name extension .wmv. If you are encoding using MP4, use the file name extension .mp4. If you are using Smooth Streaming, use the file name extension .ismv.

Publish (Push)

PushBroadcastPublishFormat format = new PushBroadcastPublishFormat();
format.PublishingPoint = new Uri("http://publishPoint.isml");

job.PublishFormats.Add(format);

Publishing requires having a server set up with a publishing point already established. You must type the address provided by the server as the publishing point. If a user name and password are required, the format supports those also.

For more information about setting up publishing points, see the Expression Encoder User Guide.

In each of these cases, you add the publishing format to the job after setting the format's required properties. In this manner, you can add multiple publishing formats to a job. Note that using multiple formats can require more resources.
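For example, a job that both broadcasts on a port and archives to a file (reusing only the formats shown above) might be set up as follows.

// Broadcast the stream on port 8080
PullBroadcastPublishFormat pullFormat = new PullBroadcastPublishFormat();
pullFormat.BroadcastPort = 8080;
job.PublishFormats.Add(pullFormat);

// Also archive the encoded output to a local file
FileArchivePublishFormat archiveFormat = new FileArchivePublishFormat();
archiveFormat.OutputFileName = "output.wmv";
job.PublishFormats.Add(archiveFormat);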

job.StartEncoding();
Console.Write("Press 'x' to stop streaming…");
while (Console.ReadKey(true).Key != ConsoleKey.X)
    ;

Console.WriteLine("Streaming stopped.");
job.StopEncoding();

Finally, you start encoding. Although there is no graphical representation of the encoding progress, you can halt encoding at any time by pressing the key indicated in the console prompt that displays during your broadcast session. Encoding occurs on a separate thread, so displaying the prompt does not interrupt the encoding process.


Running the ScreenCapture example

This example records all the actions that occur within a specified rectangular section of the screen until the user prompts Expression Encoder to stop capturing. The sample saves this recording to a local folder. The following code has five steps:

  1. Create a job to capture the action on the screen.

  2. Create and set the size and coordinates of the capturing rectangle.

  3. Set the output directory for the capture.

  4. Start the capture.

  5. Stop the capture at the user's prompt.

using System;
using System.Drawing;
using Microsoft.Expression.Encoder.ScreenCapture;

namespace MyEncoderApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            // Creates new job
            using (ScreenCaptureJob job = new ScreenCaptureJob())
            {
                // Sets the top-left coordinates of the capture rectangle
                int topLeftX = 200;
                int topLeftY = 200;
                // Sets the bottom-right coordinates of the capture rectangle,
                // relative to the top-left corner
                int bottomRightX = topLeftX + 300;
                int bottomRightY = topLeftY + 150;

                job.CaptureRectangle = Rectangle.FromLTRB(topLeftX, topLeftY, bottomRightX, bottomRightY);

                job.ShowFlashingBoundary = true;
                job.OutputPath = @"c:\output";
                job.Start();

                Console.WriteLine("Press 'x' to stop recording.");
                while (Console.ReadKey(true).Key != ConsoleKey.X) ;

                Console.WriteLine("Recording stopped.");
                job.Stop();
            }
        }
    }
}

Explaining the ScreenCapture example

As with the first two examples, you have to add a using statement to declare that the Expression Encoder ScreenCapture namespace is being used. In addition, if you want to define a custom capture rectangle, you have to use the Drawing namespace. At the top of the file that you just created, locate the other using statements, and then add using Microsoft.Expression.Encoder.ScreenCapture and using System.Drawing, as shown in the following example.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using Microsoft.Expression.Encoder.ScreenCapture;

Next, you create the job. Just as in the Live example, use a using statement.

static void Main(string[] args)
{
    using(ScreenCaptureJob job = new ScreenCaptureJob())
    {

Optionally, you can create a set of coordinates to capture a certain area of the screen. The default setting is to capture the full screen. In this case, you set the x-coordinate and y-coordinate of the upper-left corner and the coordinates of the lower-right corner to create the rectangular capture range. This example expresses the second set of coordinates relative to the starting position so that if you want to move the rectangle without changing its size, only one set of coordinates has to be changed.

int topLeftX = 200;
int topLeftY = 200;
int bottomRightX = topLeftX + 300;
int bottomRightY = topLeftY + 150;

job.CaptureRectangle = Rectangle.FromLTRB(topLeftX, topLeftY, bottomRightX, bottomRightY);

At this point, you can choose to display the boundary for the capture area. The only remaining step before capturing is to set the output path for the capture to be stored.

job.ShowFlashingBoundary = true;
job.OutputPath = @"c:\output";

   © 2011 Microsoft Corporation. All rights reserved.