How to use XOML in speech applications

Those of you who are familiar with Windows Workflow Foundation may have already heard of XOML. In essence, it allows you to specify your activities in an XML dialect rather than in the 'designer.cs' file that is traditionally generated for you. This seems to be a general direction Microsoft is taking with its programming models, as is also evident in Avalon.

In my opinion, the main advantage of creating applications with XOML, versus the traditional "don't edit below this line" generated code, is flexibility. It does not take a genius to write a utility that generates XML, so one can imagine plenty of scenarios where this could be useful. For instance:

1) Say you have many customers, each with slightly different requirements. You could write a utility that dynamically generates a tailored application for each customer.

2) XOML makes it easier to separate the business logic from the coding logic. For instance, you could have someone unfamiliar with coding design the call flow and then pass it to a developer to implement. Workflow already has some support for this.
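As a concrete illustration of the first scenario, here is a minimal sketch of a utility that emits a per-customer .xoml skeleton. The class name, file name, and namespace `MyApp` are made up for the example; a real generator would also write out the customer-specific activities as child elements.

```csharp
using System.Xml;

class XomlGenerator
{
    //Writes a minimal XOML skeleton for one customer. The workflow class
    //name and output path are hypothetical placeholders.
    static void WriteXoml(string path, string workflowClass)
    {
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = true;

        using (XmlWriter writer = XmlWriter.Create(path, settings))
        {
            //Root element in the standard WF XOML namespace.
            writer.WriteStartElement("SequentialWorkflowActivity",
                "http://schemas.microsoft.com/winfx/2006/xaml/workflow");
            //x:Class lives in the XAML namespace.
            writer.WriteAttributeString("Class",
                "http://schemas.microsoft.com/winfx/2006/xaml", workflowClass);
            writer.WriteAttributeString("Name", "SpeechWorkflow1");
            writer.WriteEndElement();
        }
    }

    static void Main()
    {
        WriteXoml("Customer1.xoml", "MyApp.Customer1Workflow");
    }
}
```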

Unfortunately, XOML is not supported by the MSS SDK. That does not mean you cannot use it, just that it is unsupported. I have used it to test the debugger in the MSS SDK by dynamically creating XOML applications, and it does work, but there are some workarounds you will need.

Now, for those who are interested in creating XOML speech applications, I will give step-by-step instructions. Before diving in, though, there are some things to consider.

1) This is not currently supported by Microsoft. While the code should compile down to the same method calls, there is no guarantee that you will not hit blocking issues with XOML.

2) The designer does not fully support XOML. You will not be able to set prompts and grammars in the designer; you will need to do this in code.

The following are the steps to create a XOML speech app.

1) Create a new Speech workflow application. This is the same as any other speech project.

2) Delete SpeechWorkflow1.cs and the automatically generated designer file.

3) Right click on the speech workflow project and select Add… New Item…

4) Select ‘Sequential Workflow (with code separation)’. You will now have a .xoml file in your project.

5) Right click on the .xoml file and select ‘Open With’ and then ‘XML Editor’.

The code should look like the following.

<SequentialWorkflowActivity x:Class="SpeechWorkflowApplication2.SpeechWorkflow1" Name="SpeechWorkflow1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">


6) Change the SequentialWorkflowActivity to a SpeechSequentialWorkflowActivity. In order for this to work, we must also add the namespace where the SpeechSequentialWorkflowActivity can be found.

<ns0:SpeechSequentialWorkflowActivity x:Class="SpeechWorkflowApplication2.SpeechWorkflow1" Name="SpeechWorkflow1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ns0="clr-namespace:Microsoft.SpeechServer.Dialog;Assembly=Microsoft.SpeechServer, Version=2.0.3200.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">


7) When you create a new Speech Workflow Application, several fault handlers are created for you automatically. We now need to add these fault handlers back.

<ns0:SpeechSequentialWorkflowActivity x:Class="SpeechWorkflowApplication2.SpeechWorkflow1" Name="SpeechWorkflow1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ns0="clr-namespace:Microsoft.SpeechServer.Dialog;Assembly=Microsoft.SpeechServer, Version=2.0.3200.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">

  <FaultHandlersActivity x:Name="faultHandlers">

    <FaultHandlerActivity x:Name="callDisconnectedHandler" FaultType="{x:Type ns0:CallDisconnectedException}">
      <CodeActivity x:Name="callDisconnectedCode" ExecuteCode="HandleCallDisconnected" />
    </FaultHandlerActivity>

    <FaultHandlerActivity x:Name="generalFaultHandler" FaultType="{x:Type p7:Exception}" xmlns:p7="clr-namespace:System;Assembly=mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <CodeActivity x:Name="faultCode" ExecuteCode="HandleGeneralFault" />
    </FaultHandlerActivity>

  </FaultHandlersActivity>

</ns0:SpeechSequentialWorkflowActivity>

8) Now double-click the .xoml.cs file that was added to your project (expand the .xoml file node to get to it) and add the following code. This is the same code that is auto-generated when you create a new speech application; we are just adding it back.

//This method is called when any exception other than a CallDisconnectedException occurs within the workflow.
//It does some generic exception logging, and is provided for convenience during debugging;
//you should replace or augment this with your own error handling code.
//CallDisconnectedExceptions are handled separately; see HandleCallDisconnected, below.
private void HandleGeneralFault(object sender, EventArgs e)
{
    //The Fault property is read only. When an exception is thrown, the actual exception is
    //stored in this property. Check this value for error information.
    string errorMessage = this.generalFaultHandler.Fault.Message;

    if (Debugger.IsAttached)
    {
        //If the debugger is attached, break here
        //so that you can see the error that occurred.
        //(Check the errorMessage variable above.)
        Debugger.Break();
    }

    //Write the error to both the NT Event Log and the .etl file,
    //so that some record is kept even if the debugger is not attached.
    this.Host.TelephonySession.LoggingManager.LogApplicationError(
        50000, //the first parameter is an event id, chosen arbitrarily.
               //MSS uses various IDs below 50000, so you might want to
               //use IDs above this number to avoid overlap.
        "An exception occurred in the Speech workflow with Id " +
        this.WorkflowInstanceId +
        ". The exception was:\n" +
        this.generalFaultHandler.Fault.ToString());

    //Dump a detailed version of the most recent MSS logs to the .etl file
    //(see the MMC for your current MSS logging settings, including the location of this file)
    this.Host.TelephonySession.LoggingManager.DumpTargetedLogs();
}

//This method is called when a CallDisconnectedException occurs in the workflow.
//This happens when a speech activity tried to run while the call is not connected,
//and can happen for two reasons:
//(1) The user hung up on the app in the middle of the flow. This is expected and normal.
//(2) You disconnected the call locally, and then attempted to run a speech activity.
//    This is an application bug.
private void HandleCallDisconnected(object sender, EventArgs e)
{
    if (Debugger.IsAttached)
    {
        //If you just hung up on the app, ignore this breakpoint.
        Debugger.Break();
    }
}

9) Add the following using directives at the top of your .cs file and change the base class from SequentialWorkflowActivity to SpeechSequentialWorkflowActivity.

using System.Diagnostics;

using Microsoft.SpeechServer.Dialog;

using Microsoft.SpeechServer;

using Microsoft.SpeechServer.Recognition;

using Microsoft.SpeechServer.Recognition.SrgsGrammar;
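After these changes, the class declaration in the .xoml.cs file should look something like the sketch below, using the class name from the earlier XOML snippets:

```csharp
namespace SpeechWorkflowApplication2
{
    //The class name must match the x:Class attribute in the .xoml file.
    public partial class SpeechWorkflow1 : SpeechSequentialWorkflowActivity
    {
        //HandleGeneralFault and HandleCallDisconnected from step 8 go here.
    }
}
```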

10) Build the project to make sure it builds. You may have to change the workflow class name in Class1.cs to match the one in your .xoml and .xoml.cs files.

11) Now close the .xoml file if you still have it open, and double-click it to view it in the designer. You will be able to drag and drop speech activities onto the canvas. However, you will not be able to set prompts or grammars using the designer. If you try, it will seem to work, but the next time you open the .xoml file your changes will be lost. This is because the MSS SDK does not support XOML serialization, which is what translates designer changes back into XML. To set a prompt or a grammar, you will need to handle the TurnStarting event for the activity and add the prompt or grammar there dynamically. If the activity is one that does not support TurnStarting, you can set it in the Executing method for the speech sequential workflow.
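For example, a prompt can be set from a TurnStarting handler along these lines. This is only a sketch: the activity name askName and the prompt text are made up, and the TurnStartingEventArgs type and MainPrompt.SetText call are the MSS dialog API member names as I remember them.

```csharp
//Wire this up in code (for instance in the workflow constructor), since
//designer-made changes are not round-tripped into the .xoml file:
//  this.askName.TurnStarting +=
//      new EventHandler<TurnStartingEventArgs>(askName_TurnStarting);

private void askName_TurnStarting(object sender, TurnStartingEventArgs e)
{
    //Set the prompt dynamically instead of in the designer.
    this.askName.MainPrompt.SetText("What is your name?");
}
```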

This information should be enough to get you started. You may at times have to edit your .xoml file manually; for instance, if you add an activity library, you may need to add the namespace of the library by hand. The code will compile and debug like any other project.
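For example, after dropping an activity from a custom library onto the canvas, you might need a mapping like the following on the root element. Everything here is hypothetical: the Contoso.Activities library, its version, and its public key token are made-up placeholders.

```xml
<!-- Hypothetical custom activity with its namespace mapping. -->
<ns1:MyCustomActivity x:Name="myCustomActivity1"
    xmlns:ns1="clr-namespace:Contoso.Activities;Assembly=Contoso.Activities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
```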