How to: Enable a project for the Bing Speech Recognition Control
Creating a speech-enabled Windows Store application requires some additional steps in preparing the project.
Prerequisites for Speech Recognition in Windows Store applications
Before creating speech-enabled applications, you must install the speech control from Visual Studio Gallery or from the Visual Studio Extension Manager, as described in How to: Register and install the Bing Speech Recognition Control.
Configuring your project for Speech Recognition
Configuring a project for the Bing Speech Recognition Control requires some edits to the Package.AppxManifest file, as well as adding references and using statements.
To configure your project for Speech
From the main toolbar, set the Solution Platforms dropdown list to a specific platform: either x86, x64, or ARM.
The default value of Any CPU is not supported.
From Solution Explorer, right-click the References folder and select Add Reference…
In the left pane of the Reference Manager, expand the Windows node and select Extensions. Select Bing.Speech from the list, and then click OK.
Adding the reference to Bing.Speech also adds a reference to the Microsoft Visual C++ Runtime Package assembly, which is required for Bing.Speech.
If using XAML, open MainPage.xaml.cs and add the following using statement.
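The using statement itself does not appear on this page. Based on the Bing.Speech assembly added in the previous step, it is presumably the following; verify the namespace against your installed version of the control.

```csharp
using Bing.Speech;
```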
From Solution Explorer, right-click the Package.appxmanifest file and select View Code.
At the end of the file, before the </Package> tag, add a <Capabilities> section as follows. If there is already a <Capabilities> section, add these capabilities to it.
<Capabilities>
  <Capability Name="internetClient" />
  <DeviceCapability Name="microphone" />
</Capabilities>
Immediately after the <Capabilities> section, add the following <Extensions> section. If there is already an <Extensions> section, add these <Extension> elements to it.
<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>Microsoft.Speech.VoiceService.MSSRAudio.dll</Path>
      <ActivatableClass ActivatableClassId="Microsoft.Speech.VoiceService.MSSRAudio.Encoder" ThreadingModel="both" />
    </InProcessServer>
  </Extension>
  <Extension Category="windows.activatableClass.proxyStub">
    <ProxyStub ClassId="5807FC3A-A0AB-48B4-BBA1-BA00BE56C3BD">
      <Path>Microsoft.Speech.VoiceService.MSSRAudio.dll</Path>
      <Interface Name="IEncodingSettings" InterfaceId="C97C75EE-A76A-480E-9817-D57D3655231E" />
    </ProxyStub>
  </Extension>
  <Extension Category="windows.activatableClass.proxyStub">
    <ProxyStub ClassId="F1D258E4-9D97-4BA4-AEEA-50A8B74049DF">
      <Path>Microsoft.Speech.VoiceService.Audio.dll</Path>
      <Interface Name="ISpeechVolumeEvent" InterfaceId="946379E8-A397-46B6-B9C4-FBB253EFF6AE" />
      <Interface Name="ISpeechStatusEvent" InterfaceId="FB0767C6-7FAA-4E5E-AC95-A3C0C4D72720" />
    </ProxyStub>
  </Extension>
</Extensions>
Save and close the file.
Now your application is ready for speech. The next step is to decide whether to use the SpeechRecognizerUx control or create a purely custom UI. The SpeechRecognizerUx control provides UI that shows the user where they are in the speech recognition process, along with buttons to interrupt the process if needed, and a Tips area where you can pass suggestions to the user. Creating a custom UI takes more time, but gives you more control over element placement and appearance. In either case, you must provide your own UI to start speech recognition and your own result handling.
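Whichever UI you choose, starting recognition from code looks roughly like the sketch below. The type and member names (SpeechRecognizer, SpeechAuthorizationParameters, RecognizeSpeechToTextAsync) and the ResultTextBlock element are assumptions drawn from the Bing.Speech API of this era; verify them against the assembly installed with the control.

```csharp
using Bing.Speech;

// Sketch only: a button click handler that runs one recognition pass.
private async void StartRecognition_Click(object sender, RoutedEventArgs e)
{
    // Placeholder credentials from your Windows Azure Marketplace registration.
    var credentials = new SpeechAuthorizationParameters
    {
        ClientId = "YOUR CLIENT ID",
        ClientSecret = "YOUR CLIENT SECRET"
    };

    var recognizer = new SpeechRecognizer("en-US", credentials);

    // Listens on the microphone and returns the recognized text
    // once the user stops speaking.
    var result = await recognizer.RecognizeSpeechToTextAsync();

    // ResultTextBlock is a hypothetical TextBlock in your page's XAML.
    ResultTextBlock.Text = result.Text;
}
```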
To enable the SpeechRecognizerUx control
From Solution Explorer, open MainPage.xaml or default.html.
For XAML, in the top level Page element, after the other xmlns entries, add the following declaration.
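The declaration itself is not shown on this page. Based on the Bing.Speech assembly referenced earlier, it is presumably the XAML namespace mapping below; treat the prefix and namespace as assumptions and verify them against your installed control.

```xml
xmlns:sp="using:Bing.Speech.Xaml"
```

With that mapping in place, the control can be added to the page body as, for example, `<sp:SpeechRecognizerUx x:Name="SpeechControl" />`.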
For HTML, in Solution Explorer, under References, expand the Bing.Speech reference node and its child nodes, and then drag the voiceuicontrol.css and voiceuicontrol.js references into the <head> element of your HTML page.
<link href="Bing.Speech/css/voiceuicontrol.css" rel="stylesheet" />
<script src="Bing.Speech/js/voiceuicontrol.js"></script>
For more information, see