
Integrate with a client application using Speech SDK

In this article, you learn how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application. In order to establish a connection to the Custom Commands application, you need:

  • Publish a Custom Commands application and get an application identifier (App ID)
  • Create a Universal Windows Platform (UWP) client app using the Speech SDK to allow you to talk to your Custom Commands application

Prerequisites

A Custom Commands application is required to complete this article. If you haven't created a Custom Commands application, you can do so by following the quickstarts:

You'll also need:

Step 1: Publish Custom Commands application

  1. Open your previously created Custom Commands application

  2. Go to Settings, and select LUIS resource

  3. If a Prediction resource isn't assigned, select a query prediction key or create a new one

    A query prediction key is always required before publishing an application. For more information about LUIS resources, see Create LUIS Resource

  4. Go back to editing Commands, and select Publish

    Publish application

  5. Copy the App ID from the publish notification for later use

  6. Copy the Speech resource key for later use

Step 2: Create a Visual Studio project

To create a Visual Studio project for Universal Windows Platform (UWP) development, you need to set up the Visual Studio development options, create the project, select the target architecture, set up audio capture, and install the Speech SDK.

Set up Visual Studio development options

To start, make sure you're set up correctly in Visual Studio for UWP development:

  1. Open Visual Studio 2019 to display the Start window.

    Screenshot that shows the Start window, with the Continue without code action highlighted.

  2. Select Continue without code to go to the Visual Studio IDE.

  3. From the Visual Studio menu bar, select Tools > Get Tools and Features to open Visual Studio Installer and view the Modifying dialog box.

    Screenshot that shows the Workloads tab of the Modifying dialog box, with Universal Windows Platform development highlighted.

  4. On the Workloads tab, under Windows, find the Universal Windows Platform development workload. If the check box next to that workload is already selected, close the Modifying dialog box, and go to step 6.

  5. Select the Universal Windows Platform development check box, select Modify, and then in the Before we get started dialog box, select Continue to install the UWP development workload. Installation of the new feature may take a while.

  6. Close Visual Studio Installer.

Create the project and select the target architecture

Next, create your project:

  1. In the Visual Studio menu bar, choose File > New > Project to display the Create a new project window.

    Screenshot that shows the Create a new project window, with Blank App (Universal Windows) selected and the Next button highlighted.

  2. Find and select Blank App (Universal Windows). Make sure that you select the C# version of this project type (as opposed to Visual Basic).

  3. Select Next to display the Configure your new project screen.

    Screenshot that shows the Configure your new project screen, with the Project name and Location fields and the Create button highlighted.

  4. In Project name, enter helloworld.

  5. In Location, navigate to and select or create the folder to save your project in.

  6. Select Create to go to the New Universal Windows Platform Project window.

    Screenshot that shows the New Universal Windows Platform Project dialog box.

  7. In Minimum version (the second drop-down box), choose Windows 10 Fall Creators Update (10.0; Build 16299), which is the minimum requirement for the Speech SDK.

  8. In Target version (the first drop-down box), choose a value identical to or later than the value in Minimum version.

  9. Select OK. You're returned to the Visual Studio IDE, with the new project created and visible in the Solution Explorer pane.

    The helloworld project in Visual Studio

Now select your target platform architecture. In the Visual Studio toolbar, find the Solution Platforms drop-down box. (If you don't see it, choose View > Toolbars > Standard to display the toolbar that contains Solution Platforms.) If you're running 64-bit Windows, choose x64 in the drop-down box. 64-bit Windows can also run 32-bit applications, so you can choose x86 if you prefer.

Note

The Speech SDK supports all Intel-compatible processors, but only x64 versions of ARM processors.

Set up audio capture

Allow the project to capture audio input:

  1. In Solution Explorer, double-click Package.appxmanifest to open the package application manifest.

  2. Select the Capabilities tab.

    Capabilities tab of the package application manifest in Visual Studio

  3. Select the box for the Microphone capability.

  4. From the menu bar, choose File > Save Package.appxmanifest to save your changes.
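If you prefer to edit the manifest XML directly instead of using the Capabilities tab, checking the Microphone box corresponds to a device capability entry in Package.appxmanifest. A minimal sketch of the relevant section (your manifest will contain other elements and capabilities as well):

```xml
<!-- Package.appxmanifest (excerpt) -->
<Capabilities>
  <Capability Name="internetClient" />
  <!-- Added by checking the Microphone box; required for audio capture -->
  <DeviceCapability Name="microphone" />
</Capabilities>
```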

Install the Speech SDK

Finally, install the Speech SDK NuGet package, and reference the Speech SDK in your project:

  1. In Solution Explorer, right-click your solution, and choose Manage NuGet Packages for Solution to go to the NuGet - Solution window.

  2. Select Browse.

    Screenshot that shows the Manage Packages for Solution dialog box, with the Browse tab, Search box, and Package source highlighted.

  3. In Package source, choose nuget.org.

  4. In the Search box, enter Microsoft.CognitiveServices.Speech, and then choose that package after it appears in the search results.

    Screenshot that shows Microsoft.CognitiveServices.Speech selected, with the project and the Install button highlighted.

  5. In the package status pane next to the search results, select your helloworld project.

  6. Select Install.

  7. In the Preview Changes dialog box, select OK.

  8. In the License Acceptance dialog box, view the license, and then select I Accept. The package installation begins, and when installation is complete, the Output pane displays a message similar to the following text: Successfully installed 'Microsoft.CognitiveServices.Speech 1.15.0' to helloworld.

Step 3: Add sample code

In this step, we add the XAML code that defines the user interface of the application, and add the C# code-behind implementation.

XAML code

Create the application's user interface by adding the XAML code:

  1. In Solution Explorer, open MainPage.xaml

  2. In the designer's XAML view, replace the entire contents with the following code snippet:

    <Page
        x:Class="helloworld.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:helloworld"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d"
        Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    
        <Grid>
            <StackPanel Orientation="Vertical" HorizontalAlignment="Center"
                        Margin="20,50,0,0" VerticalAlignment="Center" Width="800">
                <Button x:Name="EnableMicrophoneButton" Content="Enable Microphone"
                        Margin="0,10,10,0" Click="EnableMicrophone_ButtonClicked"
                        Height="35"/>
                <Button x:Name="ListenButton" Content="Talk"
                        Margin="0,10,10,0" Click="ListenButton_ButtonClicked"
                        Height="35"/>
                <StackPanel x:Name="StatusPanel" Orientation="Vertical"
                            RelativePanel.AlignBottomWithPanel="True"
                            RelativePanel.AlignRightWithPanel="True"
                            RelativePanel.AlignLeftWithPanel="True">
                    <TextBlock x:Name="StatusLabel" Margin="0,10,10,0"
                               TextWrapping="Wrap" Text="Status:" FontSize="20"/>
                    <Border x:Name="StatusBorder" Margin="0,0,0,0">
                        <ScrollViewer VerticalScrollMode="Auto"
                                      VerticalScrollBarVisibility="Auto" MaxHeight="200">
                            <!-- Use LiveSetting to enable screen readers to announce
                                 the status update. -->
                            <TextBlock
                                x:Name="StatusBlock" FontWeight="Bold"
                                AutomationProperties.LiveSetting="Assertive"
                                MaxWidth="{Binding ElementName=Splitter, Path=ActualWidth}"
                                Margin="10,10,10,20" TextWrapping="Wrap"  />
                        </ScrollViewer>
                    </Border>
                </StackPanel>
            </StackPanel>
            <MediaElement x:Name="mediaElement"/>
        </Grid>
    </Page>
    

The Design view is updated to show the application's user interface.

C# code-behind source

Add the code-behind source so that the application works as expected. The code-behind source includes:

  • Required using statements for the Speech and Speech.Dialog namespaces
  • A simple implementation to ensure microphone access, wired to a button handler
  • Basic UI helpers to present messages and errors in the application
  • A landing point for the initialization code path that will be populated later
  • A helper to play back text-to-speech (without streaming support)
  • An empty button handler to start listening that will be populated later

Add the code-behind source as follows:

  1. In Solution Explorer, open the code-behind source file MainPage.xaml.cs (grouped under MainPage.xaml)

  2. Replace the file's contents with the following code:

    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Audio;
    using Microsoft.CognitiveServices.Speech.Dialog;
    using System;
    using System.IO;
    using System.Text;
    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Controls;
    using Windows.UI.Xaml.Media;
    
    namespace helloworld
    {
        public sealed partial class MainPage : Page
        {
            private DialogServiceConnector connector;
    
            private enum NotifyType
            {
                StatusMessage,
                ErrorMessage
            };
    
            public MainPage()
            {
                this.InitializeComponent();
            }
    
            private async void EnableMicrophone_ButtonClicked(
                object sender, RoutedEventArgs e)
            {
                bool isMicAvailable = true;
                try
                {
                    var mediaCapture = new Windows.Media.Capture.MediaCapture();
                    var settings =
                        new Windows.Media.Capture.MediaCaptureInitializationSettings();
                    settings.StreamingCaptureMode =
                        Windows.Media.Capture.StreamingCaptureMode.Audio;
                    await mediaCapture.InitializeAsync(settings);
                }
                catch (Exception)
                {
                    isMicAvailable = false;
                }
                if (!isMicAvailable)
                {
                    await Windows.System.Launcher.LaunchUriAsync(
                        new Uri("ms-settings:privacy-microphone"));
                }
                else
                {
                    NotifyUser("Microphone was enabled", NotifyType.StatusMessage);
                }
            }
    
            private void NotifyUser(
                string strMessage, NotifyType type = NotifyType.StatusMessage)
            {
                // If called from the UI thread, then update immediately.
                // Otherwise, schedule a task on the UI thread to perform the update.
                if (Dispatcher.HasThreadAccess)
                {
                    UpdateStatus(strMessage, type);
                }
                else
                {
                    var task = Dispatcher.RunAsync(
                        Windows.UI.Core.CoreDispatcherPriority.Normal,
                        () => UpdateStatus(strMessage, type));
                }
            }
    
            private void UpdateStatus(string strMessage, NotifyType type)
            {
                switch (type)
                {
                    case NotifyType.StatusMessage:
                        StatusBorder.Background = new SolidColorBrush(
                            Windows.UI.Colors.Green);
                        break;
                    case NotifyType.ErrorMessage:
                        StatusBorder.Background = new SolidColorBrush(
                            Windows.UI.Colors.Red);
                        break;
                }
                StatusBlock.Text += string.IsNullOrEmpty(StatusBlock.Text)
                    ? strMessage : "\n" + strMessage;
    
                if (!string.IsNullOrEmpty(StatusBlock.Text))
                {
                    StatusBorder.Visibility = Visibility.Visible;
                    StatusPanel.Visibility = Visibility.Visible;
                }
                else
                {
                    StatusBorder.Visibility = Visibility.Collapsed;
                    StatusPanel.Visibility = Visibility.Collapsed;
                }
                // Raise an event if necessary to enable a screen reader
                // to announce the status update.
                var peer = Windows.UI.Xaml.Automation.Peers.FrameworkElementAutomationPeer.FromElement(StatusBlock);
                if (peer != null)
                {
                    peer.RaiseAutomationEvent(
                        Windows.UI.Xaml.Automation.Peers.AutomationEvents.LiveRegionChanged);
                }
            }
    
            // Waits for and accumulates all audio associated with a given
            // PullAudioOutputStream and then plays it to the MediaElement. Long spoken
            // audio will create extra latency and a streaming playback solution
            // (that plays audio while it continues to be received) should be used --
            // see the samples for examples of this.
            private void SynchronouslyPlayActivityAudio(
                PullAudioOutputStream activityAudio)
            {
                var playbackStreamWithHeader = new MemoryStream();
                playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("RIFF"), 0, 4); // ChunkID
                playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // ChunkSize: max
                playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("WAVE"), 0, 4); // Format
                playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("fmt "), 0, 4); // Subchunk1ID
                playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 4); // Subchunk1Size: PCM
                playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // AudioFormat: PCM
                playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // NumChannels: mono
                playbackStreamWithHeader.Write(BitConverter.GetBytes(16000), 0, 4); // SampleRate: 16kHz
                playbackStreamWithHeader.Write(BitConverter.GetBytes(32000), 0, 4); // ByteRate
                playbackStreamWithHeader.Write(BitConverter.GetBytes(2), 0, 2); // BlockAlign
                playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 2); // BitsPerSample: 16-bit
                playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("data"), 0, 4); // Subchunk2ID
                playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // Subchunk2Size
    
                byte[] pullBuffer = new byte[2056];
    
                uint lastRead = 0;
                do
                {
                    lastRead = activityAudio.Read(pullBuffer);
                    playbackStreamWithHeader.Write(pullBuffer, 0, (int)lastRead);
                }
                while (lastRead == pullBuffer.Length);
    
                var task = Dispatcher.RunAsync(
                    Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    mediaElement.SetSource(
                        playbackStreamWithHeader.AsRandomAccessStream(), "audio/wav");
                    mediaElement.Play();
                });
            }
    
            private void InitializeDialogServiceConnector()
            {
                // New code will go here
            }
    
            private async void ListenButton_ButtonClicked(
                object sender, RoutedEventArgs e)
            {
                // New code will go here
            }
        }
    }
    

    Note

    If you see the error "The type 'Object' is defined in an assembly that is not referenced":

    1. Right-click your solution.
    2. Choose Manage NuGet Packages for Solution, and select Updates.
    3. If you see Microsoft.NETCore.UniversalWindowsPlatform in the update list, update Microsoft.NETCore.UniversalWindowsPlatform to the newest version.
  3. Add the following code to the method body of InitializeDialogServiceConnector

    // This code creates the `DialogServiceConnector` with your subscription information.
    // create a DialogServiceConfig by providing a Custom Commands application id and Cognitive Services subscription key
    // the RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview
    const string speechCommandsApplicationId = "YourApplicationId"; // Your application id
    const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your subscription key
    const string region = "YourServiceRegion"; // The subscription service region. Note: only 'westus2' is currently supported
    
    var speechCommandsConfig = CustomCommandsConfig.FromSubscription(speechCommandsApplicationId, speechSubscriptionKey, region);
    speechCommandsConfig.SetProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-us");
    connector = new DialogServiceConnector(speechCommandsConfig);
    
  4. Replace the strings YourApplicationId, YourSpeechSubscriptionKey, and YourServiceRegion with your own values for your app, Speech subscription, and region

  5. Append the following code snippet to the end of the method body of InitializeDialogServiceConnector

    //
    // This code sets up handlers for events relied on by `DialogServiceConnector` to communicate its activities,
    // speech recognition results, and other information.
    //
    // ActivityReceived is the main way your client will receive messages, audio, and events
    connector.ActivityReceived += (sender, activityReceivedEventArgs) =>
    {
        NotifyUser(
            $"Activity received, hasAudio={activityReceivedEventArgs.HasAudio} activity={activityReceivedEventArgs.Activity}");
    
        if (activityReceivedEventArgs.HasAudio)
        {
            SynchronouslyPlayActivityAudio(activityReceivedEventArgs.Audio);
        }
    };
    
    // Canceled will be signaled when a turn is aborted or experiences an error condition
    connector.Canceled += (sender, canceledEventArgs) =>
    {
        NotifyUser($"Canceled, reason={canceledEventArgs.Reason}");
        if (canceledEventArgs.Reason == CancellationReason.Error)
        {
            NotifyUser(
                $"Error: code={canceledEventArgs.ErrorCode}, details={canceledEventArgs.ErrorDetails}");
        }
    };
    
    // Recognizing (not 'Recognized') will provide the intermediate recognized text
    // while an audio stream is being processed
    connector.Recognizing += (sender, recognitionEventArgs) =>
    {
        NotifyUser($"Recognizing! in-progress text={recognitionEventArgs.Result.Text}");
    };
    
    // Recognized (not 'Recognizing') will provide the final recognized text
    // once audio capture is completed
    connector.Recognized += (sender, recognitionEventArgs) =>
    {
        NotifyUser($"Final speech-to-text result: '{recognitionEventArgs.Result.Text}'");
    };
    
    // SessionStarted will notify when audio begins flowing to the service for a turn
    connector.SessionStarted += (sender, sessionEventArgs) =>
    {
        NotifyUser($"Now Listening! Session started, id={sessionEventArgs.SessionId}");
    };
    
    // SessionStopped will notify when a turn is complete and
    // it's safe to begin listening again
    connector.SessionStopped += (sender, sessionEventArgs) =>
    {
        NotifyUser($"Listening complete. Session ended, id={sessionEventArgs.SessionId}");
    };
    
  6. Add the following code snippet to the body of the ListenButton_ButtonClicked method in the MainPage class

    // This code sets up `DialogServiceConnector` to listen, since you already established the configuration and
    // registered the event handlers.
    if (connector == null)
    {
        InitializeDialogServiceConnector();
        // Optional step to speed up first interaction: if not called,
        // connection happens automatically on first use
        var connectTask = connector.ConnectAsync();
    }
    
    try
    {
        // Start sending audio
        await connector.ListenOnceAsync();
    }
    catch (Exception ex)
    {
        NotifyUser($"Exception: {ex.ToString()}", NotifyType.ErrorMessage);
    }
    
  7. From the menu bar, choose File > Save All to save your changes
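The Activity string delivered to the ActivityReceived handler is Bot Framework activity JSON. If your Custom Commands application sends custom activities, you can inspect that JSON before deciding how to react. A minimal sketch using Windows.Data.Json, which could be placed inside the ActivityReceived handler registered earlier (the event name "TurnOnOff" is hypothetical, for illustration only; your application defines its own activity names):

```csharp
// Inside the connector.ActivityReceived handler:
// parse the raw activity JSON delivered by the service.
var activityJson =
    Windows.Data.Json.JsonObject.Parse(activityReceivedEventArgs.Activity);

// "type" and "name" follow the Bot Framework activity schema.
// "TurnOnOff" is a hypothetical custom event name for illustration.
if (activityJson.GetNamedString("type", "") == "event" &&
    activityJson.GetNamedString("name", "") == "TurnOnOff")
{
    NotifyUser($"Received custom event: {activityJson.Stringify()}");
}
```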

Try it out

  1. From the menu bar, choose Build > Build Solution to build the application. The code should compile without errors.

  2. Choose Debug > Start Debugging (or press F5) to start the application. The helloworld window appears.

    Sample UWP virtual assistant application in C# - quickstart

  3. Select Enable Microphone. If the access permission request pops up, select Yes.

    Microphone access permission request

  4. Select Talk, and speak an English phrase or sentence into your device's microphone. Your speech is transmitted to the Direct Line Speech channel and transcribed to text, which appears in the window.

Next steps