
Get started with intent recognition

In this quickstart, you'll use the Speech SDK and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

  • Create a SpeechConfig object from your subscription key and region.
  • Create an IntentRecognizer object using the SpeechConfig object from above.
  • Using the IntentRecognizer object, start the recognition process for a single utterance.
  • Inspect the IntentRecognitionResult returned.

You can view or download all Speech SDK C# Samples on GitHub.

Prerequisites

Before you get started:

Create a LUIS app for intent recognition

To complete the intent recognition quickstart, you'll need to create a LUIS account and a project using the LUIS preview portal. This quickstart only requires a LUIS subscription. A Speech service subscription isn't required.

The first thing you'll need to do is create a LUIS account and app using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call using the Speech SDK.

Follow these instructions to create your LUIS app:

When you're done, you'll need four things:

  • Re-publish with Speech priming toggled on
  • Your LUIS Primary key
  • Your LUIS Location
  • Your LUIS App ID

Here's where you can find this information in the LUIS preview portal:

  1. From the LUIS preview portal, select your app, then select the Publish button.

  2. Select the Production slot and, if you're using en-US, toggle the Speech priming option to the On position. Then select the Publish button.

    Important

    Speech priming is highly recommended, as it will improve speech recognition accuracy.

    Publish LUIS to endpoint

  3. From the LUIS preview portal, select Manage, then select Azure Resources. On this page, you'll find your LUIS key and location (sometimes referred to as region).

    LUIS key and location

  4. After you've got your key and location, you'll need the app ID. Select Application Settings -- your app ID is available on this page.

    LUIS app ID

Open your project in Visual Studio

Next, open your project in Visual Studio.

  1. Launch Visual Studio 2019.
  2. Load your project and open Program.cs.

Start with some boilerplate code

Let's add some code that works as a skeleton for our project. Note that you've created an async method called RecognizeIntentAsync().

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

namespace helloworld
{
    class Program
    {
        public static async Task RecognizeIntentAsync()
        {
        }

        static void Main()
        {
            RecognizeIntentAsync().Wait();
            Console.WriteLine("Please press <Return> to continue.");
            Console.ReadLine();
        }
    }
}

Create a Speech configuration

Before you can initialize an IntentRecognizer object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

Important

Your starter key and authoring keys will not work. You must use the prediction key and location that you created earlier. For more information, see Create a LUIS app for intent recognition.

Insert this code in the RecognizeIntentAsync() method. Make sure you update these values:

  • Replace "YourLanguageUnderstandingSubscriptionKey" with your LUIS prediction key.
  • Replace "YourLanguageUnderstandingServiceRegion" with your LUIS location. Use the Region identifier from the region list.

Tip

If you need help finding these values, see Create a LUIS app for intent recognition.

var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

This sample uses the FromSubscription() method to build the SpeechConfig. For a full list of available methods, see SpeechConfig Class.

The Speech SDK defaults to recognizing speech in en-US. For information on choosing the source language, see Specify source language for speech to text.
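
For example, if you wanted the recognizer to transcribe German instead, you could set the language on the config before creating the recognizer. This is a minimal sketch using the standard SpeechRecognitionLanguage property; "de-DE" is just an illustrative locale:

// Illustrative only: switch the recognition language from the en-US default.
config.SpeechRecognitionLanguage = "de-DE";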

Initialize an IntentRecognizer

Now, let's create an IntentRecognizer. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the RecognizeIntentAsync() method, right below your Speech configuration.

using (var recognizer = new IntentRecognizer(config))
{
}

Add a LanguageUnderstandingModel and intents

You need to associate a LanguageUnderstandingModel with the intent recognizer, and add the intents that you want recognized. We're going to use intents from the prebuilt domain for home automation. Insert this code in the using statement from the previous section. Make sure that you replace "YourLanguageUnderstandingAppId" with your LUIS app ID.

Tip

If you need help finding this value, see Create a LUIS app for intent recognition.

var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
recognizer.AddIntent(model, "HomeAutomation.TurnOn");
recognizer.AddIntent(model, "HomeAutomation.TurnOff");

This example uses the AddIntent() function to add intents individually. If you want to add all intents from a model, call AddAllIntents(model) instead, as sketched below.
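
A minimal sketch of the all-intents variant, using the same recognizer and model objects from the snippet above:

// Registers every intent defined in the LUIS model, instead of naming each one.
recognizer.AddAllIntents(model);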

Recognize an intent

From the IntentRecognizer object, you're going to call the RecognizeOnceAsync() method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified it should stop recognizing speech.

Inside the using statement, add this code below your model.

var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

Display recognition results (or errors)

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the results to the console.

Inside the using statement, below RecognizeOnceAsync(), add this code:

if (result.Reason == ResultReason.RecognizedIntent)
{
    Console.WriteLine($"RECOGNIZED: Text={result.Text}");
    Console.WriteLine($"    Intent Id: {result.IntentId}.");
    Console.WriteLine($"    Language Understanding JSON: {result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)}.");
}
else if (result.Reason == ResultReason.RecognizedSpeech)
{
    Console.WriteLine($"RECOGNIZED: Text={result.Text}");
    Console.WriteLine($"    Intent not recognized.");
}
else if (result.Reason == ResultReason.NoMatch)
{
    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
}
else if (result.Reason == ResultReason.Canceled)
{
    var cancellation = CancellationDetails.FromResult(result);
    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");

    if (cancellation.Reason == CancellationReason.Error)
    {
        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
        Console.WriteLine($"CANCELED: Did you update the subscription info?");
    }
}

Check your code

At this point, your code should look like this:

Note

We've added some comments to this version.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

namespace helloworld
{
    class Program
    {
        public static async Task RecognizeIntentAsync()
        {
            // Creates an instance of a speech config with specified subscription key
            // and service region. Note that in contrast to other services supported by
            // the Cognitive Services Speech SDK, the Language Understanding service
            // requires a specific subscription key from https://www.luis.ai/.
            // The Language Understanding service calls the required key 'endpoint key'.
            // Once you've obtained it, replace below with your own Language Understanding subscription key
            // and service region (e.g., "westus").
            // The default language is "en-us".
            var config = SpeechConfig.FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

            // Creates an intent recognizer using microphone as audio input.
            using (var recognizer = new IntentRecognizer(config))
            {
                // Creates a Language Understanding model using the app id, and adds
                // specific intents from your home automation model.
                var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
                recognizer.AddIntent(model, "HomeAutomation.TurnOn");
                recognizer.AddIntent(model, "HomeAutomation.TurnOff");

                // Starts recognizing.
                Console.WriteLine("Say something...");

                // Starts intent recognition, and returns after a single utterance is recognized. The end of a
                // single utterance is determined by listening for silence at the end or until a maximum of 15
                // seconds of audio is processed.  The task returns the recognition text as result.
                // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
                // shot recognition like command or query.
                // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
                var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

                // Checks result.
                if (result.Reason == ResultReason.RecognizedIntent)
                {
                    Console.WriteLine($"RECOGNIZED: Text={result.Text}");
                    Console.WriteLine($"    Intent Id: {result.IntentId}.");
                    Console.WriteLine($"    Language Understanding JSON: {result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)}.");
                }
                else if (result.Reason == ResultReason.RecognizedSpeech)
                {
                    Console.WriteLine($"RECOGNIZED: Text={result.Text}");
                    Console.WriteLine($"    Intent not recognized.");
                }
                else if (result.Reason == ResultReason.NoMatch)
                {
                    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
                }
                else if (result.Reason == ResultReason.Canceled)
                {
                    var cancellation = CancellationDetails.FromResult(result);
                    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");

                    if (cancellation.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you update the subscription info?");
                    }
                }
            }
        }

        static void Main()
        {
            RecognizeIntentAsync().Wait();
            Console.WriteLine("Please press <Return> to continue.");
            Console.ReadLine();
        }
    }
}

Build and run your app

Now you're ready to build your app and test speech recognition using the Speech service.

  1. Compile the code - From the menu bar of Visual Studio, choose Build > Build Solution.
  2. Start your app - From the menu bar, choose Debug > Start Debugging, or press F5.
  3. Start recognition - You'll be prompted to speak a phrase in English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.

Next steps

In this quickstart, you'll use the Speech SDK and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

  • Create a SpeechConfig object from your subscription key and region.
  • Create an IntentRecognizer object using the SpeechConfig object from above.
  • Using the IntentRecognizer object, start the recognition process for a single utterance.
  • Inspect the IntentRecognitionResult returned.

You can view or download all Speech SDK C++ Samples on GitHub.

Prerequisites

Before you get started:

Create a LUIS app for intent recognition

To complete the intent recognition quickstart, you'll need to create a LUIS account and a project using the LUIS preview portal. This quickstart only requires a LUIS subscription. A Speech service subscription isn't required.

The first thing you'll need to do is create a LUIS account and app using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call using the Speech SDK.

Follow these instructions to create your LUIS app:

When you're done, you'll need four things:

  • Re-publish with Speech priming toggled on
  • Your LUIS Primary key
  • Your LUIS Location
  • Your LUIS App ID

Here's where you can find this information in the LUIS preview portal:

  1. From the LUIS preview portal, select your app, then select the Publish button.

  2. Select the Production slot and, if you're using en-US, toggle the Speech priming option to the On position. Then select the Publish button.

    Important

    Speech priming is highly recommended, as it will improve speech recognition accuracy.

    Publish LUIS to endpoint

  3. From the LUIS preview portal, select Manage, then select Azure Resources. On this page, you'll find your LUIS key and location (sometimes referred to as region).

    LUIS key and location

  4. After you've got your key and location, you'll need the app ID. Select Application Settings -- your app ID is available on this page.

    LUIS app ID

Open your project in Visual Studio

Next, open your project in Visual Studio.

  1. Launch Visual Studio 2019.
  2. Load your project and open helloworld.cpp.

Start with some boilerplate code

Let's add some code that works as a skeleton for our project. Note that you've created a method called recognizeIntent().

#include "stdafx.h"
#include <iostream>
#include <speechapi_cxx.h>

using namespace std;
using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Intent;

void recognizeIntent()
{
}

int wmain()
{
    recognizeIntent();
    cout << "Please press a key to continue.\n";
    cin.get();
    return 0;
}

Create a Speech configuration

Before you can initialize an IntentRecognizer object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

Important

Your starter key and authoring keys will not work. You must use the prediction key and location that you created earlier. For more information, see Create a LUIS app for intent recognition.

Insert this code in the recognizeIntent() method. Make sure you update these values:

  • Replace "YourLanguageUnderstandingSubscriptionKey" with your LUIS prediction key.
  • Replace "YourLanguageUnderstandingServiceRegion" with your LUIS location. Use the Region identifier from the region list.

Tip

If you need help finding these values, see Create a LUIS app for intent recognition.

auto config = SpeechConfig::FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

This sample uses the FromSubscription() method to build the SpeechConfig. For a full list of available methods, see SpeechConfig Class.

The Speech SDK defaults to recognizing speech in en-US. For information on choosing the source language, see Specify source language for speech to text.
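
For example, to recognize German instead of the default, you could set the language on the config before creating the recognizer; a minimal sketch using the standard SetSpeechRecognitionLanguage setter ("de-DE" is just an illustrative locale):

// Illustrative only: switch the recognition language from the en-US default.
config->SetSpeechRecognitionLanguage("de-DE");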

Initialize an IntentRecognizer

Now, let's create an IntentRecognizer. Insert this code in the recognizeIntent() method, right below your Speech configuration.

auto recognizer = IntentRecognizer::FromConfig(config);

Add a LanguageUnderstandingModel and intents

You need to associate a LanguageUnderstandingModel with the intent recognizer, and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.

Insert this code below your IntentRecognizer. Make sure that you replace "YourLanguageUnderstandingAppId" with your LUIS app ID.

Tip

If you need help finding this value, see Create a LUIS app for intent recognition.

auto model = LanguageUnderstandingModel::FromAppId("YourLanguageUnderstandingAppId");
recognizer->AddIntent(model, "HomeAutomation.TurnOn");
recognizer->AddIntent(model, "HomeAutomation.TurnOff");

This example uses the AddIntent() function to add intents individually. If you want to add all intents from a model, call AddAllIntents(model) instead, as sketched below.
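
A minimal sketch of the all-intents variant, using the same recognizer and model objects from the snippet above:

// Registers every intent defined in the LUIS model, instead of naming each one.
recognizer->AddAllIntents(model);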

Recognize an intent

From the IntentRecognizer object, you're going to call the RecognizeOnceAsync() method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified it should stop recognizing speech. For simplicity, we'll wait on the returned future to complete.

Insert this code below your model:

auto result = recognizer->RecognizeOnceAsync().get();

Display the recognition results (or errors)

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to the console.

Insert this code below auto result = recognizer->RecognizeOnceAsync().get();:

if (result->Reason == ResultReason::RecognizedIntent)
{
    cout << "RECOGNIZED: Text=" << result->Text << std::endl;
    cout << "  Intent Id: " << result->IntentId << std::endl;
    cout << "  Intent Service JSON: " << result->Properties.GetProperty(PropertyId::LanguageUnderstandingServiceResponse_JsonResult) << std::endl;
}
else if (result->Reason == ResultReason::RecognizedSpeech)
{
    cout << "RECOGNIZED: Text=" << result->Text << " (intent could not be recognized)" << std::endl;
}
else if (result->Reason == ResultReason::NoMatch)
{
    cout << "NOMATCH: Speech could not be recognized." << std::endl;
}
else if (result->Reason == ResultReason::Canceled)
{
    auto cancellation = CancellationDetails::FromResult(result);
    cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;

    if (cancellation->Reason == CancellationReason::Error)
    {
        cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
        cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
        cout << "CANCELED: Did you update the subscription info?" << std::endl;
    }
}

Check your code

At this point, your code should look like this:

Note

We've added some comments to this version.

#include "stdafx.h"
#include <iostream>
#include <speechapi_cxx.h>

using namespace std;
using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Intent;

void recognizeIntent()
{
    // Creates an instance of a speech config with specified subscription key
    // and service region. Note that in contrast to other services supported by
    // the Cognitive Services Speech SDK, the Language Understanding service
    // requires a specific subscription key from https://www.luis.ai/.
    // The Language Understanding service calls the required key 'endpoint key'.
    // Once you've obtained it, replace below with your own Language Understanding subscription key
    // and service region (e.g., "westus").
    // The default recognition language is "en-us".
    auto config = SpeechConfig::FromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

    // Creates an intent recognizer using microphone as audio input.
    auto recognizer = IntentRecognizer::FromConfig(config);

    // Creates a Language Understanding model using the app id, and adds specific intents from your model
    auto model = LanguageUnderstandingModel::FromAppId("YourLanguageUnderstandingAppId");
    recognizer->AddIntent(model, "HomeAutomation.TurnOn");
    recognizer->AddIntent(model, "HomeAutomation.TurnOff");

    cout << "Say something...\n";

    // Starts intent recognition, and returns after a single utterance is recognized. The end of a
    // single utterance is determined by listening for silence at the end or until a maximum of 15
    // seconds of audio is processed.  The task returns the recognition text as result.
    // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
    // shot recognition like command or query.
    // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
    auto result = recognizer->RecognizeOnceAsync().get();

    // Checks result.
    if (result->Reason == ResultReason::RecognizedIntent)
    {
        cout << "RECOGNIZED: Text=" << result->Text << std::endl;
        cout << "  Intent Id: " << result->IntentId << std::endl;
        cout << "  Intent Service JSON: " << result->Properties.GetProperty(PropertyId::LanguageUnderstandingServiceResponse_JsonResult) << std::endl;
    }
    else if (result->Reason == ResultReason::RecognizedSpeech)
    {
        cout << "RECOGNIZED: Text=" << result->Text << " (intent could not be recognized)" << std::endl;
    }
    else if (result->Reason == ResultReason::NoMatch)
    {
        cout << "NOMATCH: Speech could not be recognized." << std::endl;
    }
    else if (result->Reason == ResultReason::Canceled)
    {
        auto cancellation = CancellationDetails::FromResult(result);
        cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;

        if (cancellation->Reason == CancellationReason::Error)
        {
            cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
            cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
            cout << "CANCELED: Did you update the subscription info?" << std::endl;
        }
    }
}

int wmain()
{
    recognizeIntent();
    cout << "Please press a key to continue.\n";
    cin.get();
    return 0;

Build and run your app

Now you're ready to build your app and test speech recognition using the Speech service.

  1. Compile the code - From the menu bar of Visual Studio, choose Build > Build Solution.
  2. Start your app - From the menu bar, choose Debug > Start Debugging, or press F5.
  3. Start recognition - You'll be prompted to speak a phrase in English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.

Next steps


In this quickstart, you'll use the Speech SDK and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

  • Create a SpeechConfig object from your subscription key and region.
  • Create an IntentRecognizer object using the SpeechConfig object from above.
  • Using the IntentRecognizer object, start the recognition process for a single utterance.
  • Inspect the IntentRecognitionResult returned.

You can view or download all Speech SDK Java Samples on GitHub.

Prerequisites

Before you get started:

Create a LUIS app for intent recognition

To complete the intent recognition quickstart, you'll need to create a LUIS account and a project using the LUIS preview portal. This quickstart only requires a LUIS subscription. A Speech service subscription isn't required.

The first thing you'll need to do is create a LUIS account and app using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call using the Speech SDK.

Follow these instructions to create your LUIS app:

When you're done, you'll need four things:

  • Re-publish with Speech priming toggled on
  • Your LUIS Primary key
  • Your LUIS Location
  • Your LUIS App ID

Here's where you can find this information in the LUIS preview portal:

  1. From the LUIS preview portal, select your app, then select the Publish button.

  2. Select the Production slot and, if you're using en-US, toggle the Speech priming option to the On position. Then select the Publish button.

    Important

    Speech priming is highly recommended, as it will improve speech recognition accuracy.

    Publish LUIS to endpoint

  3. From the LUIS preview portal, select Manage, then select Azure Resources. On this page, you'll find your LUIS key and location (sometimes referred to as region).

    LUIS key and location

  4. After you've got your key and location, you'll need the app ID. Select Application Settings -- your app ID is available on this page.

    LUIS app ID

Open your project

  1. Open your preferred IDE.
  2. Load your project and open Main.java.

Start with some boilerplate code

Let's add some code that works as a skeleton for our project.

package speechsdk.quickstart;

import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.intent.*;

/**
 * Quickstart: recognize speech using the Speech SDK for Java.
 */
public class Main {

    /**
     * @param args Arguments are ignored in this sample.
     */
    public static void main(String[] args) {
        try {
        } catch (Exception ex) {
            System.out.println("Unexpected exception: " + ex.getMessage());

            assert(false);
            System.exit(1);
        }
    }
}

Create a Speech configuration

Before you can initialize an IntentRecognizer object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

Insert this code in the try/catch block in main(). Make sure you update these values:

  • Replace "YourLanguageUnderstandingSubscriptionKey" with your LUIS prediction key.
  • Replace "YourLanguageUnderstandingServiceRegion" with your LUIS location. Use the Region identifier from the region list.

Tip

If you need help finding these values, see Create a LUIS app for intent recognition.

SpeechConfig config = SpeechConfig.fromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

This sample uses the fromSubscription() method to build the SpeechConfig. For a full list of available methods, see SpeechConfig Class.

The Speech SDK defaults to recognizing speech in en-US. For information on choosing the source language, see Specify source language for speech to text.
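
For example, to recognize German instead of the default, you could set the language on the config before creating the recognizer; a minimal sketch using the standard setSpeechRecognitionLanguage setter ("de-DE" is just an illustrative locale):

// Illustrative only: switch the recognition language from the en-US default.
config.setSpeechRecognitionLanguage("de-DE");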

Initialize an IntentRecognizer

Now, let's create an IntentRecognizer. Insert this code right below your Speech configuration.

IntentRecognizer recognizer = new IntentRecognizer(config);

Add a LanguageUnderstandingModel and intents

You need to associate a LanguageUnderstandingModel with the intent recognizer, and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.

Insert this code below your IntentRecognizer. Make sure that you replace "YourLanguageUnderstandingAppId" with your LUIS app ID.

Tip

If you need help finding this value, see Create a LUIS app for intent recognition.

LanguageUnderstandingModel model = LanguageUnderstandingModel.fromAppId("YourLanguageUnderstandingAppId");
recognizer.addIntent(model, "HomeAutomation.TurnOn");
recognizer.addIntent(model, "HomeAutomation.TurnOff");

This example uses the addIntent() function to add intents individually. If you want to add all intents from a model, call addAllIntents(model) instead, as sketched below.
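
A minimal sketch of the all-intents variant, using the same recognizer and model objects from the snippet above:

// Registers every intent defined in the LUIS model, instead of naming each one.
recognizer.addAllIntents(model);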

Recognize an intent

From the IntentRecognizer object, you're going to call the recognizeOnceAsync() method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified it should stop recognizing speech.

Insert this code below your model:

IntentRecognitionResult result = recognizer.recognizeOnceAsync().get();

Display the recognition results (or errors)

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to the console.

Insert this code below your call to recognizeOnceAsync().

if (result.getReason() == ResultReason.RecognizedIntent) {
    System.out.println("RECOGNIZED: Text=" + result.getText());
    System.out.println("    Intent Id: " + result.getIntentId());
    System.out.println("    Intent Service JSON: " + result.getProperties().getProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult));
}
else if (result.getReason() == ResultReason.RecognizedSpeech) {
    System.out.println("RECOGNIZED: Text=" + result.getText());
    System.out.println("    Intent not recognized.");
}
else if (result.getReason() == ResultReason.NoMatch) {
    System.out.println("NOMATCH: Speech could not be recognized.");
}
else if (result.getReason() == ResultReason.Canceled) {
    CancellationDetails cancellation = CancellationDetails.fromResult(result);
    System.out.println("CANCELED: Reason=" + cancellation.getReason());

    if (cancellation.getReason() == CancellationReason.Error) {
        System.out.println("CANCELED: ErrorCode=" + cancellation.getErrorCode());
        System.out.println("CANCELED: ErrorDetails=" + cancellation.getErrorDetails());
        System.out.println("CANCELED: Did you update the subscription info?");
    }
}

Release resources

It's important to release the speech resources when you're done using them. Insert this code at the end of the try/catch block:

result.close();
recognizer.close();

Check your code

At this point, your code should look like this:

Note

We've added some comments to this version.

package speechsdk.quickstart;

import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.intent.*;

/**
 * Quickstart: recognize speech using the Speech SDK for Java.
 */
public class Main {

    /**
     * @param args Arguments are ignored in this sample.
     */
    public static void main(String[] args) {
        try {
            // Creates an instance of a speech config with specified
            // subscription key (called 'endpoint key' by the Language Understanding service)
            // and service region. Replace with your own subscription (endpoint) key
            // and service region (e.g., "westus2").
            // The default language is "en-us".
            SpeechConfig config = SpeechConfig.fromSubscription("YourLanguageUnderstandingSubscriptionKey", "YourLanguageUnderstandingServiceRegion");

            // Creates an intent recognizer using microphone as audio input.
            IntentRecognizer recognizer = new IntentRecognizer(config);

            // Creates a language understanding model using the app id, and adds specific intents from your model
            LanguageUnderstandingModel model = LanguageUnderstandingModel.fromAppId("YourLanguageUnderstandingAppId");
            recognizer.addIntent(model, "HomeAutomation.TurnOn");
            recognizer.addIntent(model, "HomeAutomation.TurnOff");

            System.out.println("Say something...");

            // Starts recognition. It returns when the first utterance has been recognized.
            IntentRecognitionResult result = recognizer.recognizeOnceAsync().get();

            // Checks result.
            if (result.getReason() == ResultReason.RecognizedIntent) {
                System.out.println("RECOGNIZED: Text=" + result.getText());
                System.out.println("    Intent Id: " + result.getIntentId());
                System.out.println("    Intent Service JSON: " + result.getProperties().getProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult));
            }
            else if (result.getReason() == ResultReason.RecognizedSpeech) {
                System.out.println("RECOGNIZED: Text=" + result.getText());
                System.out.println("    Intent not recognized.");
            }
            else if (result.getReason() == ResultReason.NoMatch) {
                System.out.println("NOMATCH: Speech could not be recognized.");
            }
            else if (result.getReason() == ResultReason.Canceled) {
                CancellationDetails cancellation = CancellationDetails.fromResult(result);
                System.out.println("CANCELED: Reason=" + cancellation.getReason());

                if (cancellation.getReason() == CancellationReason.Error) {
                    System.out.println("CANCELED: ErrorCode=" + cancellation.getErrorCode());
                    System.out.println("CANCELED: ErrorDetails=" + cancellation.getErrorDetails());
                    System.out.println("CANCELED: Did you update the subscription info?");
                }
            }

            result.close();
            recognizer.close();
        } catch (Exception ex) {
            System.out.println("Unexpected exception: " + ex.getMessage());

            assert(false);
            System.exit(1);
        }
    }
}

Build and run your app

Press F11, or select Run > Debug. The next 15 seconds of speech input from your microphone will be recognized and logged in the console window.

Next steps

In this quickstart, you'll use the Speech SDK and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

  • Create a SpeechConfig object from your subscription key and region.
  • Create an IntentRecognizer object using the SpeechConfig object from above.
  • Using the IntentRecognizer object, start the recognition process for a single utterance.
  • Inspect the IntentRecognitionResult returned.

You can view or download all Speech SDK Python Samples on GitHub.

Prerequisites

Before you get started:

Create a LUIS app for intent recognition

To complete the intent recognition quickstart, you'll need to create a LUIS account and a project using the LUIS preview portal. This quickstart only requires a LUIS subscription. A Speech service subscription isn't required.

The first thing you'll need to do is create a LUIS account and app using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call using the Speech SDK.

Follow these instructions to create your LUIS app:

When you're done, you'll need four things:

  • Re-publish with Speech priming toggled on
  • Your LUIS Primary key
  • Your LUIS Location
  • Your LUIS App ID

Here's where you can find this information in the LUIS preview portal:

  1. From the LUIS preview portal, select your app, then select the Publish button.

  2. Select the Production slot and, if you're using en-US, toggle the Speech priming option to the On position. Then select the Publish button.

    Important

    Speech priming is highly recommended, as it will improve speech recognition accuracy.

    Publish LUIS to endpoint

  3. From the LUIS preview portal, select Manage, then select Azure Resources. On this page, you'll find your LUIS key and location (sometimes referred to as region).

    LUIS key and location

  4. After you've got your key and location, you'll need the app ID. Select Application Settings -- your app ID is available on this page.

    LUIS app ID

Open your project

  1. Open your preferred IDE.
  2. Create a new project, create a file named quickstart.py, and open it.

Start with some boilerplate code

Let's add some code that works as a skeleton for our project.

import azure.cognitiveservices.speech as speechsdk

print("Say something...")

Create a Speech configuration

Before you can initialize an IntentRecognizer object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

Insert this code in quickstart.py. Make sure you update these values:

  • Replace "YourLanguageUnderstandingSubscriptionKey" with your LUIS prediction key.
  • Replace "YourLanguageUnderstandingServiceRegion" with your LUIS location. Use the Region identifier from the region list.

Tip

If you need help finding these values, see Create a LUIS app for intent recognition.

intent_config = speechsdk.SpeechConfig(subscription="YourLanguageUnderstandingSubscriptionKey", region="YourLanguageUnderstandingServiceRegion")

This sample constructs the SpeechConfig object using your LUIS key and region. For a full list of available methods, see SpeechConfig Class.

The Speech SDK defaults to recognizing speech in en-US. For information on choosing the source language, see Specify source language for speech to text.
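
For example, to recognize German instead of the default, you could set the language on the config before creating the recognizer; a minimal sketch using the standard speech_recognition_language property ("de-DE" is just an illustrative locale):

# Illustrative only: switch the recognition language from the en-US default.
intent_config.speech_recognition_language = "de-DE"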

Initialize an IntentRecognizer

Now, let's create an IntentRecognizer. Insert this code right below your Speech configuration.

intent_recognizer = speechsdk.intent.IntentRecognizer(speech_config=intent_config)

Add a LanguageUnderstandingModel and intents

You need to associate a LanguageUnderstandingModel with the intent recognizer and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.

Insert this code below your IntentRecognizer. Make sure that you replace "YourLanguageUnderstandingAppId" with your LUIS app ID.

Tip

If you need help finding this value, see Create a LUIS app for intent recognition.

model = speechsdk.intent.LanguageUnderstandingModel(app_id="YourLanguageUnderstandingAppId")
intents = [
    (model, "HomeAutomation.TurnOn"),
    (model, "HomeAutomation.TurnOff")
]
intent_recognizer.add_intents(intents)

This example uses the add_intents() function to add a list of explicitly defined intents. If you want to add all intents from a model, call add_all_intents(model) instead, as sketched below.
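
A minimal sketch of the all-intents variant, using the same recognizer and model objects from the snippet above:

# Registers every intent defined in the LUIS model, instead of naming each one.
intent_recognizer.add_all_intents(model)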

Recognize an intent

From the IntentRecognizer object, you're going to call the recognize_once() method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified it should stop recognizing speech.

Insert this code below your model.

intent_result = intent_recognizer.recognize_once()

Display the recognition results (or errors)

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to the console.

Below your call to recognize_once(), add this code.

    print("Recognized: {}".format(intent_result.text))
elif intent_result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(intent_result.no_match_details))
elif intent_result.reason == speechsdk.ResultReason.Canceled:
    print("Intent recognition canceled: {}".format(intent_result.cancellation_details.reason))
    if intent_result.cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(intent_result.cancellation_details.error_details))
# </IntentRecognitionOnceWithMic>

Check your code

At this point, your code should look like this.

Note

We've added some comments to this version.

import azure.cognitiveservices.speech as speechsdk

print("Say something...")

"""performs one-shot intent recognition from input from the default microphone"""
# Set up the config for the intent recognizer (remember that this uses the Language Understanding key, not the Speech Services key)!
intent_config = speechsdk.SpeechConfig(subscription="YourLanguageUnderstandingSubscriptionKey", region="YourLanguageUnderstandingServiceRegion")

# Set up the intent recognizer
intent_recognizer = speechsdk.intent.IntentRecognizer(speech_config=intent_config)

# set up the intents that are to be recognized. These can be a mix of simple phrases and
# intents specified through a LanguageUnderstanding Model.
model = speechsdk.intent.LanguageUnderstandingModel(app_id="YourLanguageUnderstandingAppId")
intents = [
    (model, "HomeAutomation.TurnOn"),
    (model, "HomeAutomation.TurnOff")
]
intent_recognizer.add_intents(intents)

# Starts intent recognition, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. It returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
intent_result = intent_recognizer.recognize_once()

# Check the results
if intent_result.reason == speechsdk.ResultReason.RecognizedIntent:
    print("Recognized: \"{}\" with intent id `{}`".format(intent_result.text, intent_result.intent_id))
elif intent_result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized: {}".format(intent_result.text))
elif intent_result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(intent_result.no_match_details))
elif intent_result.reason == speechsdk.ResultReason.Canceled:
    print("Intent recognition canceled: {}".format(intent_result.cancellation_details.reason))
    if intent_result.cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(intent_result.cancellation_details.error_details))

Build and run your app

Run the sample from the console or in your IDE:

python quickstart.py

The next 15 seconds of speech input from your microphone will be recognized and logged in the console window.

Next steps

In this quickstart, you'll use the Speech SDK and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

  • Create a SpeechConfig object from your subscription key and region.
  • Create an IntentRecognizer object using the SpeechConfig object from above.
  • Using the IntentRecognizer object, start the recognition process for a single utterance.
  • Inspect the IntentRecognitionResult returned.

You can view or download all Speech SDK JavaScript Samples on GitHub.

Prerequisites

Before you get started:

Create a LUIS app for intent recognition

To complete the intent recognition quickstart, you'll need to create a LUIS account and a project using the LUIS preview portal. This quickstart only requires a LUIS subscription. A Speech service subscription isn't required.

The first thing you'll need to do is create a LUIS account and app using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call using the Speech SDK.

Follow these instructions to create your LUIS app:

When you're done, you'll need four things:

  • Re-publish with Speech priming toggled on
  • Your LUIS Primary key
  • Your LUIS Location
  • Your LUIS App ID

Here's where you can find this information in the LUIS preview portal:

  1. From the LUIS preview portal, select your app, then select the Publish button.

  2. Select the Production slot and, if you're using en-US, toggle the Speech priming option to the On position. Then select the Publish button.

    Important

    Speech priming is highly recommended, as it will improve speech recognition accuracy.

    Publish LUIS to endpoint

  3. From the LUIS preview portal, select Manage, then select Azure Resources. On this page, you'll find your LUIS key and location (sometimes referred to as region).

    LUIS key and location

  4. After you've got your key and location, you'll need the app ID. Select Application Settings -- your app ID is available on this page.

    LUIS app ID

Start with some boilerplate code

Let's add some code that works as a skeleton for our project.

    <!DOCTYPE html>
    <html>
    <head>
    <title>Microsoft Cognitive Services Speech SDK JavaScript Quickstart</title>
    <meta charset="utf-8" />
    </head>
    <body style="font-family:'Helvetica Neue',Helvetica,Arial,sans-serif; font-size:13px;">
    </body>
    </html>

Add UI elements

Now we'll add some basic UI for input boxes, reference the Speech SDK's JavaScript, and grab an authorization token if available.

<body style="font-family:'Helvetica Neue',Helvetica,Arial,sans-serif; font-size:13px;">
  <div id="content" style="display:none">
    <table width="100%">
      <tr>
        <td></td>
        <td><h1 style="font-weight:500;">Microsoft Cognitive Services Speech SDK JavaScript Quickstart</h1></td>
      </tr>
      <tr>
        <td align="right"><a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started" target="_blank">Subscription</a>:</td>
        <td><input id="subscriptionKey" type="text" size="40" value="subscription"></td>
      </tr>
      <tr>
        <td align="right">Region</td>
        <td><input id="serviceRegion" type="text" size="40" value="YourServiceRegion"></td>
      </tr>
      <tr>
        <td align="right">Application ID:</td>
        <td><input id="appId" type="text" size="60" value="YOUR_LANGUAGE_UNDERSTANDING_APP_ID"></td>
      </tr>
      <tr>
        <td></td>
        <td><button id="startIntentRecognizeAsyncButton">Start Intent Recognition</button></td>
      </tr>
      <tr>
        <td align="right" valign="top">Input Text</td>
        <td><textarea id="phraseDiv" style="display: inline-block;width:500px;height:200px"></textarea></td>
      </tr>
      <tr>
        <td align="right" valign="top">Result</td>
        <td><textarea id="statusDiv" style="display: inline-block;width:500px;height:100px"></textarea></td>
      </tr>
    </table>
  </div>

  <script src="microsoft.cognitiveservices.speech.sdk.bundle.js"></script>

  <script>
  // Note: Replace the URL with a valid endpoint to retrieve
  //       authorization tokens for your subscription.
  var authorizationEndpoint = "token.php";

  function RequestAuthorizationToken() {
    if (authorizationEndpoint) {
      var a = new XMLHttpRequest();
      a.open("GET", authorizationEndpoint);
      a.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
      a.send("");
      a.onload = function() {
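                // The response is a JWT access token; decode the base64url payload
                // (the middle segment) to read the region the token was issued for.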
                var token = JSON.parse(atob(this.responseText.split(".")[1]));
                serviceRegion.value = token.region;
                authorizationToken = this.responseText;
                subscriptionKey.disabled = true;
                subscriptionKey.value = "using authorization token (hit F5 to refresh)";
                console.log("Got an authorization token: " + token);
      }
    }
  }
  </script>

  <script>
    // status fields and start button in UI
    var phraseDiv;
    var statusDiv;
    var startIntentRecognizeAsyncButton;

    // subscription key, region, and appId for LUIS services.
    var subscriptionKey, serviceRegion, appId;
    var authorizationToken;
    var SpeechSDK;
    var recognizer;

    document.addEventListener("DOMContentLoaded", function () {
      startIntentRecognizeAsyncButton = document.getElementById("startIntentRecognizeAsyncButton");
      subscriptionKey = document.getElementById("subscriptionKey");
      serviceRegion = document.getElementById("serviceRegion");
      appId = document.getElementById("appId");
      phraseDiv = document.getElementById("phraseDiv");
      statusDiv = document.getElementById("statusDiv");

      startIntentRecognizeAsyncButton.addEventListener("click", function () {
        startIntentRecognizeAsyncButton.disabled = true;
        phraseDiv.innerHTML = "";
        statusDiv.innerHTML = "";
      });

      if (!!window.SpeechSDK) {
        SpeechSDK = window.SpeechSDK;
        startIntentRecognizeAsyncButton.disabled = false;

        document.getElementById('content').style.display = 'block';
        document.getElementById('warning').style.display = 'none';

        // in case we have a function for getting an authorization token, call it.
        if (typeof RequestAuthorizationToken === "function") {
          RequestAuthorizationToken();
        }
      }
    });
  </script>

Create a Speech configuration

Before you can initialize an IntentRecognizer object, you need to create a configuration that uses your subscription key and subscription region. Insert this code in the startIntentRecognizeAsyncButton.addEventListener() method.

Note

The Speech SDK will default to recognizing using en-US for the language. See Specify source language for speech to text for information on choosing the source language.

        // if we got an authorization token, use the token. Otherwise use the provided subscription key
        var speechConfig;
        if (authorizationToken) {
          speechConfig = SpeechSDK.SpeechConfig.fromAuthorizationToken(authorizationToken, serviceRegion.value);
        } else {
          if (subscriptionKey.value === "" || subscriptionKey.value === "subscription") {
            alert("Please enter your Microsoft Cognitive Services Speech subscription key!");
            startIntentRecognizeAsyncButton.disabled = false;
            return;
          }
          speechConfig = SpeechSDK.SpeechConfig.fromSubscription(subscriptionKey.value, serviceRegion.value);
        }

        speechConfig.speechRecognitionLanguage = "en-US";
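
As the note above mentions, en-US is only the default; you can point the recognizer at a different source language by changing this property. A minimal sketch, using de-DE as an assumed example locale; pick one your LUIS app is actually trained for:

        // Sketch: override the en-US default. The de-DE locale is only an assumed
        // example; pick a locale your LUIS app supports.
        // speechConfig.speechRecognitionLanguage = "de-DE";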

Create an Audio configuration

Now, you need to create an AudioConfig object that points to your input device. Insert this code in the startIntentRecognizeAsyncButton.addEventListener() method, right below your Speech configuration.

        var audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
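
If you'd rather test without a microphone, the SDK can also read audio from a WAV file. A minimal sketch, assuming you add a file picker such as <input type="file" id="wavFile"> to the page (the wavFile element is hypothetical, not part of this quickstart):

        // Sketch: use a user-selected WAV file instead of the default microphone.
        // The "wavFile" input element is an assumed addition to the page.
        // var file = document.getElementById("wavFile").files[0];
        // var audioConfig = SpeechSDK.AudioConfig.fromWavFileInput(file);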

Initialize an IntentRecognizer

Now, let's create the IntentRecognizer object using the SpeechConfig and AudioConfig objects created earlier. Insert this code in the startIntentRecognizeAsyncButton.addEventListener() method.

        recognizer = new SpeechSDK.IntentRecognizer(speechConfig, audioConfig);

Add a LanguageUnderstandingModel and Intents

You need to associate a LanguageUnderstandingModel with the intent recognizer and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.

Insert this code below your IntentRecognizer. Make sure that you enter your LUIS app ID in the appId input field; the code below skips the model if that field still contains the YOUR_LANGUAGE_UNDERSTANDING_APP_ID placeholder.

        if (appId.value !== "" && appId.value !== "YOUR_LANGUAGE_UNDERSTANDING_APP_ID") {
          var lm = SpeechSDK.LanguageUnderstandingModel.fromAppId(appId.value);

          recognizer.addAllIntents(lm);
        }
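
addAllIntents() pulls in every intent from the model. If you only want a few of them, the recognizer also exposes addIntentWithLanguageModel(); a minimal sketch, assuming the intent names of the prebuilt HomeAutomation domain (verify the exact names in your LUIS app):

        // Sketch: add only selected intents instead of all of them. The intent names
        // below assume the prebuilt HomeAutomation domain; check your LUIS app.
        // recognizer.addIntentWithLanguageModel("TurnOn", lm, "HomeAutomation.TurnOn");
        // recognizer.addIntentWithLanguageModel("TurnOff", lm, "HomeAutomation.TurnOff");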

Recognize an intent

From the IntentRecognizer object, you're going to call the recognizeOnceAsync() method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified, it should stop recognizing speech.

Insert this code below the model addition:

        recognizer.recognizeOnceAsync(
          function (result) {
            window.console.log(result);
  
            phraseDiv.innerHTML = result.text + "\r\n";
  
            statusDiv.innerHTML += "(continuation) Reason: " + SpeechSDK.ResultReason[result.reason];
            switch (result.reason) {
              case SpeechSDK.ResultReason.RecognizedSpeech:
                statusDiv.innerHTML += " Text: " + result.text;
                break;
              case SpeechSDK.ResultReason.RecognizedIntent:
                statusDiv.innerHTML += " Text: " + result.text + " IntentId: " + result.intentId;
                
                // The actual JSON returned from Language Understanding is a bit more complex to get to, but it is available for things like
                // the entity name and type if part of the intent.
                statusDiv.innerHTML += " Intent JSON: " + result.properties.getProperty(SpeechSDK.PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
                phraseDiv.innerHTML += result.properties.getProperty(SpeechSDK.PropertyId.LanguageUnderstandingServiceResponse_JsonResult) + "\r\n";
                break;
              case SpeechSDK.ResultReason.NoMatch:
                var noMatchDetail = SpeechSDK.NoMatchDetails.fromResult(result);
                statusDiv.innerHTML += " NoMatchReason: " + SpeechSDK.NoMatchReason[noMatchDetail.reason];
                break;
              case SpeechSDK.ResultReason.Canceled:
                var cancelDetails = SpeechSDK.CancellationDetails.fromResult(result);
                statusDiv.innerHTML += " CancellationReason: " + SpeechSDK.CancellationReason[cancelDetails.reason];
              
              if (cancelDetails.reason === SpeechSDK.CancellationReason.Error) {
                statusDiv.innerHTML += ": " + cancelDetails.errorDetails;
              }
            break;
            }
            statusDiv.innerHTML += "\r\n";
            startIntentRecognizeAsyncButton.disabled = false;
          },
          function (err) {
            window.console.log(err);
    
            phraseDiv.innerHTML += "ERROR: " + err;
            startIntentRecognizeAsyncButton.disabled = false;
          });
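
The raw JSON surfaced through the LanguageUnderstandingServiceResponse_JsonResult property carries more than the intent ID. A hedged sketch of pulling out the top intent and entities, assuming the LUIS v2 response shape (topScoringIntent, entities); verify the property names against your own app's output:

        // Sketch: inspect the raw LUIS JSON. Property names assume the LUIS v2
        // response format (topScoringIntent, entities); verify with your own app.
        function logLuisDetails(result) {
          var json = result.properties.getProperty(
            SpeechSDK.PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
          if (!json) return;
          var luis = JSON.parse(json);
          window.console.log("Top intent: " + luis.topScoringIntent.intent +
            " (score " + luis.topScoringIntent.score + ")");
          (luis.entities || []).forEach(function (e) {
            window.console.log("Entity: " + e.entity + " of type " + e.type);
          });
        }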

Check your code

<!DOCTYPE html>
<html>
<head>
  <title>Microsoft Cognitive Services Speech SDK JavaScript Quickstart</title>
  <meta charset="utf-8" />
</head>
<body style="font-family:'Helvetica Neue',Helvetica,Arial,sans-serif; font-size:13px;">
  <!-- <uidiv> -->
  <div id="warning">
    <h1 style="font-weight:500;">Speech Recognition Speech SDK not found (microsoft.cognitiveservices.speech.sdk.bundle.js missing).</h1>
  </div>
  
  <div id="content" style="display:none">
    <table width="100%">
      <tr>
        <td></td>
        <td><h1 style="font-weight:500;">Microsoft Cognitive Services Speech SDK JavaScript Quickstart</h1></td>
      </tr>
      <tr>
        <td align="right"><a href="https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started" target="_blank">Subscription</a>:</td>
        <td><input id="subscriptionKey" type="text" size="40" value="subscription"></td>
      </tr>
      <tr>
        <td align="right">Region</td>
        <td><input id="serviceRegion" type="text" size="40" value="YourServiceRegion"></td>
      </tr>
      <tr>
        <td align="right">Application ID:</td>
        <td><input id="appId" type="text" size="60" value="YOUR_LANGUAGE_UNDERSTANDING_APP_ID"></td>
      </tr>
      <tr>
        <td></td>
        <td><button id="startIntentRecognizeAsyncButton">Start Intent Recognition</button></td>
      </tr>
      <tr>
        <td align="right" valign="top">Input Text</td>
        <td><textarea id="phraseDiv" style="display: inline-block;width:500px;height:200px"></textarea></td>
      </tr>
      <tr>
        <td align="right" valign="top">Result</td>
        <td><textarea id="statusDiv" style="display: inline-block;width:500px;height:100px"></textarea></td>
      </tr>
    </table>
  </div>
  <!-- </uidiv> -->

  <!-- <speechsdkref> -->
  <!-- Speech SDK reference sdk. -->
  <script src="microsoft.cognitiveservices.speech.sdk.bundle.js"></script>
  <!-- </speechsdkref> -->

  <!-- <authorizationfunction> -->
  <!-- Speech SDK Authorization token -->
  <script>
  // Note: Replace the URL with a valid endpoint to retrieve
  //       authorization tokens for your subscription.
  var authorizationEndpoint = "token.php";

  function RequestAuthorizationToken() {
    if (authorizationEndpoint) {
      var a = new XMLHttpRequest();
      a.open("GET", authorizationEndpoint);
      a.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
      a.send("");
      a.onload = function() {
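        // The response is a JWT access token; decode the base64url payload
        // (the middle segment) to read the region the token was issued for.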
        var token = JSON.parse(atob(this.responseText.split(".")[1]));
        serviceRegion.value = token.region;
        authorizationToken = this.responseText;
        subscriptionKey.disabled = true;
        subscriptionKey.value = "using authorization token (hit F5 to refresh)";
        console.log("Got an authorization token: " + token);
      }
    }
  }
  </script>
  <!-- </authorizationfunction> -->

  <!-- <quickstartcode> -->
  <!-- Speech SDK USAGE -->
  <script>
    // status fields and start button in UI
    var phraseDiv;
    var statusDiv;
    var startIntentRecognizeAsyncButton;

    // subscription key, region, and appId for LUIS services.
    var subscriptionKey, serviceRegion, appId;
    var authorizationToken;
    var SpeechSDK;
    var recognizer;

    document.addEventListener("DOMContentLoaded", function () {

      startIntentRecognizeAsyncButton = document.getElementById("startIntentRecognizeAsyncButton");
      subscriptionKey = document.getElementById("subscriptionKey");
      serviceRegion = document.getElementById("serviceRegion");
      appId = document.getElementById("appId");
      phraseDiv = document.getElementById("phraseDiv");
      statusDiv = document.getElementById("statusDiv");

      startIntentRecognizeAsyncButton.addEventListener("click", function () {
        startIntentRecognizeAsyncButton.disabled = true;
        phraseDiv.innerHTML = "";
        statusDiv.innerHTML = "";

        var audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
        // if we got an authorization token, use the token. Otherwise use the provided subscription key
        var speechConfig;
        if (authorizationToken) {
          speechConfig = SpeechSDK.SpeechConfig.fromAuthorizationToken(authorizationToken, serviceRegion.value);
        } else {
          if (subscriptionKey.value === "" || subscriptionKey.value === "subscription") {
            alert("Please enter your Microsoft Cognitive Services Speech subscription key!");
            startIntentRecognizeAsyncButton.disabled = false;
            return;
          }
          speechConfig = SpeechSDK.SpeechConfig.fromSubscription(subscriptionKey.value, serviceRegion.value);
        }

        speechConfig.speechRecognitionLanguage = "en-US";
        recognizer = new SpeechSDK.IntentRecognizer(speechConfig, audioConfig);

        // Set up a Language Understanding Model from Language Understanding Intelligent Service (LUIS).
        // See https://www.luis.ai/home for more information on LUIS.
        if (appId.value !== "" && appId.value !== "YOUR_LANGUAGE_UNDERSTANDING_APP_ID") {
          var lm = SpeechSDK.LanguageUnderstandingModel.fromAppId(appId.value);

          recognizer.addAllIntents(lm);
        }

        recognizer.recognizeOnceAsync(
          function (result) {
            window.console.log(result);
  
            phraseDiv.innerHTML = result.text + "\r\n";
  
            statusDiv.innerHTML += "(continuation) Reason: " + SpeechSDK.ResultReason[result.reason];
            switch (result.reason) {
              case SpeechSDK.ResultReason.RecognizedSpeech:
                statusDiv.innerHTML += " Text: " + result.text;
                break;
              case SpeechSDK.ResultReason.RecognizedIntent:
                statusDiv.innerHTML += " Text: " + result.text + " IntentId: " + result.intentId;
                
                // The actual JSON returned from Language Understanding is a bit more complex to get to, but it is available for things like
                // the entity name and type if part of the intent.
                statusDiv.innerHTML += " Intent JSON: " + result.properties.getProperty(SpeechSDK.PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
                phraseDiv.innerHTML += result.properties.getProperty(SpeechSDK.PropertyId.LanguageUnderstandingServiceResponse_JsonResult) + "\r\n";
                break;
              case SpeechSDK.ResultReason.NoMatch:
                var noMatchDetail = SpeechSDK.NoMatchDetails.fromResult(result);
                statusDiv.innerHTML += " NoMatchReason: " + SpeechSDK.NoMatchReason[noMatchDetail.reason];
                break;
              case SpeechSDK.ResultReason.Canceled:
                var cancelDetails = SpeechSDK.CancellationDetails.fromResult(result);
                statusDiv.innerHTML += " CancellationReason: " + SpeechSDK.CancellationReason[cancelDetails.reason];
              
              if (cancelDetails.reason === SpeechSDK.CancellationReason.Error) {
                statusDiv.innerHTML += ": " + cancelDetails.errorDetails;
              }
            break;
            }
            statusDiv.innerHTML += "\r\n";
            startIntentRecognizeAsyncButton.disabled = false;
          },
          function (err) {
            window.console.log(err);
    
            phraseDiv.innerHTML += "ERROR: " + err;
            startIntentRecognizeAsyncButton.disabled = false;
          });
        });

      if (!!window.SpeechSDK) {
        SpeechSDK = window.SpeechSDK;
        startIntentRecognizeAsyncButton.disabled = false;

        document.getElementById('content').style.display = 'block';
        document.getElementById('warning').style.display = 'none';

        // in case we have a function for getting an authorization token, call it.
        if (typeof RequestAuthorizationToken === "function") {
          RequestAuthorizationToken();
        }
      }
    });

  </script>
  <!-- </quickstartcode> -->
</body>
</html>

Create the token source (optional)

In case you want to host the web page on a web server, you can optionally provide a token source for your demo application. That way, your subscription key will never leave your server while still allowing users to use speech capabilities without entering any authorization code themselves.

Create a new file named token.php. In this example we assume your web server supports the PHP scripting language with cURL enabled. Enter the following code:

<?php
header('Access-Control-Allow-Origin: ' . $_SERVER['SERVER_NAME']);

// Replace with your own subscription key and service region (e.g., "westus").
$subscriptionKey = 'YourSubscriptionKey';
$region = 'YourServiceRegion';

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://' . $region . '.api.cognitive.microsoft.com/sts/v1.0/issueToken');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, '{}');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json', 'Ocp-Apim-Subscription-Key: ' . $subscriptionKey));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
echo curl_exec($ch);
?>

Note

Authorization tokens only have a limited lifetime. This simplified example does not refresh authorization tokens automatically; as a user, you can manually reload the page or hit F5 to refresh.
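
If you do want to automate the refresh, here is a minimal sketch, assuming the RequestAuthorizationToken() helper from the sample above and a roughly ten-minute token validity (check your service's actual token lifetime):

    // Sketch: periodically re-request the token so it never expires mid-session.
    // The nine-minute interval assumes roughly ten-minute token validity.
    setInterval(function () {
      if (typeof RequestAuthorizationToken === "function") {
        RequestAuthorizationToken();
      }
    }, 9 * 60 * 1000);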

Build and run the sample locally

To launch the app, double-click on the index.html file or open index.html with your favorite web browser. It will present a simple GUI allowing you to enter your LUIS key, LUIS region, and LUIS Application ID. Once those fields have been entered, you can click the appropriate button to trigger a recognition using the microphone.

Note

This method doesn't work on the Safari browser. On Safari, the sample web page needs to be hosted on a web server; Safari doesn't allow websites loaded from a local file to use the microphone.

Build and run the sample via a web server

To launch your app, open your favorite web browser and point it to the public URL that hosts the folder, enter your LUIS region as well as your LUIS Application ID, and trigger a recognition using the microphone. If configured, it will acquire a token from your token source and begin recognizing spoken commands.

Next steps

View or download all Speech SDK Samples on GitHub.

Additional language and platform support

If you've clicked this tab, you probably didn't see a quickstart in your favorite programming language. Don't worry, we have additional quickstart materials and code samples available on GitHub. Use the table to find the right sample for your programming language and platform/OS combination.

Language        Code samples
C#              .NET Framework, .NET Core, UWP, Unity, Xamarin
C++             Windows, Linux, macOS
Java            Android, JRE
JavaScript      Browser, Node.js
Objective-C     iOS, macOS
Python          Windows, Linux, macOS
Swift           iOS, macOS