Configure your Azure Percept voice assistant application

This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial for the process of creating a voice assistant, see Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio.

Update your voice assistant configuration

  1. Open the Azure portal and enter IoT Hub in the search bar. Select IoT Hub in the results to open the IoT Hub page.

  2. On the IoT Hub page, select the IoT Hub to which your device was provisioned.

  3. Select IoT Edge under Automatic Device Management in the left navigation menu to view all devices connected to your IoT Hub.

  4. Select the device to which your voice assistant application was deployed.

  5. Select Set Modules.

    Screenshot of device page with Set Modules highlighted.

  6. Verify that the following entry is present under the Container Registry Credentials section. Add credentials if necessary.

    Name             | Address                     | Username                 | Password
    azureedgedevices | azureedgedevices.azurecr.io | devkitprivatepreviewpull |
  7. In the IoT Edge Modules section, select azureearspeechclientmodule.

    Screenshot showing list of all IoT Edge modules on the device.

  8. Select the Module Settings tab. Verify the following configuration:

    Image URI                                                                    | Restart Policy | Desired Status
    mcr.microsoft.com/azureedgedevices/azureearspeechclientmodule:preload-devkit | always         | running

    If your settings don't match, edit them and select Update.

  9. Select the Environment Variables tab. Verify that there are no environment variables defined.

  10. Select the Module Twin Settings tab. Update the speechConfigs section as follows:

    "speechConfigs": {
        "appId": "<Application id for custom command project>",
        "key": "<Speech Resource key for custom command project>",
        "region": "<Region for the speech service>",
        "keywordModelUrl": "https://aedsamples.blob.core.windows.net/speech/keyword-tables/computer.table",
        "keyword": "computer"
    }
    

    Note

    The keyword used above is a default, publicly available keyword. To use a custom keyword instead, upload your keyword table file to Blob Storage. The blob container must be configured for either anonymous container access or anonymous blob access.
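The speechConfigs block from step 10 can be sanity-checked offline before you paste it into the Module Twin Settings tab. The following Python sketch is an illustrative helper, not part of Azure Percept; it uses the same placeholder values as above and verifies that every required field is present:

```python
import json

# Placeholders only -- substitute the values from your Custom Commands project.
speech_configs = {
    "speechConfigs": {
        "appId": "<Application id for custom command project>",
        "key": "<Speech Resource key for custom command project>",
        "region": "<Region for the speech service>",
        "keywordModelUrl": "https://aedsamples.blob.core.windows.net/speech/keyword-tables/computer.table",
        "keyword": "computer",
    }
}

# Sanity-check that every required field is present and non-empty
# before pasting the JSON into the Module Twin Settings tab.
required = {"appId", "key", "region", "keywordModelUrl", "keyword"}
missing = required - set(speech_configs["speechConfigs"])
assert not missing, f"missing fields: {missing}"

print(json.dumps(speech_configs["speechConfigs"], indent=4))
```

Running the script prints the fully formed block, ready to paste into the module twin.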

How to find your appId, key, and region

To locate your appId, key, and region, go to Speech Studio:

  1. Sign in and select the appropriate speech resource.

  2. On the Speech Studio home page, select Custom Commands under Voice Assistants.

  3. Select your target project.

    Screenshot of project page in Speech Studio.

  4. Select Settings on the left-hand menu panel.

  5. The appId and key are located on the General settings tab.

    Screenshot of speech project general settings.

  6. To find your region, open the LUIS resources tab in the settings. The region is shown in the Authoring resource selection.

    Screenshot of speech project LUIS resources.

  7. Return to the Module Twin Settings tab in the Azure portal. After entering your speechConfigs information, select Update.

  8. Select the Routes tab at the top of the Set modules page. Ensure you have a route with the following value:

    FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream
    

    Add the route if it doesn't exist.

  9. Select Review + Create.

  10. Select Create.
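As a quick offline check, the upstream route from step 8 can be assembled and verified with a short script. This is an illustrative sketch; the route name AzureEarSpeechClientToIoTHub is an assumed example, not a value the deployment requires:

```python
import json

# Minimal routes fragment for a deployment manifest. The route name is an
# illustrative choice; only the route expression itself matters here.
routes = {
    "routes": {
        "AzureEarSpeechClientToIoTHub":
            "FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream"
    }
}

# Verify that at least one route forwards the module's output upstream,
# matching the route expression required in step 8.
expected = "FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream"
assert expected in routes["routes"].values(), "upstream route is missing"

print(json.dumps(routes, indent=2))
```

If the assertion passes, the routes section contains the required forwarding rule; the same expression should appear on the Routes tab of the Set modules page.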

Next steps

After updating your voice assistant configuration, return to the demo in Azure Percept Studio to interact with the application.