Query deployment for intent predictions

After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance, based on the model assigned to the deployment. You can query the deployment programmatically through the prediction API or through the client libraries (Azure SDK).

Test deployed model

You can use Language Studio to submit an utterance, get predictions, and visualize the results.

To test your model from Language Studio:

  1. Select Testing deployments from the left side menu.

  2. Select the model you want to test. You can only test models that are assigned to deployments.

  3. From the deployment name dropdown, select your deployment name.

  4. In the text box, enter an utterance to test.

  5. From the top menu, select Run the test.

  6. After you run the test, the model's response appears in the result pane. You can view the results in the entities cards view or in JSON format.

    A screenshot showing how to test a model in Language Studio.


Send an orchestration workflow request

  1. After the deployment job completes successfully, select the deployment you want to use, and from the top menu select Get prediction URL.

    A screenshot showing how to get a prediction URL.

  2. In the window that appears, copy the sample request URL and body into your command line. Replace <YOUR_QUERY_HERE> with the actual text you want to send to extract intents and entities from.

  3. Submit the POST cURL request in your terminal or command prompt. If the request was successful, you'll receive a 202 response with the API results.
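If you prefer to build the request yourself rather than copy it from Language Studio, the sketch below shows the general shape of the JSON body sent to the conversation analysis runtime for an orchestration workflow query. The endpoint, project name, and deployment name here are placeholders, not values from this article; confirm the exact URL, API version, and body in the prediction URL window in Language Studio.

```python
import json

# Placeholder values -- replace with your own Language resource endpoint,
# project name, and deployment name.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
PROJECT_NAME = "<your-project-name>"
DEPLOYMENT_NAME = "<your-deployment-name>"

# Approximate request body for an orchestration workflow prediction:
# the utterance goes in analysisInput.conversationItem.text, and the
# parameters identify which project and deployment to query.
payload = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "1",
            "text": "<YOUR_QUERY_HERE>",
        }
    },
    "parameters": {
        "projectName": PROJECT_NAME,
        "deploymentName": DEPLOYMENT_NAME,
    },
}

print(json.dumps(payload, indent=2))

# To send the request (requires a real endpoint and key), you would POST
# this payload to the runtime, for example with the `requests` library:
#
#   requests.post(
#       f"{ENDPOINT}/language/:analyze-conversations",
#       params={"api-version": "<api-version-from-the-sample-request>"},
#       headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
#       json=payload,
#   )
```

The response contains the top-scoring intent, which for orchestration workflow projects indicates the connected project or model that should handle the utterance.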

Next steps