Deploy and test a conversational language understanding model
After you've trained a model on your dataset, you're ready to deploy it. Once deployed, the model can be queried for predictions.
Tip
Before deploying a model, view the model details to confirm that it's performing as expected.
Deploy model
Deploying a model hosts it and makes it available for predictions through an endpoint.
When a model is deployed, you can test it directly in the portal or by calling the API associated with it.
Conversation projects deployments
Click on Add deployment to submit a new deployment job.
In the window that appears, you can either create a new deployment by giving it a name, or overwrite an existing deployment name. Then you can assign a trained model to that deployment.
Swap deployments
If you would like to swap the models between two deployments, select one of the deployments and click on Swap deployments. In the window that appears, select the deployment name you want to swap with.
Delete deployment
To delete a deployment, select the deployment you want to delete and click on Delete deployment.
Tip
If you're using the REST API, see the quickstart and REST API reference documentation for examples and more information.
Note
You can have a maximum of ten deployment names.
Send a Conversational Language Understanding request
Once your model is deployed, you can begin using it for predictions. Outside of the test model page, you can call your deployed model via API requests to your custom endpoint. The endpoint returns the intent and entity predictions defined within the model.
You can get the full URL for your endpoint by going to the Deploy model page, selecting your deployed model, and clicking on "Get prediction URL".
Add your key to the Ocp-Apim-Subscription-Key header value, and replace the query and language parameters.
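The request described above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the `api-version` value and the exact body shape below follow the analyze-conversations REST API as documented at the time of writing, so check the REST API reference for the current values before relying on them.

```python
import json

def build_clu_request(endpoint, key, project_name, deployment_name,
                      query, language="en"):
    """Build the URL, headers, and JSON body for a CLU prediction request.

    The api-version string and body layout are assumptions based on the
    analyze-conversations REST API reference; verify them against the
    current documentation.
    """
    url = f"{endpoint}/language/:analyze-conversations?api-version=2022-10-01-preview"
    headers = {
        # Your Language resource key goes in this header.
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    body = {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "1",
                "text": query,          # the utterance to analyze
                "language": language,   # language of the utterance
            }
        },
        "parameters": {
            "projectName": project_name,
            "deploymentName": deployment_name,
        },
    }
    return url, headers, json.dumps(body)

# To actually send the request, pass the pieces to an HTTP client, e.g.:
# import requests
# url, headers, payload = build_clu_request(
#     "https://<your-resource>.cognitiveservices.azure.com",
#     "<your-key>", "<project>", "<deployment>", "Book a flight to Paris")
# response = requests.post(url, headers=headers, data=payload)
```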
Tip
As you construct your requests, see the quickstart and REST API reference documentation for more information.
Use the client libraries (Azure SDK)
You can also use the client libraries provided by the Azure SDK to send requests to your model.
Note
The client library for conversational language understanding is only available for:
- .NET
- Python
Go to your resource overview page in the Azure portal.
From the menu on the left side, select Keys and Endpoint. Use the endpoint for your API requests, and the key for the Ocp-Apim-Subscription-Key header.
Download and install the client library package for your language of choice:
- .NET: 5.2.0-beta.2
- Python: 5.2.0b2
After you've installed the client library, use the following samples on GitHub to start calling the API.
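A call through the Python client library might look like the sketch below. The class and method names follow the `azure-ai-language-conversations` 5.2.0b2 beta package and may change in later versions; the imports are kept inside the function so the file loads even where the package isn't installed.

```python
def analyze_with_sdk(endpoint, key, project_name, deployment_name, query):
    """Sketch of a CLU prediction call with the Azure Python SDK.

    Assumes the azure-ai-language-conversations beta package (5.2.0b2);
    verify names against the package's reference documentation.
    """
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.language.conversations import ConversationAnalysisClient

    client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))
    with client:
        # The task payload mirrors the REST API request body.
        result = client.analyze_conversation(
            task={
                "kind": "Conversation",
                "analysisInput": {
                    "conversationItem": {
                        "id": "1",
                        "participantId": "1",
                        "text": query,
                    }
                },
                "parameters": {
                    "projectName": project_name,
                    "deploymentName": deployment_name,
                },
            }
        )
    return result
```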
See the reference documentation for your client library for more information.
API response for a conversations project
In a conversations project, you'll get predictions for both the intents and entities present within your project.
- The intents and entities include a confidence score between 0.0 and 1.0 that indicates how confident the model is about predicting a certain element in your project.
- The top scoring intent is contained within its own parameter.
- Only predicted entities will show up in your response.
- Entities indicate:
- The text of the entity that was extracted.
- Its start location, denoted by an offset value.
- The length of the entity text, denoted by a length value.
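The points above can be seen in a parsed response. The sample payload below is illustrative (not captured from a live call), but its fields mirror the layout described: a `topIntent` parameter, per-intent and per-entity confidence scores, and entity `offset`/`length` values that index into the original query.

```python
# Illustrative response shaped like the documented prediction layout.
sample_response = {
    "result": {
        "query": "Book a flight to Paris",
        "prediction": {
            "projectKind": "Conversation",
            "topIntent": "BookFlight",   # top-scoring intent, in its own parameter
            "intents": [
                {"category": "BookFlight", "confidenceScore": 0.97},
                {"category": "Cancel", "confidenceScore": 0.02},
            ],
            "entities": [
                # Only predicted entities appear; each has text, offset, length.
                {"category": "Destination", "text": "Paris",
                 "offset": 17, "length": 5, "confidenceScore": 0.99},
            ],
        },
    }
}

query = sample_response["result"]["query"]
prediction = sample_response["result"]["prediction"]
top_intent = prediction["topIntent"]

# Recover each entity's span from the query using its offset and length.
entity_spans = []
for entity in prediction["entities"]:
    span = query[entity["offset"]:entity["offset"] + entity["length"]]
    entity_spans.append((entity["category"], span, entity["confidenceScore"]))
```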
Next steps