Testing example utterances in LUIS
Testing is the process of providing sample utterances to LUIS and getting a response of LUIS-recognized intents and entities.
You can test LUIS interactively, one utterance at a time, or provide a batch of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response.
What is a score in testing?
See Prediction score concepts to learn more about prediction scores.
Interactive testing is done from the Test panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting the intents and entities as you expect on an utterance in the testing panel, copy it to the Intent page as a new utterance. Then label the parts of that utterance for entities, and train LUIS.
See batch testing if you are testing more than one utterance at a time.
You can test using the endpoint with a maximum of two versions of your app. With your main or live version of your app set as the production endpoint, add a second version to the staging endpoint. This approach gives you three prediction responses for the same utterance: one from the current model in the Test pane of the LUIS portal, and one from each of the two versions at the two endpoints.
All endpoint testing counts toward your usage quota.
Do not log tests
If you test against an endpoint and do not want the utterance logged, remember to use the `logging=false` query string configuration.
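As a sketch of how this fits together, the snippet below builds an endpoint prediction URL for a chosen publishing slot with logging turned off. The region, app ID, and key are placeholders, and the v3 prediction route shape is an assumption to verify against the current LUIS endpoint reference; the `logging=false` parameter matches the configuration described above.

```python
from urllib.parse import urlencode

# Build a prediction URL for a given publishing slot, with logging off.
# Region, app ID, and key are placeholder values; the v3 route shape
# below is an assumption to verify against the LUIS endpoint reference.
def build_prediction_url(region, app_id, query, slot="production",
                         subscription_key="YOUR_ENDPOINT_KEY"):
    base = (f"https://{region}.api.cognitive.microsoft.com/"
            f"luis/prediction/v3.0/apps/{app_id}/slots/{slot}/predict")
    params = {
        "subscription-key": subscription_key,
        "query": query,
        "logging": "false",  # keep this test utterance out of the query log
    }
    return f"{base}?{urlencode(params)}"

# Query the staging slot without logging the utterance.
url = build_prediction_url("westus", "<app-id>", "book a flight",
                           slot="staging")
print(url)
```

Switching `slot` between `production` and `staging` lets you compare the two published versions against the same utterance.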
Where to find utterances
LUIS stores all logged utterances in the query log, which you can download from the Apps list page of the LUIS portal or retrieve through the LUIS authoring APIs.
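A minimal sketch of retrieving the query log programmatically is below. The v2.0 `querylogs` route and the `Ocp-Apim-Subscription-Key` header are assumptions to check against the current authoring API reference; the region, app ID, and authoring key are placeholders.

```python
# Sketch: build the request for downloading an app's query log via the
# LUIS authoring API. The "querylogs" route is an assumption to verify;
# region, app ID, and authoring key are placeholder values.
def query_log_request(region, app_id, authoring_key):
    url = (f"https://{region}.api.cognitive.microsoft.com/"
           f"luis/api/v2.0/apps/{app_id}/querylogs")
    headers = {"Ocp-Apim-Subscription-Key": authoring_key}
    return url, headers

# An HTTP GET on this URL with these headers returns the logged utterances.
url, headers = query_log_request("westus", "<app-id>", "<authoring-key>")
```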
Remember to train
Remember to train LUIS after you make changes to the model. Changes to the LUIS app are not seen in testing until the app is trained.
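Training can also be triggered outside the portal. As a sketch, the authoring API exposes a per-version train route; the v2.0 route shape below is an assumption to verify against the authoring API reference, and the IDs are placeholders.

```python
# Sketch: the authoring API's train route for one app version.
# A POST to this URL starts training; a GET on the same URL reports
# training status. Route shape is an assumption; IDs are placeholders.
def train_url(region, app_id, version_id):
    return (f"https://{region}.api.cognitive.microsoft.com/"
            f"luis/api/v2.0/apps/{app_id}/versions/{version_id}/train")

url = train_url("westus", "<app-id>", "0.1")
print(url)
```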
- Learn best practices.
- Learn more about testing your utterances.