January 2017

Volume 32 Number 1

[Cognitive Services]

Enable Natural Language Interaction with LUIS

By Ashish Sahu

The technological landscape has changed radically in recent years. Computing capabilities have moved from PCs to smartphones and wearables, even as these devices have grown more powerful. This advancement has also changed the way we interact with our devices: physical keyboards have been replaced by software implementations, and input methods have evolved from the stylus to a mere tap of the finger. It was only a matter of time before we started looking for even more effortless ways to interact with our computing devices.

Speech is how we interact with each other, and now we’re on the verge of using speech to interact with all our smart devices, as well. The recent launch of the Bot Framework and Cognitive Services at the Build 2016 conference is a step toward that vision. Among these cognitive services, Language Understanding Intelligent Service (LUIS) provides you with the capability to understand natural language queries and return actionable information that you can wire up in your code to improve the UX.

In this article, I’ll explore the capabilities of LUIS and look at different ways you can use it in your apps and services.

How Does LUIS Work?

LUIS is built on the interactive machine learning and language understanding research from Microsoft Research. The book “Machine Learning” (McGraw Hill, 1997) by Tom Mitchell defines machine learning as:

“A computer program is said to learn to perform a task T from experience E, if its performance at task T, as measured by a performance metric P, improves with experience E over time.”

Like any other machine learning system, LUIS is an application of this concept. It uses a language model and a set of training examples to parse spoken language and return only the interesting parts that you, as a developer, can use to delight your users.

With LUIS, apart from using your own purpose-specific language model, you can also leverage the same pre-existing and pre-built language models used by Bing and Cortana.

LUIS has a very specific use case—you can leverage LUIS anywhere you have a need to let users interact with your apps using speech. Most digital assistants, voice-enabled apps/devices, and bots fall into this category, but you’re free to use your imagination.

Where Can I Use LUIS?

Using LUIS with your apps and services requires some initial setup. The homework you need to complete consists of understanding the scenario and anticipating the interaction that’ll take place between the app and its users. Understanding and anticipating that interaction will help you build the language model to use with LUIS and come up with the basic natural utterances you’ll use to train LUIS to parse them.

Language Models

Language models are specific to your LUIS applications. They form the basis of understanding what users mean when they talk to your apps and services. A language model has two core parts: “intents” and “entities.” A LUIS application uses intents and entities to process natural language queries and derive the user’s intention and topics of interest, with help from training examples, also called “utterances.” Every LUIS application contains a default intent called “None”; this intent gets mapped to all the utterances that couldn’t be mapped to any other intent. In the context of an app, intents are the actions users intend to perform, while entities are the topics that your apps and services are designed to handle.

To understand this, imagine a shopping app with the following model:

"intents": [
  { "name": "ShowItems" },
  { "name": "BuyItems" }
],
"entities": [
  { "name": "Item" }
]

Most of the time spent in a shopping app might involve browsing items for sale, and when someone says, “Show me red scarves,” the model will map this utterance to the ShowItems intent and “red scarves” to the Item entity. At the same time, you can map an utterance such as “I would like to pay now” to the BuyItems intent and thus initiate the checkout process.


LUIS intents also support action binding, which lets you bind parameters to your intents. Actionable intents fire only when these parameters are provided, as well. In particular, when you use action binding with bots, LUIS can query the parameters from the users interactively.

Based on the examples and active learning, LUIS starts detecting the intents in the queries posted to it. However, because natural language queries are tricky for computer applications, LUIS also scores each result between 0 and 1 to denote its confidence; a higher score denotes higher confidence.
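To act on these confidence scores in client code, a common pattern is to pick the top-scoring intent and fall back to “None” when confidence is low. Here’s a minimal sketch in Python, assuming a list of intents parsed from the JSON response; the threshold value is an assumption you’d tune against your own model:

```python
# Hypothetical threshold; tune it against the scores your own model produces.
CONFIDENCE_THRESHOLD = 0.5

def confident_intent(intents, threshold=CONFIDENCE_THRESHOLD):
    """Return the top-scoring intent name, or 'None' if confidence is too low."""
    best = max(intents, key=lambda i: i["score"])
    return best["intent"] if best["score"] >= threshold else "None"
```

Falling back to “None” mirrors what the default intent already means inside LUIS: an utterance your model couldn’t confidently map anywhere else.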


LUIS entities, as explained here, are the topics of interest to your users. If you’re building a news app, entities map to news topics; in the case of a weather app, in its most basic form, they map to locations.

LUIS active learning also starts showing up when you add a new utterance and you can see the appropriate entities color-coded to show the mappings visually.

Entities can have child elements and you can independently map each of them to individual parts of the utterances. You also have support for composite entities, which are a collection of characteristics that collectively build up to a single entity.

For example, an entity called “Vehicles” can have child entities called “Cars” and “SUVs,” among others. This relationship helps you map multiple entities into a larger category. In the case of composite entities, the individual parts together denote one single entity with its various properties. An example of a composite entity called Car is “2016 Black Ford Mustang,” which is made up of year, color, make and model information.
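One way to picture the Car composite entity described above is as a simple record whose fields are its child parts. Here’s a sketch in Python; the class and field names are illustrative only and not part of the LUIS API:

```python
from dataclasses import dataclass

@dataclass
class Car:
    """Illustrative composite entity: one object built from several child parts."""
    year: int
    color: str
    make: str
    model: str

# "2016 Black Ford Mustang" decomposed into its parts:
mustang = Car(year=2016, color="Black", make="Ford", model="Mustang")
```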

Pre-Built Entities

Similar to the built-in data types in a programming language, the LUIS service includes a set of pre-built entities for a number of common concepts, so you don’t have to think about every possible term your users may throw at you. Examples include the most common variations of date, time, number and geographical entities. You can include the pre-built entities in your application and use them in your labeling activities. Keep in mind that the behavior of pre-built entities can’t be changed.

An exhaustive list of pre-built entities can be found in the LUIS documentation.

While it’s possible to add numerous intents and entities (and pre-built entities) to your model, a word of caution: keep it simple and precise. You can start with the most obvious utterances and add more of them over time to make the experience more natural for your users. Keep in mind that thinking ahead about the experience you want to build goes a long way toward enhancing the UX and evolving the experience further. If you don’t plan ahead and later change the intents or entities in your model, you’ll have to label all the utterances and train your model all over again.

Let’s Go Build Something!

It’s time to build something and take a ride with LUIS. In this article, I’ll look at an inventory application. I’ll build a language model using the intents and entities, train the model, and then use this in a bot powered by the Bot Framework and a Universal Windows Platform (UWP) app using the REST endpoint that LUIS exposes for me to use all its capabilities.

To keep it simple, we’re going to deal with an inventory of clothes. First, log on to the LUIS portal at luis.ai and create a new application.

This being an inventory application, you’ll use it to track inventory of stocks and for that purpose, the first intent that you’re going to add is ListItems. You’ll map this intent to all the utterances where the user’s intent is to query the inventory as a whole or for an item.

When you’re creating an intent, the LUIS application will also ask for an example utterance. This utterance is the first natural language query to trigger this intent. Click on the “+” button next to “Intents” and add the name of the intent as “ListItems.” You’ll keep the example utterance simple: “Show me the inventory.”

Saving this intent takes you to the “new utterance” screen. Figure 1 shows the example utterance along with the ListItems intent mapped to it within the dropdown menu next to it.

Figure 1 Example of Utterance and Intent

Click on the Submit button to add this utterance to your LUIS application. Before LUIS can start working its magic for you, you must add more such utterances to help LUIS understand the intents more accurately. Keep in mind that the utterances must project the same intent as the one in Figure 1, but at the same time, they should be something that users will say naturally when asking for stocks: “Show me the stocks” comes to mind.

Now that you’ve added two utterances to your application, click the Train button in the lower-left corner to see if the LUIS application has enough information to understand when to trigger the ListItems intent. The framework also triggers training periodically on its own. Training your model after adding a few examples can help you identify any problems with the model early and take corrective action. Because the LUIS framework also features active learning, you’ll benefit from training, as example utterances will be scored automatically for you as you add them.

Moving forward with your application, it’s also natural to ask about the inventory of certain items you’re stocking, so also think about examples such as “Show me the stocks of red scarves” and “How many shirts do we have in stock?”

However, these queries are different from the ones you’ve added so far. These queries contain the terms “red scarves” and “shirts.” This means you need more than your intent, ListItems, to return the right results back to your users. You need an entity called “Item,” which you’ll map to these terms to add more intelligence in your language model.

You can add these utterances to your LUIS application and label the entities later, but in this case, you’ll add the entity first and then the utterances. Click on the “+” button next to Entities and name this entity Item.

Now, you can add those queries mentioned earlier and label them with the intent and entity at the same time. To do that, just add your utterance and if the intent hasn’t already been mapped with a score, select the relevant intent and then click on the term “shirts” to map it with the Item entity.

Select Item from the list to label it as an item. Add the other example already mentioned—“Show me the stocks of red scarves”—and instead of mapping just “scarves,” select both “red” and “scarves” as the entity Item. Note: At the time of this writing, the Edge browser doesn’t let you select multiple words in the LUIS portal; use another browser of your choice for this step.

Also note that the term “red scarves” falls into the category of composite entities because the words together denote one single entity—scarves that are red. As explained earlier, composite entities are made up of multiple parts but represent a single object, such as “black shoes” or “2016 Ford Mustang.” For the sake of simplicity, however, you’re going to treat “red scarves” as a single entity.

Train the model again and see if the active learning in LUIS kicks in. Now try adding an utterance such as, “How many wallets do we have in stock,” or, “Show me the stocks of trousers.”

You might find the result interesting. Notice that the term “wallets” gets mapped to the Item entity but “trousers” doesn’t. Don’t panic; it just means that LUIS needs a few more examples to make sense of utterances that follow the same pattern. To fix this, map “trousers” to the Item entity and train your model one more time.

To test this, try adding “Show me the stocks of red shirts” or “Show me the stocks of pants” and verify that red shirts and pants get mapped to the right intents and entities. I hope your mileage doesn’t vary from mine so far.

Using the Suggest section in the portal, you can also get suggestions from the Cortana logs for individual intents and entities.

Once your intents and entities are getting mapped correctly, you can move on to the next phases of your journey on LUIS.

Using LUIS with Real Apps

This LUIS application isn’t useful to your users yet; you need to connect to it from your apps and services. Because the LUIS application is exposed via REST endpoints and its responses are returned in JSON format, you can use LUIS from any platform or programming language that can connect over HTTPS and parse JSON responses.
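For instance, a non-.NET client can call the same endpoint with plain HTTP. The following Python sketch builds the request URL for the v1 endpoint used later in this article and extracts the top intent from the parsed JSON response; the app ID and key are placeholders, and error handling is omitted for brevity:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

LUIS_ENDPOINT = "https://api.projectoxford.ai/luis/v1/application"

def build_luis_url(app_id, key, query):
    """Build the GET URL for a LUIS v1 query, encoding all parameters."""
    params = urlencode({"id": app_id, "subscription-key": key, "q": query})
    return f"{LUIS_ENDPOINT}?{params}"

def top_intent(response):
    """Return the name of the highest-scoring intent, or None if there isn't one."""
    intents = response.get("intents", [])
    return max(intents, key=lambda i: i["score"])["intent"] if intents else None

def query_luis(app_id, key, query):
    """Call the LUIS endpoint and return the parsed JSON response."""
    with urlopen(build_luis_url(app_id, key, query)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Using `urlencode` here matters: natural language queries are full of spaces and punctuation that must be escaped before they go on the wire.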

Note: The LUIS portal also exposes export functionality from the “My Applications” section, which exports your LUIS application as a JSON document so you can make changes offline and import it back. In combination with the LUIS APIs and the C# SDK, you can integrate LUIS into your DevOps processes, as well.

You also need to publish your LUIS app before you can start calling it from your apps, which is as simple as it gets: Just click on the Publish button and click again on the Publish Web service button.

Notice that the REST endpoint URI also includes your LUIS application ID and subscription key. Protect this information as you would any other credentials; leaking it can lead to disruption of the service and have a financial impact.

Once the application has been published, you can test the accuracy of your model by typing an example into the Query input box. Try entering “how many ties do we have in the stock?” and press Enter on your keyboard.

This will open a new browser window and you should get a response in the JSON format as shown in Figure 2.

Figure 2 Testing the LUIS App

{
  "query": "how many ties do we have in the stock?",
  "intents": [{
    "intent": "ListItems",
    "score": 0.9999995
  }, {
    "intent": "None",
    "score": 0.0582637675
  }],
  "entities": [{
    "entity": "ties",
    "type": "Item",
    "startIndex": 9,
    "endIndex": 12,
    "score": 0.903107
  }]
}
The response includes the query string passed to the LUIS app, along with the intents and entities detected in the query, and individual scoring information for each of them. These scores are important because they’re direct indicators of how your language model and training are performing. As you add more utterances and make changes to your model, the portal also provides you with an option to publish your updates. Publishing your LUIS application after every training session is important; otherwise, it’ll keep using the older training model and the response from the HTTP endpoint will differ from your expectations.

Analyzing Performance of Language Model

Adding too many variations of the language can result in errors and might force you to change your language model. To address these issues, the LUIS portal features a Performance Analysis section. You can use this section to understand how your LUIS app is performing when it comes to detecting intents and entities. You can get a color-coded performance overview of all of your intents and entities in this section.

Depending on the training, examples, and language model used, your LUIS app might also run into issues where it’s unable to map intents or entities correctly. There might also be cases where adding multiple types of utterances confuses the LUIS service. These issues can easily be tracked with the performance drill-down in Performance Analysis. The dropdown menu also lets you drill down to individual intents and entities.

You can also get similar information for the entities in your language model.

This information, along with the Review Labels section of the portal, can help you look at and analyze any errors with your language model.

Calling LUIS From C# UWP/ASP.NET Apps

If you’re building a UWP app or ASP.NET Web app using C#, you can use the classes denoted in Figure 3 to deserialize the JSON response.

Figure 3 Classes to Deserialize the JSON Response

public class LUISResponse
{
    public string query { get; set; }
    public lIntent[] intents { get; set; }
    public lEntity[] entities { get; set; }
}

public class lIntent
{
    public string intent { get; set; }
    public float score { get; set; }
}

public class lEntity
{
    public string entity { get; set; }
    public string type { get; set; }
    public int startIndex { get; set; }
    public int endIndex { get; set; }
    public float score { get; set; }
}

The code in Figure 4 in your C# UWP or ASP.NET app can use these classes to get the intent and entities information.

Figure 4 Code Used to Get Intent and Entities Information

private async Task LUISParse(string queryString)
{
    using (var client = new HttpClient())
    {
        // Encode the query so spaces and punctuation survive the round trip.
        string uri =
          "https://api.projectoxford.ai/luis/v1/application?id=<YOUR LUIS APP ID>" +
          "&subscription-key=<YOUR LUIS APP KEY>&q=" +
          Uri.EscapeDataString(queryString);
        HttpResponseMessage msg = await client.GetAsync(uri);
        if (msg.IsSuccessStatusCode)
        {
            var jsonResponse = await msg.Content.ReadAsStringAsync();
            var _Data = JsonConvert.DeserializeObject<LUISResponse>(jsonResponse);
            var entityFound = _Data.entities[0].entity;
            var topIntent = _Data.intents[0].intent;
        }
    }
}
Based on your requirements, you can run the response through a loop to extract multiple entities of different types, as well as score information about the intents detected in the query string.
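That loop might look like the following Python sketch, which groups the detected entities by type and keeps their scores; the field names follow the JSON response shown in Figure 2:

```python
from collections import defaultdict

def entities_by_type(response):
    """Group detected entities by their type, keeping (entity, score) pairs."""
    grouped = defaultdict(list)
    for e in response.get("entities", []):
        grouped[e["type"]].append((e["entity"], e["score"]))
    return dict(grouped)
```

With entities grouped this way, a handler for the ListItems intent can look up all Item entities in one step instead of scanning the raw array.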

Using LUIS with Bot Framework

If you’re using Bot Framework to build a bot and are looking to use LUIS to add natural language intelligence, you’ll be pleased to know that the Microsoft.Bot.Builder namespace in the Bot SDK makes it extremely easy to connect to your LUIS application and filter out the intents and entities. In the MessagesController of your Bot Framework solution, add the following line to route all incoming messages to a class called LuisConnect:

await Microsoft.Bot.Builder.Dialogs.Conversation.SendAsync(activity,
  () => new LuisConnect());

Now add a class file called LuisConnect.cs in your project and change the code, as shown in Figure 5.

Figure 5 Adding the Class File LuisConnect.cs

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

namespace BotApp2
{
    [LuisModel("<application-id>", "<subscription-key>")]
    [Serializable]
    public class LuisConnect : LuisDialog<object>
    {
        [LuisIntent("None")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message =
              "I'm sorry, I didn't understand. Try asking about stocks or inventory.";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("ListItems")]
        public async Task ListInventory(IDialogContext context, LuisResult result)
        {
            string message = "";
            if (result.Entities.Count != 0 && result.Intents.Count != 0)
            {
                message = $"Detected the intent \"{result.Intents[0].Intent}\" " +
                  $"for \"{result.Entities[0].Entity}\". Was that right?";
            }
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        // LuisDialog supplies StartAsync, so no override is needed here.
    }
}
Run your bot locally and try asking questions such as, “Show me the stocks of shirts,” or, “How many belts do we have in stock?” and you should get the appropriate responses with the intents and entities back from the bot.

The most interesting part of the code in Figure 5 is that you just have to label your methods with the [LuisIntent] attribute, and the SDK takes care of calling the LUIS application and getting results back from the LUIS service. This makes it really quick and simple to start adding language intelligence to your apps.

Making It Better

The focus of this article is to make you familiar with the workings of LUIS and integration so I’ve used really simple examples. There are two more features of LUIS that are bound to make your life easier: Regex and Phrase List features.

As the name suggests, the Regex feature helps match a repetitive pattern in your phrases, such as product codes. The Phrase List feature can be used as an interchangeable list of words or phrases to look for in your utterances. For example, in this application there are utterances that start with “Show me the stocks,” “Find me the stocks,” “How many” and so on. Adding these phrases up front to a Phrase List called InventoryQueries removes the need to train your model separately with more examples for each of these utterances. I’ll leave that to you to explore and experience.
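To get a feel for the kind of pattern the Regex feature matches, here’s an ordinary regular expression for a hypothetical product-code format—three letters, a dash and five digits. The format itself is invented for illustration; it isn’t part of the sample inventory model:

```python
import re

# Hypothetical product-code format, e.g., "ABC-12345".
PRODUCT_CODE = re.compile(r"\b[A-Z]{3}-\d{5}\b")

def find_product_codes(utterance):
    """Return every product code mentioned in an utterance."""
    return PRODUCT_CODE.findall(utterance)
```

This is exactly the kind of rigid, repetitive token that’s wasteful to teach a statistical model one example at a time, which is why handing it to the Regex feature makes sense.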

The Future

The LUIS offering is ready to be used in your apps, but it’s still being improved and new features are added frequently. There are some features that aren’t covered in this article but are available for public preview. They’re exciting, though still in development:

  • Integration with Bot Framework and Slack: You can try this out when publishing your LUIS app in the preview portal. This integration lets you quickly connect LUIS with Microsoft Bot Framework and Slack.
  • Dialog Support: Dialog support in LUIS lets you add conversational intelligence to your LUIS application so it can ask users for more information on its own when a query requires more than they provided at first. For example, a flight app can prompt for a travel date if the user asks for flight information with just a city name.
  • Action Fulfillment: This feature lets you fulfill user-triggered actions using the built-in and custom channels right from your LUIS app.

These features are exciting and enable more natural conversational interaction in your app with little effort. They deserve in-depth exploration on their own and I hope to do that soon.

Wrapping Up

I hope you now understand what LUIS can do for you and how effortlessly you can start leveraging it to add a more natural human interaction element to your apps.

In this article, I went through the basics of the LUIS service. I created a LUIS application, then built and trained a language model to help it understand what users mean when they ask for something. I also looked at the ways this LUIS application can be used from your apps, Web services and bots. A sample project containing the LUIS model, the UWP app and the bot sample code mentioned in this article can be found on GitHub at bit.ly/2eEmPsy.

Ashish Sahu is a senior technical evangelist, working with Developer Experience at Microsoft India, and helping ISVs and startups overcome technical challenges, adopt latest technologies, and evolve their solutions to the next level. He can be contacted at ashish.sahu@microsoft.com.

Thanks to the following Microsoft technical expert for reviewing this article: Srikantan Sankaran
Srikantan Sankaran is a technical evangelist from the DX team in India, based out of Bangalore. He works with numerous ISVs in India and helps them architect and deploy their solutions on Microsoft Azure.

Discuss this article in the MSDN Magazine forum