Volume 33 Number 12
Gain Insight from Conversations Using Process Mining with LUIS
By Zvi Topol
The Language Understanding Intelligent Service (LUIS) is a Microsoft Cognitive Services API that offers natural language understanding as a service for developers. As part of that service, when given user input, called an utterance, LUIS returns the intent detected behind the utterance, that is, what the user intends to ask about. In a previous article (msdn.com/magazine/mt847187), I discussed how to improve LUIS intent detection by leveraging open source tools that provide word-level insights into how intents are classified.
In this article, I’m going to take a step or two up the hierarchy and move from words to entire conversations. In particular, I’ll focus on how to get insights from conversational data, by which I mean data that’s composed of sequences of utterances that collectively make a conversation.
To that end, I’ll introduce process mining, a field of analytics specializing in deriving insights from sequences of events. I will then show how to transform conversational data, which is innately unstructured, into a structured dataset by applying LUIS to each utterance in a conversation. Finally, I’ll use process mining on the transformed, structured dataset to derive insights about the original conversations.
Let’s get started.
What Is Process Mining?
Process mining is a field of data analytics whose primary focus is the discovery of processes (sequences of events or states with a specific outcome) and the delivery of insights about such processes from event log data. Event log data is a dataset that specifies for each event the timestamp at which it occurred, along with other possible fields or attributes describing the event. (More on that later.)
Domains to which process-mining tools have been applied include financial services, health care and manufacturing. Examples include analyzing mortgage application processes to identify bottlenecks and delays in customer applications, monitoring patient status across departments and wards, all the way from admission through release, and improving part-production processes on a manufacturing line.
Scaling the algorithms that power process mining so they can handle large datasets of events has been a critical area of focus in both academia and industry. Another challenge hampering the large-scale adoption of process mining is the fact that in many organizations, data isn’t currently collected and formatted in a way that can be properly consumed by process-mining software. I expect this to change as more organizations become aware of the value of digital transformation and data-driven decision making.
Process mining has been an active research area for many years and, therefore, has benefited from the development of many algorithms that enable the discovery of processes and the derivation of insights. In this context, it’s worth mentioning Prof. Wil van der Aalst (bit.ly/2QNBvEW), currently at RWTH Aachen University, who is considered one of the founding fathers of the field. Prof. van der Aalst is also the author of “Process Mining: Data Science in Action” (Springer, 2016), one of the important textbooks in the field and a good starting point if you’d like to deepen your knowledge of process-mining algorithms.
As far as mature commercial process-mining software products go, there are a number of companies with useful offerings, including Celonis (celonis.com), Fluxicon (fluxicon.com) and Minit (minit.io). There are also a few open source products, in particular ProM (promtools.org/doku.php), which offers integration with data mining tools such as RapidMiner (rapidprom.org).
In addition to standalone process-mining products, you’ll also find packages and libraries for popular data analysis languages such as R and Python that provide programmatic interfaces to process-mining functionality. Some examples include edeaR, which provides exploratory and descriptive event-based data analysis in R (bit.ly/2OSGozv), and PMLab, an interactive environment written in Python that includes building blocks for programmatically applying process-mining techniques (github.com/pmlab/pmlab-full).
In this article, I’ll use Disco as the primary process-mining tool. You can download a trial version from fluxicon.com/disco. Disco has an intuitive UI, is easy to use and has a comprehensive set of features. It’s also a popular tool in various online introductory classes on process mining.
While I’ll show how to derive insights from conversational data using Disco, I won’t fully cover all its features. For that, you’ll want to take a look at the more comprehensive manual that can be downloaded from bit.ly/2CHtfnc.
Event Log Data
Now that I’ve introduced process mining, I want to delve into something I mentioned briefly earlier: event log data. But what exactly is event log data or, more usefully in this case, to what types of event log datasets can process mining be applied?
Process mining requires event log datasets with at least the following three fields present in the data:
- Case ID, which is the ID of an object that goes through the different events.
- Activity ID, which defines the type of events that can happen to an object.
- Timestamp, which is the date and time of a specific event that happened to a given object.
Of course, the dataset can include additional fields that specify further information about the object. In what follows, I’ll explain how to represent conversational data as event log data.
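To make this concrete, here’s a minimal sketch in Python (standard library only, with made-up case IDs, activities and timestamps, not the article’s dataset) of how an event log with these three fields can be parsed and grouped into per-case event sequences:

```python
import csv
import io
from datetime import datetime

# A minimal event log: every row has a Case ID, an Activity ID and a Timestamp.
# The cases, activities and times below are invented for illustration.
EVENT_LOG_CSV = """CaseID,ActivityID,Timestamp
42,ApplicationReceived,2018-04-01 09:00:00
42,CreditCheck,2018-04-01 09:15:00
42,OfferSent,2018-04-02 11:30:00
43,ApplicationReceived,2018-04-01 10:05:00
43,CreditCheck,2018-04-01 10:20:00
"""

def read_event_log(csv_text):
    """Parse CSV text into a list of event dicts with real datetime objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["Timestamp"] = datetime.strptime(row["Timestamp"], "%Y-%m-%d %H:%M:%S")
    return rows

def events_by_case(rows):
    """Group Activity IDs by Case ID, ordered by timestamp within each case."""
    cases = {}
    for row in sorted(rows, key=lambda r: r["Timestamp"]):
        cases.setdefault(row["CaseID"], []).append(row["ActivityID"])
    return cases

log = read_event_log(EVENT_LOG_CSV)
print(events_by_case(log))
# Case 42 traces ApplicationReceived -> CreditCheck -> OfferSent
```

Grouping events by Case ID and ordering them by Timestamp is essentially the first step any process-mining tool performs before it can discover a process.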
Getting Conversational Data Ready for Process Mining
As part of the overall approach, I’m going to represent conversations as processes. Each process case or instance is, therefore, a specific conversation, and the intents of the different utterances in each conversation then become the different events or states of the process. Later on, I’ll show how this can be helpful in deriving insights from the conversations.
In order to make things more concrete, I’ll use an example of conversational data from the financial technology space. In the download accompanying this article you’ll find a CSV file containing 10 different simulated conversations of users with a chatbot that’s providing assistance regarding mortgages. Bear in mind that this conversational data is limited and contains only the necessary fields to which I’ll apply process mining. In addition, for simplicity, I’ve chosen to include only the user utterances, not the system responses. If you wanted, you could decide to include the system responses or any other data you think is related, such as information pertaining to the chat sessions, user data and so on.
The fields included in the CSV dataset are:
- ConversationID, which uniquely identifies the conversation and is mapped here to the Case ID.
- Utterance, which is the user’s utterance and is essentially unstructured text data to which LUIS is applied to identify intents.
- TimeStamp, which is the timestamp for a given Conversation ID/Utterance pair. I’ll use this field later as my timestamp for process mining.
- Intent, which is an intent identified by LUIS. This field will be mapped as the Activity ID for process mining.
Note that while it’s possible to use LUIS to identify both intents and entities (references to real-world objects that appear in an utterance), I’ve chosen, for simplicity again, to focus on intents only. The following intents are included in the data:
- GreetingIntent: a greeting or conversation opener.
- ExplorationIntent: a general exploratory utterance made by the user.
- OperatorRequestIntent: a request by the user to speak with a human operator.
- SpecificQuestionIntent: a question from the user about mortgage rates.
- ContactInfoIntent: contact information provided by the user.
- PositiveFeedbackIntent: positive feedback provided by the user.
- NegativeFeedbackIntent: negative feedback provided by the user.
- EndConversationIntent: ending of the conversation with the bot initiated by the user.
For the conversation shown in Figure 1, for example, the following intents were identified for each utterance (in order): ExplorationIntent, SpecificQuestionIntent, PositiveFeedbackIntent, ContactInfoIntent and EndConversationIntent. In this way, the original conversational data is transformed into sequences of intents. The result of this transformation will be used as input to Disco.
Figure 1 Conversation with ConversationId 3
| ConversationID | TimeStamp | Utterance |
| --- | --- | --- |
| 3 | 2018-04-02 14:03:02 | Can you please give me quotes of mortgage rates |
| 3 | 2018-04-02 14:03:05 | I am interested in 15 years only for refinance |
| 3 | 2018-04-02 14:03:08 | Excellent, I would be interested to proceed |
| 3 | 2018-04-02 14:04:12 | My email is firstname.lastname@example.org |
| 3 | 2018-04-02 14:04:15 | Thank you, I will be waiting to hear back |
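The intent labels come from calling the LUIS prediction endpoint for each utterance. The sketch below shows one way to do that from Python’s standard library against the LUIS v2 REST endpoint; the region, app ID, subscription key and the score in the sample response are placeholder values, not values from the article’s dataset:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values -- substitute your own LUIS region, app ID and key.
LUIS_REGION = "westus"
LUIS_APP_ID = "<your-app-id>"
LUIS_KEY = "<your-subscription-key>"

def luis_url(utterance):
    """Build the LUIS v2 prediction URL for a single utterance."""
    query = urllib.parse.urlencode({"subscription-key": LUIS_KEY, "q": utterance})
    return ("https://%s.api.cognitive.microsoft.com/luis/v2.0/apps/%s?%s"
            % (LUIS_REGION, LUIS_APP_ID, query))

def top_intent(response_json):
    """Extract the top-scoring intent name from a LUIS v2 response."""
    return response_json["topScoringIntent"]["intent"]

def detect_intent(utterance):
    """Call LUIS over HTTP and return the top intent (needs network and keys)."""
    with urllib.request.urlopen(luis_url(utterance)) as resp:
        return top_intent(json.load(resp))

# An abridged sample response for the first utterance in Figure 1.
# The score is made up; the intent matches the article's labeling.
sample = {
    "query": "Can you please give me quotes of mortgage rates",
    "topScoringIntent": {"intent": "ExplorationIntent", "score": 0.97},
}
print(top_intent(sample))  # ExplorationIntent
```

Running detect_intent over every row of the CSV and writing the result into the Intent column is all it takes to produce the structured dataset used in the rest of the article.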
Applying Process Mining to Conversational Data Using Disco
To apply Disco, you’ll have to import the CSV file with the conversational data and map the three required fields described earlier: ConversationID to the Case ID, Intent to the Activity ID and TimeStamp to the Timestamp. This is shown in Figure 2.
Figure 2 Importing Conversational Data into Disco
After importing the CSV, Disco will give you access to three views that I’ll go through one at a time: Map, Statistics and Cases. The first view, Map, is the map of the process discovered by Disco from the conversational data. The map is a graphical representation of the different transitions in the process between the events, as well as frequencies and repetitions of different events. In this case, those are the different transitions between the intents. Figure 3 shows the map discovered by Disco for the conversations.
Figure 3 Map View in Disco
From Figure 3, you can get a general overview of the conversations and see that they can start in one of three ways: a greeting, an operator request or a mortgage-specific question, with mortgage-specific questions being very frequent. Most conversations end with an EndConversationIntent, but a few end with other intents, such as greetings and negative feedback. Conversations ending with negative feedback, in particular, can point to outliers that may require more attention.
Moreover, transitions between different intents can also provide very useful information for deriving insights. For example, it may be possible to determine whether specific utterances or intents tend to lead to the intent representing negative feedback. It might then be desirable to steer conversations away from that path.
As mentioned earlier, information about repetitions of both intents and transitions is readily available as part of the Map view. In particular, you can see that the two most common intents in this case are SpecificQuestionIntent and EndConversationIntent, and that transitions from the former to the latter are very common. This provides a good summary at a glance regarding the content of the conversations.
It can also present an opportunity to improve conversations by breaking down SpecificQuestionIntent and EndConversationIntent into finer-grained intents that capture more insightful aspects of the user interaction. This should be followed by retraining LUIS and reapplying process mining to the modified conversational data.
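The intent and transition frequencies that the Map view visualizes are also straightforward to compute yourself. The following sketch uses made-up intent sequences (not the article’s dataset) and Python’s collections.Counter:

```python
from collections import Counter

# Made-up intent sequences standing in for real conversations.
conversations = [
    ["GreetingIntent", "SpecificQuestionIntent", "EndConversationIntent"],
    ["SpecificQuestionIntent", "EndConversationIntent"],
    ["SpecificQuestionIntent", "NegativeFeedbackIntent", "OperatorRequestIntent"],
]

# Count how often each intent occurs across all conversations.
intent_counts = Counter(intent for conv in conversations for intent in conv)

# Count each intent-to-intent transition (consecutive pairs within a conversation).
transition_counts = Counter(
    (a, b) for conv in conversations for a, b in zip(conv, conv[1:])
)

print(intent_counts.most_common(2))
print(transition_counts[("SpecificQuestionIntent", "EndConversationIntent")])  # 2
```

Counting transitions this way also makes it easy to flag which intents most often precede NegativeFeedbackIntent, which is exactly the kind of question raised above.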
Now let’s take a look at the Statistics view, which is depicted in Figure 4. Here you can get insights about the duration of the conversations. Note that the conversations are now grouped into what Disco calls “Variants,” which are essentially similar conversation flows that share the same intents and transitions between intents. The Overview part of the Statistics view provides summary statistics, such as the median and mean, as well as information about the end-to-end durations of different conversations. This can be useful for identifying outliers, such as extremely short conversations, and for cross-checking against potentially problematic conversations spotted in the Map view. It’s also possible to identify conversations with longer durations; in the example used here, those are likely to be successful conversations.
Figure 4 Overview and Summary Statistics in Disco
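For intuition, a variant grouping like Disco’s can be sketched in a few lines of Python: a variant is simply the ordered tuple of intents in a conversation, and the end-to-end duration is the gap between a conversation’s first and last timestamps. The events below are made up, not taken from the article’s dataset:

```python
from collections import defaultdict
from datetime import datetime

# Each event: (conversation ID, timestamp, intent). Values are invented.
events = [
    ("1", "2018-04-02 14:03:02", "SpecificQuestionIntent"),
    ("1", "2018-04-02 14:04:15", "EndConversationIntent"),
    ("2", "2018-04-02 15:00:00", "SpecificQuestionIntent"),
    ("2", "2018-04-02 15:02:30", "EndConversationIntent"),
]

def to_dt(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

# Group events by conversation ID.
by_conv = defaultdict(list)
for conv_id, ts, intent in events:
    by_conv[conv_id].append((to_dt(ts), intent))

variants = defaultdict(list)  # variant (tuple of intents) -> conversation IDs
durations = {}                # conversation ID -> end-to-end duration in seconds
for conv_id, rows in by_conv.items():
    rows.sort()  # order events within the conversation by timestamp
    variants[tuple(intent for _, intent in rows)].append(conv_id)
    durations[conv_id] = (rows[-1][0] - rows[0][0]).total_seconds()

print(dict(variants))  # both conversations share one two-intent variant
print(durations)       # {'1': 73.0, '2': 150.0}
```

With variants and durations in hand, summary statistics such as the median or mean conversation duration fall out directly.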
In order to dive deeper into conversations that exhibit interesting behaviors, for example, unusually long or short conversations or conversations with certain intent structures, you can use Disco’s powerful filtering capabilities. At any given point, Disco allows you to filter the overall dataset by various dimensions. As an example, you can identify the conversation IDs that are of specific interest, apply the appropriate filter and then look at the different views in a filtered mode. This allows you to identify patterns common to the filtered conversations.
Disco also lets you look at the conversations at the intent level (by using the Activity section of the Statistics view), specifying different summary statistics pertaining to the intents. This is depicted in Figure 5. Here you get a distribution of the different intents in your conversations and can see that, fortunately, the negative feedback intent comprises only about 3 percent of the intents in your conversations.
Figure 5 Summary Statistics About Intents in Disco
The final analysis view to cover is the Cases view, which shows the different conversations based on their variant types. For example, in Figure 6 you can see a specific conversation, with conversation ID 9, that belongs to a variant with two intents: SpecificQuestionIntent and EndConversationIntent. This view is very useful for comparing conversations having similar structures. It can help you to learn, for example, if there are any patterns you can adopt that would help make conversations more successful or, if you happen to find unexpected differences, it can help you discover what’s causing them.
Figure 6 The Cases View in Disco
Before wrapping up, I want to note that the insights and the features of Disco I’ve reviewed so far are not comprehensive, but rather demonstrate the benefits of applying process-mining techniques to conversations modeled as processes.
As mentioned earlier, you can explore using many additional fields as part of your activity representation. You may want to include, for example, information about specific entities in user utterances; the responses of your conversational interface; or data about your users, such as locations, previous interactions with the system, and so on. Such rich representations will enable you to enhance the depth of insights from your conversational data and build better conversational systems. I strongly encourage you to explore more.
I’ve introduced process mining and have shown how to leverage that technology in conjunction with LUIS to derive insights from conversational data. In particular, LUIS is applied to the different utterances in the conversations to transform unstructured utterance text into structured intent labels. Then, by mapping conversation IDs, timestamps and intents to process-mining fields, and by using Disco, I showed how to apply process mining to the structured conversational data. Using the different views provided by Disco, from process discovery that shows overall conversation structure to grouping of conversations into different variants, it’s possible to derive insights from the transformed conversational data, such as what makes conversations successful and how to use that knowledge to improve conversations that are less so.
Keep in mind that this article just scratches the surface of what’s possible when applying process mining to conversational data. I hope that using the resources presented in this article, along with others you may find along the way, will enable you to leverage process mining to create better, more compelling conversational interfaces.
Zvi Topol has been working as a data scientist in various industry verticals, including marketing analytics, media and entertainment, and Industrial Internet of Things. He has delivered and led multiple machine learning and analytics projects, including natural language and voice interfaces, cognitive search, video analysis, recommender systems and marketing decision support systems. Topol is currently with MuyVentive, an advanced analytics R&D company, and can be reached at email@example.com.
Thanks to the following Microsoft technical expert who reviewed this article: Sandeep Alur