Introduction to adaptive dialogs


Adaptive dialogs offer a new event-based addition to the Dialogs library that enables you to easily layer in sophisticated conversation management techniques like handling interruptions, dispatching, and more.


Adaptive dialogs is currently only available in the .NET version of the Bot Framework SDK. You can find sample bots built using adaptive dialogs in the BotBuilder-Samples repository on GitHub.

Prerequisites
  • An understanding of dialogs in the Bot Framework V4 SDK
  • A general understanding of prompts in the Bot Framework V4 SDK

Adaptive dialogs defined

Why adaptive dialogs

Adaptive dialogs have many advantages over waterfall dialogs. Primarily, they:

  • Provide flexibility that enables you to dynamically update conversation flow based on context and events. This is especially handy when dealing with conversation context switches and interruptions in the middle of a conversation.
  • Support and sit on top of a rich event system for dialogs, so modeling interruptions, cancellation, and execution planning semantics are a lot easier to describe and manage.
  • Bring together input recognition and rule-based event handling.
  • Combine the conversation model (dialog) and output generation into one cohesive, self-contained unit.
  • Support extensibility points for recognition, event rules and machine learning.
  • Were designed to be declarative from the start. This enables tooling, including products like Bot Framework Composer, which provides a visual canvas to model conversations.

Anatomy of an adaptive dialog


Triggers
All adaptive dialogs contain a list of one or more event handlers called triggers, and each trigger contains an optional Condition and a list of one or more actions. Triggers enable you to catch and respond to events: if a trigger's Condition is met, its Actions execute, and once an event is handled, no further action is taken on that event. If the trigger's Condition is not met, the event is passed to the next trigger for evaluation.

See the Events and triggers in adaptive dialogs article for more information on triggers in adaptive dialogs.
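The trigger structure described above can be sketched in C#. This is a minimal, illustrative fragment assuming the Microsoft.Bot.Builder.Dialogs.Adaptive package; the intent name, score threshold, and child dialog id are assumptions, not part of the SDK:

```csharp
using System.Collections.Generic;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Adaptive;
using Microsoft.Bot.Builder.Dialogs.Adaptive.Actions;
using Microsoft.Bot.Builder.Dialogs.Adaptive.Conditions;

var rootDialog = new AdaptiveDialog("rootDialog")
{
    Triggers = new List<OnCondition>
    {
        new OnIntent
        {
            Intent = "BookFlight",
            // Optional Condition: the trigger fires only when this evaluates to true.
            Condition = "turn.recognized.score > 0.7",
            Actions = new List<Dialog> { new BeginDialog("bookFlightDialog") }
        },
        // Catch-all: fires when no other trigger handles the event.
        new OnUnknownIntent
        {
            Actions = new List<Dialog> { new SendActivity("Sorry, I didn't get that.") }
        }
    }
};
```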

Actions
Actions define the conversation flow when a specific event is captured via a trigger. Unlike a waterfall dialog, where each step is a function, each action in an adaptive dialog is itself a dialog. This makes adaptive dialogs both powerful and flexible and enables them to easily handle interruptions and branch conditionally based on context or current state.

The Bot Framework SDK provides many built-in actions that enable you to perform tasks such as memory manipulation, dialog management, and controlling the conversational flow of your bot. Since actions are in fact dialogs, they are extensible, making it possible to create your own custom actions.

See the Actions in adaptive dialogs article for more information on actions in adaptive dialogs.
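Because each action is itself a dialog, actions compose and branch. The following fragment of a trigger's Actions list sketches a conditional branch; the user.age property and the messages are illustrative assumptions:

```csharp
// Fragment of a trigger's Actions list; user.age is an assumed memory property.
Actions = new List<Dialog>
{
    new IfCondition
    {
        Condition = "user.age >= 18",
        Actions = new List<Dialog> { new SendActivity("Proceeding with your booking.") },
        ElseActions = new List<Dialog> { new SendActivity("You must be 18 or older to book a flight.") }
    },
    // End the current dialog and return control to the caller.
    new EndDialog()
}
```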

Inputs
Inputs are to adaptive dialogs what prompts are to the base dialog class. Inputs are specialized actions that you can use in an adaptive dialog to request and validate information from a user and, if the validation passes, accept the input into memory. All input classes in the Bot Framework SDK are designed to do the following:

  • Perform existential checks before prompting, to avoid prompting for information the bot already has.
  • Save the input to the specified property if it matches the type of entity expected.
  • Accept constraints, such as min and max values.

See the Asking for user input using adaptive dialogs article for more information on inputs in adaptive dialogs.
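The behaviors in the list above can be sketched with a TextInput action; in this fragment, the property name and the length constraint are illustrative assumptions:

```csharp
// Fragment: a TextInput action inside a trigger's Actions list.
new TextInput
{
    // Where the validated input is stored in memory.
    Property = "user.destinationCity",
    Prompt = new ActivityTemplate("What is your destination city?"),
    // Existential check: with AlwaysPrompt false, the prompt is skipped
    // when user.destinationCity already has a value.
    AlwaysPrompt = false,
    // Constraint: reject answers longer than 50 characters (an illustrative rule).
    Validations = new List<BoolExpression> { "length(this.value) <= 50" },
    InvalidPrompt = new ActivityTemplate("Please enter a shorter city name.")
}
```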

Recognizers
Recognizers enable your bot to understand and extract meaningful pieces of information from a user's input. All recognizers emit events, such as the recognizedIntent event that fires when the recognizer picks up an intent (or extracts entities) from a user utterance. You are not required to use recognizers with an adaptive dialog, but without one no recognizedIntent event will ever fire; the unknownIntent event will fire instead.

See the Recognizers in adaptive dialogs article for more information on recognizers in adaptive dialogs.
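As a sketch, a RegexRecognizer keeps the example self-contained; a LUIS recognizer is more typical in production. The intent names and patterns in this fragment are assumptions:

```csharp
// Fragment: setting the Recognizer property of an AdaptiveDialog.
Recognizer = new RegexRecognizer
{
    Intents = new List<IntentPattern>
    {
        new IntentPattern("BookFlight", "(?i)book.*flight"),
        new IntentPattern("Weather", "(?i)weather")
    }
}
```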

Generator
The generator ties a specific language generation system to an adaptive dialog. This, along with the recognizer, enables clean separation and encapsulation of a specific dialog's language understanding and language generation assets. With the language generation feature, you can associate the generator with an .lg file, or set the generator to a TemplateEngine instance in which you explicitly manage the one or more .lg files that power the adaptive dialog.

See the Language Generation in adaptive dialogs article for more information on generators in adaptive dialogs.
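For example, a generator can be bound to a single .lg file as sketched below. The file name is an assumption, and note that recent SDK versions use Templates.ParseFile, while older versions used a TemplateEngine instance directly:

```csharp
using System.IO;
using Microsoft.Bot.Builder.Dialogs.Adaptive;
using Microsoft.Bot.Builder.Dialogs.Adaptive.Generators;
using Microsoft.Bot.Builder.LanguageGeneration;

// Associate the dialog with the templates defined in rootDialog.lg.
var lgFile = Path.Combine(".", "rootDialog.lg");
var dialog = new AdaptiveDialog("rootDialog")
{
    Generator = new TemplateEngineLanguageGenerator(Templates.ParseFile(lgFile))
};
```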

Memory scopes and managing state

Adaptive dialogs provide a uniform way to access and manage memory. All adaptive dialogs use this model by default, so all components that consume or contribute to memory have a common method to read and write information in the appropriate scope. All properties in all scopes are property bags, which gives you the ability to dynamically modify what properties are stored.

See the Memory scopes and managing state in adaptive dialogs article for more information on Memory scopes and managing state in adaptive dialogs.
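In actions and expressions, properties are addressed by a scope prefix (such as user, conversation, dialog, or turn) that determines their lifetime. This fragment of an Actions list sketches reading and writing scoped properties; the property names are assumptions:

```csharp
// Copy the text of the current activity into a user-scoped property.
new SetProperty { Property = "user.name", Value = "=turn.activity.text" },
// Read the property back inside a message template.
new SendActivity("Hello, ${user.name}!"),
// Remove the property from memory.
new DeleteProperty { Property = "user.name" }
```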

Declarative assets

Adaptive dialogs enable you to define your dialog as a class by creating a new AdaptiveDialog object and defining its triggers and actions in the class's source file. However, you can also take a declarative approach, in which you define all the attributes of your dialog in a JSON file with the .dialog file extension. No source code is required to define such dialogs, and you can mix both approaches in the same bot. At runtime, your bot generates and executes the dialog code defined in these declarative files.

See the Using declarative assets article for more information on using declarative assets in adaptive dialogs.
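As a sketch, a minimal .dialog file defines the dialog's components with $kind identifiers; the activity text here is illustrative:

```json
{
  "$kind": "Microsoft.AdaptiveDialog",
  "triggers": [
    {
      "$kind": "Microsoft.OnUnknownIntent",
      "actions": [
        {
          "$kind": "Microsoft.SendActivity",
          "activity": "Hello, world!"
        }
      ]
    }
  ]
}
```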

Tying it all together

The adaptive dialog runtime behavior

The following fictitious travel agent bot illustrates the runtime behavior of adaptive dialogs. A real-world application would have multiple capabilities, such as the ability to search for and book flights, hotel rooms, and cars, or even check the weather, and each of these would be handled in its own specialized dialog.

What happens when the user does something unexpected while in the middle of a conversation with your bot?

Consider this scenario:

    User: I'd like to book a flight
    Bot:  Sure. What is your destination city?
    User: How's the weather in Seattle?
    Bot:  It's 72 and sunny in Seattle

The user not only didn't answer the question, they changed the subject entirely, which requires completely different code (an action) that exists in a different dialog to execute. Adaptive dialogs enable you to handle this scenario, as shown in the following diagram:


This bot has the following three adaptive dialogs:

  1. The rootDialog, which has its own LUIS model and a set of triggers and actions, some of which call a child dialog designed to handle specific user requests.
  2. The bookFlightDialog, which has its own LUIS model and a set of triggers and actions that handle conversations about booking flights.
  3. The weatherDialog, which has its own LUIS model and a set of triggers and actions that handle conversations about getting weather information.

Here's the flow when the user says: I'd like to book a flight


The active dialog's (rootDialog) recognizer emits a recognizedIntent event that you can handle with an OnIntent trigger. In this case, the user said "I'd like to book a flight", which matches an intent defined in rootDialog and causes the OnIntent trigger, which contains a BeginDialog action, to execute; that action calls the bookFlightDialog dialog. The book-a-flight dialog executes its actions, one of which asks for the city you want to fly to.

The user can provide anything in response; in some cases, the response may have nothing to do with the question that was asked. In this case, the user responded with How's the weather in Seattle?


Since bookFlightDialog has no OnIntent trigger to handle the user's request, the bot propagates the handling of this user input up the conversation stack, through all the calling dialogs, all the way to the root dialog; in this case, that is just one dialog up. Since rootDialog has an OnIntent trigger to handle the weather intent, this trigger executes its BeginDialog action, which calls weatherDialog and passes along the user's question. Once weatherDialog finishes by responding to the user's question, the bot returns control back to the originating dialog, and the conversation flow continues where it left off prior to this interruption, prompting the user again for the destination city.

To summarize:

Each dialog's recognizer analyzes the user's input to determine the user's intent. Once the intent is determined, the recognizer emits a recognizedIntent event, which the dialog handles using an OnIntent trigger. If there is no OnIntent trigger in the active dialog that can handle that intent, the bot sends it to the dialog's parent dialog. If the parent dialog does not have a trigger to handle the intent, it bubbles up until it reaches the root dialog. Once the trigger that handles the intent completes, it sends control back to the dialog that started this process, which can then continue the conversational flow where it left off.
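This bubbling behavior can be sketched as follows. Recognizers are omitted for brevity, and the intent, dialog, and property names are assumptions: bookFlightDialog has no Weather trigger, so a weather question bubbles up to rootDialog, which does.

```csharp
var bookFlightDialog = new AdaptiveDialog("bookFlightDialog")
{
    Triggers = new List<OnCondition>
    {
        new OnBeginDialog
        {
            Actions = new List<Dialog>
            {
                new TextInput
                {
                    Property = "user.destinationCity",
                    Prompt = new ActivityTemplate("What is your destination city?"),
                    // Let parent dialogs handle input this dialog can't.
                    AllowInterruptions = "true"
                }
            }
        }
    }
};

var rootDialog = new AdaptiveDialog("rootDialog")
{
    Triggers = new List<OnCondition>
    {
        new OnIntent { Intent = "BookFlight", Actions = new List<Dialog> { new BeginDialog("bookFlightDialog") } },
        new OnIntent { Intent = "Weather", Actions = new List<Dialog> { new BeginDialog("weatherDialog") } }
    }
};

// Register the child so BeginDialog("bookFlightDialog") can resolve it.
rootDialog.Dialogs.Add(bookFlightDialog);
```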

Additional information

Adaptive concepts

How to develop a bot using adaptive dialogs