Windows.ApplicationModel.ConversationalAgent Namespace
Enables applications to expose functionality through any digital assistant supported by the Windows Conversational Agent platform.
Provides the configuration details for a single signal supported by an activation signal detector. For example, the keyword "Hey Cortana" in US English.
Represents hardware and software components that can generate activation signals based on input from a user's environment, such as a spoken keyword, a detected sound, or a button press.
Provides access to existing signal detector and configuration definitions of a digital assistant.
The communication channel between the digital assistant and the Windows Conversational Agent platform.
Provides event data for the SessionInterrupted event.
A signal detected by an agent that corresponds to an ActivationSignalDetectionConfiguration. This signal indicates that the matching agent should be activated to handle an interaction.
Provides event data for the SignalDetected event.
Provides event data for the SystemStateChanged event.
Provides event data for the ActivationSignalDetectionConfiguration.AvailabilityChanged event.
Provides availability details for the ActivationSignalDetector.
Specifies the activation signal training data formats supported by the ActivationSignalDetector for the digital assistant.
Specifies the supported ActivationSignalDetector types.
Specifies the power modes supported by an ActivationSignalDetector; these describe the power-related conditions under which a detector is allowed to operate.
Specifies each possible response for a ConversationalAgentSession update.
Specifies each possible AgentState for a digital assistant.
Specifies the possible state changes for the SystemStateChanged event.
Specifies the voice training data states recognized by the ActivationSignalDetector for the digital assistant.
These determinations are made by the training algorithms of an individual signal detector and may be specific to the hardware or software implementations of the detector.
Users can enable a platform-level detection signal for a conversational agent in Settings. The signal can be a keyword utterance, a Bluetooth transmission, a system keyboard accelerator, in-app speech recognition, or another sound such as a door slam or a smoke alarm. For example, the "Hey Cortana" keyword begins a voice interaction with Cortana.
Platform-level signal detectors act as a "first-pass" filter and can produce unintended activations. For this reason, we recommend additional verification of an activation signal, such as running a more stringent keyword detector in the context of the agent application.
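The two-stage pattern can be sketched as follows. This is an illustrative model, not the WinRT API: the function names and confidence thresholds are assumptions chosen for the example, standing in for a permissive platform-level detector and a stricter in-app verifier.

```python
# Toy model of two-stage activation-signal verification (not the real API).
# A permissive first-pass detector runs at the platform level; a stricter
# second-pass check runs in the agent application before activation.

def first_pass_detected(score: float) -> bool:
    """Platform-level detector: deliberately permissive threshold (assumed 0.5)."""
    return score >= 0.5

def second_pass_verified(score: float) -> bool:
    """In-app verifier: stricter threshold (assumed 0.9) to filter false activations."""
    return score >= 0.9

def should_activate(score: float) -> bool:
    # Activate the agent only when both stages agree on the signal.
    return first_pass_detected(score) and second_pass_verified(score)
```

A score of 0.6 would pass the platform filter but fail in-app verification, so no activation occurs; only a high-confidence score passes both stages.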
If a ConversationalAgentSignal is detected while the application is not running, or is not able to respond to the ConversationalAgentSession.SignalDetected event, the application is activated in the background using a task registered with a ConversationalAgentTrigger.
If a ConversationalAgentSignal is detected while the application is able to respond to the ConversationalAgentSession.SignalDetected event (by calling ConversationalAgentSession.RequestAgentStateChangeAsync), no background activation occurs because the signal has already been handled.
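The routing decision described in the two cases above can be modeled as a small dispatch function. This is a sketch of the documented behavior, not the WinRT API; the function and its string results are illustrative assumptions.

```python
# Toy model of how a detected signal is delivered (not the real API):
# a running agent that can respond receives the SignalDetected event;
# otherwise the platform launches the app in the background via the
# task registered with a ConversationalAgentTrigger.

def route_signal(agent_running: bool, can_respond: bool) -> str:
    """Return how the platform delivers a detected activation signal."""
    if agent_running and can_respond:
        # Handled in-process; no background activation occurs.
        return "SignalDetected event"
    # App not running, or unable to respond: background activation.
    return "background activation"
```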
If a ConversationalAgentSignal is detected for a conversational agent while an interruptible session (see RequestInterruptibleAsync) is already active, that session receives a ConversationalAgentSession.SessionInterrupted event to indicate that a new signal has been raised.
Some digital assistant sessions cannot be interrupted by another signal. For example, Cortana requires the user to issue a cancel or stop command to end the current session (the user cannot be in a Cortana session and issue commands to Alexa).
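The interruption rules above can be summarized in a short state check. Again, this is an illustrative model rather than the WinRT API; the function name and result strings are assumptions made for the sketch.

```python
# Toy model of the session-interruption rules (not the real API):
# a new signal interrupts an interruptible session, but is ignored
# while a non-interruptible session (one that must be ended by an
# explicit stop or cancel command) is active.

def on_new_signal(session_active: bool, interruptible: bool) -> str:
    """Return how the platform handles a new signal given session state."""
    if not session_active:
        # No active session: the new signal starts one.
        return "start session"
    if interruptible:
        # The active session is notified that a new signal arrived.
        return "SessionInterrupted event"
    # Non-interruptible session: the current session must end first.
    return "ignored"
```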