GestureRecognizer Class

Definition

Provides gesture and manipulation recognition, event listeners, and settings.

public ref class GestureRecognizer sealed : IGestureRecognizer
struct winrt::Windows::UI::Input::GestureRecognizer : IGestureRecognizer
public sealed class GestureRecognizer : IGestureRecognizer
Public NotInheritable Class GestureRecognizer Implements IGestureRecognizer
var gestureRecognizer = new Windows.UI.Input.GestureRecognizer();
Attributes
Windows 10 requirements
Device family
Windows 10 (introduced v10.0.10240.0)
API contract
Windows.Foundation.UniversalApiContract (introduced v1)

Examples

Here we set up a GestureRecognizer object with a collection of input event handlers for processing pointer input and gestures. For more information on how to listen to and handle Windows Runtime events, see the Windows Runtime event-handling documentation.

Note

Use the target or currentTarget property of the event object instead of the GestureRecognizer object in the event handler.


// Gesture recognizer shared by the pointer and gesture handlers below.
var gr;

// inputManager handles gesture recognition for this sample.
function inputManager(target) {
    // Initialize the gesture recognizer.
    gr = new Windows.UI.Input.GestureRecognizer();

    // Turn off visual feedback for gestures.
    // Visual feedback for pointer input is still displayed. 
    gr.showGestureFeedback = false;

    // Configure gesture recognizer to process the following:
    // double tap               - start questions and timer.
    // tap                      - move to next question.
    // right tap                - show answer.
    // hold and hold with mouse - start clues.
    gr.gestureSettings =
        Windows.UI.Input.GestureSettings.tap |
        Windows.UI.Input.GestureSettings.doubleTap |
        Windows.UI.Input.GestureSettings.rightTap |
        Windows.UI.Input.GestureSettings.hold |
        Windows.UI.Input.GestureSettings.holdWithMouse;

    // Register event listeners for these gestures.
    gr.addEventListener("tapped", tappedHandler);
    gr.addEventListener("holding", holdingHandler);
    gr.addEventListener("righttapped", rightTappedHandler);

    // Register event listeners for DOM pointer events.
    // The event pointer(s) are passed to the gesture recognizer
    // for further processing.
    target.addEventListener("MSPointerDown", pointerDown, false);
    target.addEventListener("MSPointerMove", pointerMove, false);
    target.addEventListener("MSPointerUp", pointerUp, false);
    target.addEventListener("MSPointerOver", pointerOver, true);
    target.addEventListener("MSPointerOut", pointerOut, true);
}

// Handle the pointer move event.
// The holding gesture is routed through this event.
// If pointer move is not handled, holding will not fire.
function pointerMove(evt) {
   _eventLog.innerText += "pointer move || ";

   // Get intermediate PointerPoints
   var pps = evt.intermediatePoints;

   // Pass the array of PointerPoints to the gesture recognizer.
   gr.processMoveEvents(pps);
};

// Handle the holding event.
// A holding event is fired approximately one second after
// a pointer down if no subsequent movement is detected.
function holdingHandler(evt) {
    if (evt.holdingState == Windows.UI.Input.HoldingState.started) {
        _eventLog.innerText += "holding || ";
    } else if (evt.holdingState == Windows.UI.Input.HoldingState.completed) {
        _eventLog.innerText += "holding completed || ";
    } else {
        _eventLog.innerText += "holding canceled || ";
    }
}

namespace GesturesApp.Manipulations
{
    /// <summary>
    /// Thin wrapper around the <see cref="Windows.UI.Input.GestureRecognizer"/>, routes pointer events received by
    /// the manipulation target to the gesture recognizer.
    /// </summary>
    /// <remarks>
    /// Transformations during manipulations cannot be expressed in the coordinate space of the manipulation target.
    /// Instead, they need to be expressed with respect to a reference coordinate space, usually an ancestor (in the UI tree)
    /// of the element being manipulated.
    /// </remarks>
    public abstract class InputProcessor
    {
        protected Windows.UI.Input.GestureRecognizer _gestureRecognizer;

        // Element being manipulated
        protected Windows.UI.Xaml.FrameworkElement _target;
        public Windows.UI.Xaml.FrameworkElement Target
        {
            get { return _target; }
        }

        // Reference element that contains the coordinate space used for expressing transformations 
        // during manipulation, usually the parent element of Target in the UI tree
        protected Windows.UI.Xaml.Controls.Canvas _reference;
        public Windows.UI.Xaml.FrameworkElement Reference
        {
            get { return _reference; }
        }

        /// <summary>
        /// Constructor.
        /// </summary>
        /// <param name="element">
        /// Manipulation target.
        /// </param>
        /// <param name="reference">
        /// Element that contains the coordinate space used for expressing transformations
        /// during manipulations, usually the parent element of Target in the UI tree.
        /// </param>
        /// <remarks>
        /// Transformations during manipulations cannot be expressed in the coordinate space of the manipulation target.
        /// Thus <paramref name="element"/> and <paramref name="reference"/> must be different. Usually <paramref name="reference"/>
        /// will be an ancestor of <paramref name="element"/> in the UI tree.
        /// </remarks>
        internal InputProcessor(Windows.UI.Xaml.FrameworkElement element, Windows.UI.Xaml.Controls.Canvas reference)
        {
            this._target = element;
            this._reference = reference;

            // Setup pointer event handlers for the element.
            // They are used to feed the gesture recognizer.    
            this._target.PointerCanceled += OnPointerCanceled;
            this._target.PointerMoved += OnPointerMoved;
            this._target.PointerPressed += OnPointerPressed;
            this._target.PointerReleased += OnPointerReleased;
            this._target.PointerWheelChanged += OnPointerWheelChanged;

            // Create the gesture recognizer
            this._gestureRecognizer = new Windows.UI.Input.GestureRecognizer();
            this._gestureRecognizer.GestureSettings = Windows.UI.Input.GestureSettings.None;
        }

        private void OnPointerMoved(object sender, Windows.UI.Xaml.Input.PointerRoutedEventArgs args)
        {
            // Route the events to the gesture recognizer.
            // All intermediate points are passed to the gesture recognizer in
            // the coordinate system of the reference element.
            this._gestureRecognizer.ProcessMoveEvents(args.GetIntermediatePoints(this._reference));

            // Mark event handled, to prevent execution of default event handlers
            args.Handled = true;
        }
    }
}

Remove the event handler and set the gesture recognizer to null if it is no longer needed:


target.removeEventListener("MSPointerDown", pointerDown);
gr = null;
this._gestureRecognizer = null;

Remarks

You can create a gesture object for each appropriate element when your app starts. However, this approach might not scale well depending on the number of gesture objects you need to create (for example, a jigsaw puzzle with hundreds of pieces).

In this case, you can create gesture objects dynamically on a pointerdown event and destroy them on an MSGestureEnd event. This approach scales well, but does incur some overhead due to creating and releasing these objects.

Alternatively, you can statically allocate and dynamically manage a pool of reusable gesture objects.
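A minimal sketch of such a pool in plain JavaScript. The factory function and the reset logic in release are stand-ins: in an app the factory would return a new Windows.UI.Input.GestureRecognizer, and release would clear whatever per-interaction state your app requires.

```javascript
// Sketch of a reusable object pool: preallocate recognizers, hand one
// out on pointer down, and return it on gesture end instead of
// discarding it.
function RecognizerPool(size, create) {
    this._create = create;
    this._free = [];
    for (var i = 0; i < size; i++) {
        this._free.push(create());
    }
}

RecognizerPool.prototype.acquire = function () {
    // Fall back to a fresh allocation if the pool is exhausted.
    return this._free.length > 0 ? this._free.pop() : this._create();
};

RecognizerPool.prototype.release = function (gr) {
    // Clear per-interaction state before the object is reused.
    gr.gestureSettings = 0; // i.e. GestureSettings.none
    this._free.push(gr);
};
```

Usage might look like `var pool = new RecognizerPool(8, function () { return new Windows.UI.Input.GestureRecognizer(); });`, acquiring on pointerdown and releasing on MSGestureEnd.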

Note

This class is not agile, which means that you need to consider its threading model and marshaling behavior. For more info, see Threading and Marshaling (C++/CX) and Using Windows Runtime objects in a multithreaded environment (.NET).

For more detail on how to use cross-slide functionality, see Guidelines for cross-slide. The threshold distances used by the cross-slide interaction are shown in the following diagram.

Screen shot showing the select and drag and drop processes.

The PivotRadius and PivotCenter properties are used only when single pointer input is detected. They have no effect on multiple pointer input. The value for these properties should be updated regularly during the interaction.

Rotation is supported by a GestureRecognizer only when ManipulationRotate is set through the GestureSettings property.

Rotation is not supported for single pointer input if the value of PivotRadius is set to 0.
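Because GestureSettings is a flags enumeration, individual gestures are enabled by OR-ing values together and tested with a bitwise AND. A sketch of checking whether rotation is enabled; the numeric values below are illustrative stand-ins, not the real enumeration values, so in an app use the Windows.UI.Input.GestureSettings members directly:

```javascript
// Illustrative flag values; in an app, use Windows.UI.Input.GestureSettings.
var GestureSettings = {
    none: 0,
    tap: 1 << 0,
    manipulationTranslateX: 1 << 6,
    manipulationRotate: 1 << 10
};

// A gesture is enabled when its flag bit is present in the settings value.
function rotationEnabled(settings) {
    return (settings & GestureSettings.manipulationRotate) !== 0;
}
```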

Constructors

GestureRecognizer()

Initializes a new instance of a GestureRecognizer object.

Properties

AutoProcessInertia

Gets or sets a value that indicates whether manipulations during inertia are generated automatically.

CrossSlideExact

Gets or sets a value that indicates whether the exact distance from initial contact to end of the cross-slide interaction is reported. By default, a small distance threshold is subtracted from the first position reported by the system for cross-slide interactions. If this flag is set, the distance threshold is not subtracted from the initial position.

Note

This distance threshold is intended to account for any slight movement of the contact after initial detection. It helps the system differentiate between cross-sliding and panning, and helps ensure that a tap gesture is not interpreted as either.

CrossSlideHorizontally

Gets or sets a value that indicates whether the cross-slide axis is horizontal.

CrossSlideThresholds

Gets or sets values that indicate the distance thresholds for a CrossSliding interaction.

GestureSettings

Gets or sets a value that indicates the gesture and manipulation settings supported by an application.

InertiaExpansion InertiaExpansion InertiaExpansion InertiaExpansion InertiaExpansion

Gets or sets a value that indicates the relative change in size of an object from the start of inertia to the end of inertia (when resizing, or scaling, is complete).

InertiaExpansionDeceleration

Gets or sets a value that indicates the rate of deceleration from the start of inertia to the end of inertia (when the resizing, or expansion, manipulation is complete).

InertiaRotationAngle

Gets or sets a value that indicates the final angle of rotation of an object at the end of inertia (when the rotation manipulation is complete).

InertiaRotationDeceleration

Gets or sets a value that indicates the rate of deceleration from the start of inertia to the end of inertia (when the rotation manipulation is complete).

InertiaTranslationDeceleration

Gets or sets a value that indicates the rate of deceleration from the start of inertia to the end of inertia (when the translation manipulation is complete).

InertiaTranslationDisplacement

Gets or sets a value that indicates the relative change in the screen location of an object from the start of inertia to the end of inertia (when the translation manipulation is complete).

IsActive

Gets a value that indicates whether an interaction is being processed.

IsInertial

Gets a value that indicates whether a manipulation is still being processed during inertia (no input points are active).

ManipulationExact

Gets or sets a value that indicates whether the exact distance from initial contact to end of the interaction is reported. By default, a small distance threshold is subtracted from the first delta reported by the system. This distance threshold is intended to account for slight movements of the contact when processing a tap gesture. If this flag is set, the distance threshold is not subtracted from the first delta.

MouseWheelParameters

Gets a set of properties that are associated with the wheel button of a mouse device.

PivotCenter

Gets or sets the center point for a rotation interaction when single pointer input is detected.

PivotRadius

Gets or sets the radius, from the PivotCenter to the pointer input, for a rotation interaction when single pointer input is detected.

ShowGestureFeedback

Gets or sets a value that indicates whether visual feedback is displayed during an interaction.

Methods

CanBeDoubleTap(PointerPoint)

Identifies whether a tap can still be interpreted as the second tap of a double tap gesture.

CompleteGesture()

Causes the gesture recognizer to finalize an interaction.

ProcessDownEvent(PointerPoint)

Processes pointer input and raises the GestureRecognizer events appropriate to a pointer down action for the gestures and manipulations specified by the GestureSettings property.

ProcessInertia()

Performs inertia calculations and raises the various inertia events.

ProcessMouseWheelEvent(PointerPoint, Boolean, Boolean)

Processes pointer input and raises the GestureRecognizer events appropriate to a mouse wheel action for the gestures and manipulations specified by the GestureSettings property.

ProcessMoveEvents(IVector&lt;PointerPoint&gt;)

Processes pointer input and raises the GestureRecognizer events appropriate to a pointer move action for the gestures and manipulations specified by the GestureSettings property.

ProcessUpEvent(PointerPoint)

Processes pointer input and raises the GestureRecognizer events appropriate to a pointer up action for the gestures and manipulations specified by the GestureSettings property.

Events

CrossSliding

Occurs when a user performs a slide or swipe gesture (through a single touch contact) within a content area that supports panning along a single axis only. The gesture must occur in a direction that is perpendicular to this panning axis.

Note

A swipe is a short sliding gesture that results in a selection action, while the longer slide gesture crosses a distance threshold and results in a rearrange action. The swipe and slide gestures are demonstrated in the following diagram.

Diagram showing the select and drag actions.
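The select-versus-rearrange decision can be pictured as a simple distance classification. The sketch below is hypothetical: the threshold names mirror those of Windows.UI.Input.CrossSlideThresholds, but the pixel values are illustrative, not the system defaults.

```javascript
// Hypothetical classifier for a cross-slide, driven by the distance the
// contact travels perpendicular to the panning axis.
var thresholds = {
    selectionStart: 25,   // below this: no cross-slide action
    rearrangeStart: 120   // beyond this: the slide becomes a rearrange
};

function classifyCrossSlide(distance) {
    if (distance < thresholds.selectionStart) {
        return "none";
    }
    if (distance < thresholds.rearrangeStart) {
        return "select";   // short swipe selects the item
    }
    return "rearrange";    // longer slide starts a drag
}
```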

Dragging

Occurs when a user performs a slide or swipe gesture with a mouse or pen/stylus (single contact).

Holding

Occurs when a user performs a press and hold gesture (with a single touch, mouse, or pen/stylus contact).

ManipulationCompleted

Occurs when the input points are lifted and all subsequent motion (translation, expansion, or rotation) through inertia has ended.

ManipulationInertiaStarting

Occurs when all contact points are lifted during a manipulation and the velocity of the manipulation is significant enough to initiate inertia behavior (translation, expansion, or rotation continue after the input pointers are lifted).

ManipulationStarted

Occurs when one or more input points have been initiated and subsequent motion (translation, expansion, or rotation) has begun.

ManipulationUpdated

Occurs after one or more input points have been initiated and subsequent motion (translation, expansion, or rotation) is under way.

RightTapped

Occurs when the pointer input is interpreted as a right-tap gesture, regardless of input device.

  • Right mouse button click
  • Pen barrel button click
  • Touch or pen press and hold

Tapped

Occurs when the pointer input is interpreted as a tap gesture.

See Also