SpatialGestureRecognizer Class
Interprets user interactions from hands, motion controllers, and system voice commands to surface spatial gesture events, which users target using their gaze or a motion controller's pointing ray.
public ref class SpatialGestureRecognizer sealed
/// [Windows.Foundation.Metadata.Activatable(Windows.UI.Input.Spatial.ISpatialGestureRecognizerFactory, 131072, "Windows.Foundation.UniversalApiContract")]
/// [Windows.Foundation.Metadata.ContractVersion(Windows.Foundation.UniversalApiContract, 131072)]
/// [Windows.Foundation.Metadata.MarshalingBehavior(Windows.Foundation.Metadata.MarshalingType.Agile)]
/// [Windows.Foundation.Metadata.Threading(Windows.Foundation.Metadata.ThreadingModel.Both)]
class SpatialGestureRecognizer final
[Windows.Foundation.Metadata.Activatable(typeof(Windows.UI.Input.Spatial.ISpatialGestureRecognizerFactory), 131072, "Windows.Foundation.UniversalApiContract")]
[Windows.Foundation.Metadata.ContractVersion(typeof(Windows.Foundation.UniversalApiContract), 131072)]
[Windows.Foundation.Metadata.MarshalingBehavior(Windows.Foundation.Metadata.MarshalingType.Agile)]
[Windows.Foundation.Metadata.Threading(Windows.Foundation.Metadata.ThreadingModel.Both)]
public sealed class SpatialGestureRecognizer
Public NotInheritable Class SpatialGestureRecognizer
Windows 10 (introduced in 10.0.10586.0 - for Xbox, see UWP features that aren't yet supported on Xbox)
Windows.Foundation.UniversalApiContract (introduced in v2.0)
Spatial gestures are a key form of input for Mixed Reality headsets such as HoloLens. By routing interactions from the SpatialInteractionManager to a hologram's SpatialGestureRecognizer, apps can detect Tap, Hold, Manipulation, and Navigation events uniformly across hands, voice, and motion controllers.
Note that spatial gestures are not detected for input from gamepads, keyboards, or mice.
SpatialGestureRecognizer performs only the minimal disambiguation between the set of gestures that you request. For example, if you request just Tap, the user may hold their finger down as long as they like and a Tap will still occur. If you request both Tap and Hold, after about a second of holding down their finger, the gesture will promote to a Hold and a Tap will no longer occur.
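The Tap/Hold promotion described above can be sketched as follows; this is a minimal C# sketch, and the handler bodies are illustrative:

```csharp
using Windows.UI.Input.Spatial;

// Requesting both Tap and Hold: after roughly a second of holding,
// the interaction promotes to Hold and Tapped will no longer fire.
var recognizer = new SpatialGestureRecognizer(
    SpatialGestureSettings.Tap | SpatialGestureSettings.Hold);

recognizer.Tapped += (sender, args) =>
{
    // Fires only for presses released before the Hold threshold.
};
recognizer.HoldCompleted += (sender, args) =>
{
    // Fires when the user releases after the interaction promoted to Hold.
};
```

If the recognizer had been constructed with SpatialGestureSettings.Tap alone, Tapped would fire regardless of how long the press was held.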
To use SpatialGestureRecognizer, handle the SpatialInteractionManager's InteractionDetected event and grab the SpatialPointerPose exposed there. Use the user's gaze ray from this pose to intersect with the holograms and surface meshes in the user's surroundings, in order to determine what the user is intending to interact with. Then, route the SpatialInteraction in the event arguments to the target hologram's SpatialGestureRecognizer, using its CaptureInteraction method. This starts interpreting that interaction according to the SpatialGestureSettings set on that recognizer at creation time or by TrySetGestureSettings.
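The routing flow above might look like the following sketch. Here, coordinateSystem, FindTargetedHologram, and the hologram's GestureRecognizer property are app-defined assumptions, not part of the API:

```csharp
using Windows.UI.Input.Spatial;

SpatialInteractionManager interactionManager =
    SpatialInteractionManager.GetForCurrentView();

interactionManager.InteractionDetected += (sender, args) =>
{
    // Get the pose at the moment the interaction was detected,
    // expressed in an app-supplied SpatialCoordinateSystem.
    SpatialPointerPose pose = args.TryGetPointerPose(coordinateSystem);
    if (pose == null)
        return;

    // Ray-cast the gaze against holograms/surface meshes to pick a target
    // (FindTargetedHologram is a hypothetical app-defined hit test).
    var target = FindTargetedHologram(pose.Head.Position,
                                      pose.Head.ForwardDirection);

    // Route the interaction to that hologram's recognizer; from here on,
    // the recognizer interprets the interaction per its gesture settings.
    target?.GestureRecognizer.CaptureInteraction(args.Interaction);
};
```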
When targeting a spatial interaction, such as a hand gesture, motion controller press or voice interaction, apps should choose a pointing ray available from the interaction's SpatialPointerPose, based on the nature of the interaction's SpatialInteractionSource:
- If the interaction source does not support pointing (IsPointingSupported is false), the app should target based on the user's gaze, available through the Head property.
- If the interaction source does support pointing (IsPointingSupported is true), the app may instead target based on the source's pointer pose, available through the TryGetInteractionSourcePose method.
The app should then intersect the chosen pointing ray with its own holograms or with the spatial mapping mesh to render cursors and determine what the user is intending to interact with.
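The ray-selection logic above can be sketched as a small helper; GetPointingRay is a hypothetical name, and the tuple return shape is an assumption:

```csharp
using System.Numerics;
using Windows.UI.Input.Spatial;

// Picks the pointing ray for an interaction source, preferring the
// source's own pointer pose when it supports pointing.
(Vector3 origin, Vector3 direction) GetPointingRay(
    SpatialInteractionSource source, SpatialPointerPose pose)
{
    if (source.IsPointingSupported)
    {
        // Motion controllers (and other pointing-capable sources)
        // expose a dedicated pointer pose.
        SpatialPointerInteractionSourcePose sourcePose =
            pose.TryGetInteractionSourcePose(source);
        if (sourcePose != null)
            return (sourcePose.Position, sourcePose.ForwardDirection);
    }
    // Otherwise, fall back to the user's gaze.
    return (pose.Head.Position, pose.Head.ForwardDirection);
}
```

The returned origin and direction can then be intersected with the app's holograms or the spatial mapping mesh for cursor rendering and targeting.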
For applications using the gaze-and-commit input model, particularly on HoloLens (first gen), SpatialGestureRecognizer can be used to enable composite gestures built on top of the 'select' event. This lets apps detect Tap, Hold, Manipulation, and Navigation events uniformly across hands, voice, and spatial input devices, without having to handle presses and releases manually.
SpatialGestureRecognizer(SpatialGestureSettings): Initializes a new SpatialGestureRecognizer with the specified gesture settings.
GestureSettings: Gets the current SpatialGestureSettings for this recognizer.
CancelPendingGestures: Cancels all in-progress gestures and abandons any captured interactions.
CaptureInteraction: Tracks all input events that occur as part of the specified interaction.
TrySetGestureSettings: Attempts to change the gesture settings for this recognizer.
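Because gesture settings cannot change while the recognizer is actively tracking an interaction, TrySetGestureSettings returns a bool the app should check. A hedged sketch, assuming recognizer is an existing SpatialGestureRecognizer:

```csharp
using Windows.UI.Input.Spatial;

// Attempt to switch the recognizer over to manipulation gestures.
bool changed = recognizer.TrySetGestureSettings(
    SpatialGestureSettings.ManipulationTranslate);

if (!changed)
{
    // A gesture is still in progress; cancel pending gestures, which
    // abandons captured interactions, then retry.
    recognizer.CancelPendingGestures();
    changed = recognizer.TrySetGestureSettings(
        SpatialGestureSettings.ManipulationTranslate);
}
```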
HoldCanceled: Occurs when a Hold gesture is canceled.
HoldCompleted: Occurs when a Hold gesture completes.
HoldStarted: Occurs when an interaction becomes a Hold gesture.
ManipulationCanceled: Occurs when a Manipulation gesture is canceled.
ManipulationCompleted: Occurs when a Manipulation gesture is completed.
ManipulationStarted: Occurs when an interaction becomes a Manipulation gesture.
ManipulationUpdated: Occurs when a Manipulation gesture is updated due to hand movement.
NavigationCanceled: Occurs when a Navigation gesture is canceled.
NavigationCompleted: Occurs when a Navigation gesture is completed.
NavigationStarted: Occurs when an interaction becomes a Navigation gesture.
NavigationUpdated: Occurs when a Navigation gesture is updated due to hand or motion controller movement.
RecognitionEnded: Occurs when gesture recognition ends, due to completion or cancellation of a gesture (this is the last event to fire).
RecognitionStarted: Occurs when gesture recognition begins (this is the first event to fire).
Tapped: Occurs when a Tap or DoubleTap gesture is recognized.
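A typical subscription pattern for the Manipulation event family might look like the following sketch; coordinateSystem and MoveHologram are app-defined assumptions:

```csharp
using Windows.UI.Input.Spatial;

var recognizer = new SpatialGestureRecognizer(
    SpatialGestureSettings.ManipulationTranslate);

recognizer.ManipulationStarted += (s, args) =>
{
    // Record the hologram's starting position here.
};
recognizer.ManipulationUpdated += (s, args) =>
{
    // Cumulative translation since ManipulationStarted, expressed in the
    // app-supplied SpatialCoordinateSystem.
    SpatialManipulationDelta delta =
        args.TryGetCumulativeDelta(coordinateSystem);
    if (delta != null)
        MoveHologram(delta.Translation); // hypothetical app helper
};
recognizer.ManipulationCompleted += (s, args) =>
{
    // Commit the move.
};
recognizer.ManipulationCanceled += (s, args) =>
{
    // Restore the hologram's original position.
};
```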