Gestures in Unity

There are two key ways to take action on your gaze in Unity: hand gestures and motion controllers, on HoloLens and immersive HMDs. You access the data for both sources of spatial input through the same APIs in Unity.

Unity provides two primary ways to access spatial input data for Windows Mixed Reality. The common Input.GetButton/Input.GetAxis APIs work across multiple Unity XR SDKs, while the InteractionManager/GestureRecognizer APIs specific to Windows Mixed Reality expose the full set of spatial input data.

High-level composite gesture APIs (GestureRecognizer)

Namespace: UnityEngine.XR.WSA.Input
Types: GestureRecognizer, GestureSettings, InteractionSourceKind

Your app can also recognize higher-level composite gestures for spatial input sources: Tap, Hold, Manipulation, and Navigation gestures. You can recognize these composite gestures across both hands and motion controllers using the GestureRecognizer.

Each Gesture event on the GestureRecognizer provides the SourceKind for the input as well as the targeting head ray at the time of the event. Some events provide additional context-specific information.

There are only a few steps required to capture gestures using a Gesture Recognizer:

  1. Create a new Gesture Recognizer
  2. Specify which gestures to watch for
  3. Subscribe to events for those gestures
  4. Start capturing gestures

Create a new Gesture Recognizer

To use the GestureRecognizer, you must first create one:

GestureRecognizer recognizer = new GestureRecognizer();

Specify which gestures to watch for

Specify which gestures you're interested in via SetRecognizableGestures():

recognizer.SetRecognizableGestures(GestureSettings.Tap | GestureSettings.Hold);

Subscribe to events for those gestures

Subscribe to events for the gestures you're interested in.

void Start()
{
    recognizer.Tapped += GestureRecognizer_Tapped;
    recognizer.HoldStarted += GestureRecognizer_HoldStarted;
    recognizer.HoldCompleted += GestureRecognizer_HoldCompleted;
    recognizer.HoldCanceled += GestureRecognizer_HoldCanceled;
}
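
The handler names used above are placeholders for your own methods. As a rough sketch (assuming the UnityEngine.XR.WSA.Input event argument types; the log output is purely illustrative), those handlers might look like this:

void GestureRecognizer_Tapped(TappedEventArgs args)
{
    // args.source.kind identifies whether the tap came from a hand or a motion controller.
    Debug.Log("Tap from " + args.source.kind);
}

void GestureRecognizer_HoldStarted(HoldStartedEventArgs args)
{
    Debug.Log("Hold started");
}

void GestureRecognizer_HoldCompleted(HoldCompletedEventArgs args)
{
    Debug.Log("Hold completed");
}

void GestureRecognizer_HoldCanceled(HoldCanceledEventArgs args)
{
    Debug.Log("Hold canceled");
}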

Note

Navigation and Manipulation gestures are mutually exclusive on an instance of a GestureRecognizer.
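
If your app needs both, one approach is to keep the two gesture families on separate GestureRecognizer instances and only capture with one of them at a time. A minimal sketch, assuming your own logic decides which recognizer is active:

GestureRecognizer navigationRecognizer = new GestureRecognizer();
navigationRecognizer.SetRecognizableGestures(
    GestureSettings.NavigationX | GestureSettings.NavigationY | GestureSettings.NavigationZ);

GestureRecognizer manipulationRecognizer = new GestureRecognizer();
manipulationRecognizer.SetRecognizableGestures(GestureSettings.ManipulationTranslate);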

Start capturing gestures

By default, a GestureRecognizer doesn't monitor input until StartCapturingGestures() is called. A gesture event may still be generated after StopCapturingGestures() is called if the input was performed before the frame where StopCapturingGestures() was processed. The GestureRecognizer remembers whether it was on or off during the previous frame in which the gesture actually occurred, so it's reliable to start and stop gesture monitoring based on this frame's gaze targeting, as the sketch after the stop example below illustrates.

recognizer.StartCapturingGestures();

Stop capturing gestures

To stop gesture recognition:

recognizer.StopCapturingGestures();
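
For example, to start and stop monitoring based on what the user is currently gazing at, a per-frame toggle along these lines can work. Here gazeIsOnInteractableObject is a hypothetical flag maintained by your own gaze/raycast logic:

void Update()
{
    // gazeIsOnInteractableObject is a hypothetical flag set elsewhere by your gaze logic.
    if (gazeIsOnInteractableObject && !recognizer.IsCapturingGestures())
    {
        recognizer.StartCapturingGestures();
    }
    else if (!gazeIsOnInteractableObject && recognizer.IsCapturingGestures())
    {
        recognizer.StopCapturingGestures();
    }
}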

Removing a gesture recognizer

Remember to unsubscribe from subscribed events before destroying a GestureRecognizer object.

void OnDestroy()
{
    recognizer.Tapped -= GestureRecognizer_Tapped;
    recognizer.HoldStarted -= GestureRecognizer_HoldStarted;
    recognizer.HoldCompleted -= GestureRecognizer_HoldCompleted;
    recognizer.HoldCanceled -= GestureRecognizer_HoldCanceled;
}

Rendering the motion controller model in Unity

Motion controller model and teleportation

To render motion controllers in your app that match the physical controllers your users are holding, and that articulate as various buttons are pressed, you can use the MotionController prefab in the Mixed Reality Toolkit. This prefab dynamically loads the correct glTF model at runtime from the system's installed motion controller driver. It's important to load these models dynamically rather than importing them manually in the editor, so that your app will show physically accurate 3D models for any current and future controllers your users may have.

  1. Follow the Getting Started instructions to download the Mixed Reality Toolkit and add it to your Unity project.
  2. If you replaced your camera with the MixedRealityCameraParent prefab as part of the Getting Started steps, you're good to go! That prefab includes motion controller rendering. Otherwise, add Assets/HoloToolkit/Input/Prefabs/MotionControllers.prefab into your scene from the Project pane. You'll want to add that prefab as a child of whatever parent object you use to move the camera around when the user teleports within your scene, so that the controllers come along with the user. If your app doesn't involve teleporting, just add the prefab at the root of your scene.

Throwing objects

Throwing objects in virtual reality is a harder problem than it may at first seem. As with most physically based interactions, when a throw in your game behaves in an unexpected way, it's immediately obvious and breaks immersion. We've spent some time thinking deeply about how to represent a physically correct throwing behavior, and have come up with a few guidelines, enabled through updates to our platform, that we'd like to share with you.

You can find an example of how we recommend implementing throwing here. This sample follows these four guidelines:

  • Use the controller's velocity instead of position. In the November update to Windows, we introduced a change in behavior when in the "Approximate" positional tracking state. When in this state, velocity information about the controller will continue to be reported for as long as we believe it's highly accurate, which is often longer than position remains highly accurate.

  • Incorporate the angular velocity of the controller. This logic is all contained in the GetThrownObjectVelAngVel static method in the throwing.cs file, within the package linked above (a sketch combining these steps appears after this list):

    1. As angular velocity is conserved, the thrown object must maintain the same angular velocity as it had at the moment of the throw: objectAngularVelocity = throwingControllerAngularVelocity;

    2. As the center of mass of the thrown object is likely not at the origin of the grip pose, it likely has a different velocity than that of the controller in the frame of reference of the user. The portion of the object's velocity contributed in this way is the instantaneous tangential velocity of the center of mass of the thrown object around the controller origin. This tangential velocity is the cross product of the angular velocity of the controller with the vector representing the distance between the controller origin and the center of mass of the thrown object.

      Vector3 radialVec = thrownObjectCenterOfMass - throwingControllerPos;
      Vector3 tangentialVelocity = Vector3.Cross(throwingControllerAngularVelocity, radialVec);
      
    3. The total velocity of the thrown object is the sum of the velocity of the controller and this tangential velocity: objectVelocity = throwingControllerVelocity + tangentialVelocity;

  • Pay close attention to the time at which we apply the velocity. When a button is pressed, it can take up to 20 ms for that event to bubble up through Bluetooth to the operating system. This means that if you poll for a controller state change from pressed to not pressed, or the other way around, the controller pose information you get with it will actually be ahead of that change in state. Further, the controller pose presented by our polling API is forward predicted to reflect a likely pose at the time the frame will be displayed, which could be more than 20 ms in the future. This is good for rendering held objects, but compounds our time problem for targeting the object as we calculate the trajectory for the moment the user released the throw. Fortunately, with the November update, when a Unity event like InteractionSourcePressed or InteractionSourceReleased is sent, the state includes the historical pose data from back when the button was pressed or released. To get the most accurate controller rendering and controller targeting during throws, use polling and eventing as appropriate: render the held object each frame from the forward-predicted polling pose, but calculate the throw trajectory from the historical pose included in the release event, as in the sketch after this list.

  • Use the grip pose. Angular velocity and velocity are reported relative to the grip pose, not the pointer pose.
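
Putting these guidelines together, the following is a rough sketch of a release handler rather than the exact Toolkit code. It reads the historical grip-pose data from the InteractionSourceReleased event state and combines linear and angular velocity as described above; GetThrownObjectVelAngVel follows the naming used in the guidelines, while thrownObjectCenterOfMass and the commented-out ReleaseObject call are hypothetical stand-ins for your own object logic.

using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class ThrowingExample : MonoBehaviour
{
    // Hypothetical: the center of mass of the object currently being held.
    public Vector3 thrownObjectCenterOfMass;

    void OnEnable()
    {
        InteractionManager.InteractionSourceReleased += OnSourceReleased;
    }

    void OnDisable()
    {
        InteractionManager.InteractionSourceReleased -= OnSourceReleased;
    }

    void OnSourceReleased(InteractionSourceReleasedEventArgs args)
    {
        // The event state carries the historical grip pose from the moment of release,
        // which avoids the forward-prediction issue described above.
        Vector3 controllerPos, controllerVelocity, controllerAngularVelocity;
        if (args.state.sourcePose.TryGetPosition(out controllerPos) &&
            args.state.sourcePose.TryGetVelocity(out controllerVelocity) &&
            args.state.sourcePose.TryGetAngularVelocity(out controllerAngularVelocity))
        {
            Vector3 objectVelocity, objectAngularVelocity;
            GetThrownObjectVelAngVel(controllerPos, controllerVelocity, controllerAngularVelocity,
                thrownObjectCenterOfMass, out objectVelocity, out objectAngularVelocity);

            // Hypothetical helper that hands the velocities to your physics code.
            // ReleaseObject(objectVelocity, objectAngularVelocity);
        }
    }

    static void GetThrownObjectVelAngVel(Vector3 throwingControllerPos, Vector3 throwingControllerVelocity,
        Vector3 throwingControllerAngularVelocity, Vector3 thrownObjectCenterOfMass,
        out Vector3 objectVelocity, out Vector3 objectAngularVelocity)
    {
        // 1. Angular velocity is conserved across the release.
        objectAngularVelocity = throwingControllerAngularVelocity;

        // 2. Tangential velocity of the object's center of mass around the controller origin.
        Vector3 radialVec = thrownObjectCenterOfMass - throwingControllerPos;
        Vector3 tangentialVelocity = Vector3.Cross(throwingControllerAngularVelocity, radialVec);

        // 3. Total velocity = controller velocity + tangential component.
        objectVelocity = throwingControllerVelocity + tangentialVelocity;
    }
}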

Throwing will continue to improve with future Windows updates, and you can expect to find more information on it here.

Gesture and Motion Controllers in MRTK v2

You can access gestures and motion controllers from the input manager.

Follow along with tutorials

Step-by-step tutorials, with more detailed customization examples, are available in the Mixed Reality Academy:

MR Input 213 - Motion controller

Next Development Checkpoint

If you're following the Unity development journey we've laid out, you're in the midst of exploring the MRTK core building blocks. From here, you can continue to the next building block:

Or jump to Mixed Reality platform capabilities and APIs:

You can always go back to the Unity development checkpoints at any time.

See also