Motion controllers in Unity

There are two key ways to take action on your gaze in Unity: hand gestures and motion controllers, on HoloLens and immersive HMDs. You access the data for both sources of spatial input through the same APIs in Unity.

Unity provides two primary ways to access spatial input data for Windows Mixed Reality. The common Input.GetButton/Input.GetAxis APIs work across multiple Unity XR SDKs, while the InteractionManager/GestureRecognizer APIs specific to Windows Mixed Reality expose the full set of spatial input data.

Unity XR input APIs

For new projects, we recommend using the new XR input APIs from the beginning.

You can find more information about the XR APIs here.

Unity button/axis mapping table

Unity's Input Manager for Windows Mixed Reality motion controllers supports the button and axis IDs listed below through the Input.GetButton/GetAxis APIs. The "Windows MR-specific" column refers to properties available off of the InteractionSourceState type. Each of these APIs is described in detail in the sections below.

The button/axis ID mappings for Windows Mixed Reality generally match the Oculus button/axis IDs.

The button/axis ID mappings for Windows Mixed Reality differ from OpenVR's mappings in two ways:

  1. The mapping uses touchpad IDs that are distinct from thumbstick IDs, to support controllers with both thumbsticks and touchpads.
  2. The mapping avoids overloading the A and X button IDs for the Menu buttons, to leave them available for the physical ABXY buttons.
| Input | Left hand (Input.GetButton/GetAxis) | Right hand (Input.GetButton/GetAxis) | Windows MR-specific Input API (XR.WSA.Input) |
|---|---|---|---|
| Select trigger pressed | Axis 9 = 1.0 | Axis 10 = 1.0 | selectPressed |
| Select trigger analog value | Axis 9 | Axis 10 | selectPressedAmount |
| Select trigger partially pressed | Button 14 (gamepad compat) | Button 15 (gamepad compat) | selectPressedAmount > 0.0 |
| Menu button pressed | Button 6\* | Button 7\* | menuPressed |
| Grip button pressed | Axis 11 = 1.0 (no analog values)<br>Button 4 (gamepad compat) | Axis 12 = 1.0 (no analog values)<br>Button 5 (gamepad compat) | grasped |
| Thumbstick X (left: -1.0, right: 1.0) | Axis 1 | Axis 4 | thumbstickPosition.x |
| Thumbstick Y (top: -1.0, bottom: 1.0) | Axis 2 | Axis 5 | thumbstickPosition.y |
| Thumbstick pressed | Button 8 | Button 9 | thumbstickPressed |
| Touchpad X (left: -1.0, right: 1.0) | Axis 17\* | Axis 19\* | touchpadPosition.x |
| Touchpad Y (top: -1.0, bottom: 1.0) | Axis 18\* | Axis 20\* | touchpadPosition.y |
| Touchpad touched | Button 18\* | Button 19\* | touchpadTouched |
| Touchpad pressed | Button 16\* | Button 17\* | touchpadPressed |
| 6DoF grip pose or pointer pose | Grip pose only: XR.InputTracking.GetLocalPosition<br>XR.InputTracking.GetLocalRotation | (same as left hand) | Pass Grip or Pointer as an argument: sourceState.sourcePose.TryGetPosition<br>sourceState.sourcePose.TryGetRotation |
| Tracking state | Position accuracy and source loss risk only available through the MR-specific API | (same as left hand) | sourceState.sourcePose.positionAccuracy<br>sourceState.properties.sourceLossRisk |
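As a quick sketch of how the IDs in the table above are used in code: physical buttons can be read directly with Input.GetKey, while axes must first be bound to a logical name in the Input Manager. The axis name "LeftSelectTrigger" below is a hypothetical name assumed to be mapped to Axis 9 in your project's Input Manager.

```csharp
using UnityEngine;

public class SelectTriggerCheck : MonoBehaviour
{
    void Update()
    {
        // Button 14 = left select trigger partially pressed (gamepad compat),
        // read directly by its fully qualified button name.
        if (Input.GetKey("joystick button 14"))
        {
            Debug.Log("Left select trigger partially pressed");
        }

        // "LeftSelectTrigger" is a hypothetical logical name assumed to be
        // bound to Axis 9 in the Input Manager.
        float triggerAmount = Input.GetAxis("LeftSelectTrigger");
        if (triggerAmount > 0.9f)
        {
            Debug.Log("Left select trigger nearly fully pressed");
        }
    }
}
```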

Note

These button/axis IDs differ from the IDs that Unity uses for OpenVR due to collisions in the mappings used by gamepads, Oculus Touch, and OpenVR.

Grip pose vs. pointing pose

Windows Mixed Reality supports motion controllers in a variety of form factors. Each controller's design differs in its relationship between the user's hand position and the natural "forward" direction that apps should use for pointing when rendering the controller.

To better represent these controllers, there are two kinds of poses you can investigate for each interaction source: the grip pose and the pointer pose. Both the grip pose and pointer pose coordinates are expressed by all Unity APIs in global Unity world coordinates.

Grip pose

The grip pose represents the location of the user's palm, either detected by a HoloLens or holding a motion controller.

On immersive headsets, the grip pose is best used to render the user's hand or an object held in the user's hand. The grip pose is also used when visualizing a motion controller. The renderable model provided by Windows for a motion controller uses the grip pose as its origin and center of rotation.
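Because the renderable model's origin matches the grip pose, a controller model can be placed simply by copying the grip pose onto its transform each frame. A minimal sketch (assuming this script sits on the camera rig's root, so local tracking-space coordinates apply directly, and that `leftControllerModel` is assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.XR;

public class ControllerModelFollower : MonoBehaviour
{
    // Hypothetical reference to a controller model, assigned in the Inspector.
    public Transform leftControllerModel;

    void Update()
    {
        // Drive the model from the left controller's grip pose each frame.
        leftControllerModel.localPosition = InputTracking.GetLocalPosition(XRNode.LeftHand);
        leftControllerModel.localRotation = InputTracking.GetLocalRotation(XRNode.LeftHand);
    }
}
```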

The grip pose is defined specifically as follows:

  • The grip position: The palm centroid when holding the controller naturally, adjusted left or right to center the position within the grip. On the Windows Mixed Reality motion controller, this position generally aligns with the Grasp button.
  • The grip orientation's Right axis: When you completely open your hand to form a flat 5-finger pose, the ray that is normal to your palm (forward from left palm, backward from right palm)
  • The grip orientation's Forward axis: When you close your hand partially (as if holding the controller), the ray that points "forward" through the tube formed by your non-thumb fingers.
  • The grip orientation's Up axis: The Up axis implied by the Right and Forward definitions.

You can access the grip pose through either Unity's cross-vendor input API (XR.InputTracking.GetLocalPosition/Rotation) or through the Windows MR-specific API (sourceState.sourcePose.TryGetPosition/Rotation, requesting pose data for the Grip node).

Pointer pose

The pointer pose represents the tip of the controller pointing forward.

The system-provided pointer pose is best used to raycast when you're rendering the controller model itself. If you're rendering some other virtual object in place of the controller, such as a virtual gun, you should point with a ray that's most natural for that virtual object, such as a ray that travels along the barrel of the app-defined gun model. Because users can see the virtual object and not the physical controller, pointing with the virtual object will likely be more natural for those using your app.

Currently, the pointer pose is available in Unity only through the Windows MR-specific API, sourceState.sourcePose.TryGetPosition/Rotation, passing in InteractionSourceNode.Pointer as the argument.
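A minimal sketch of retrieving the pointer pose and visualizing its ray, using the polling API described later in this article:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class PointerRayVisualizer : MonoBehaviour
{
    void Update()
    {
        foreach (var sourceState in InteractionManager.GetCurrentReading())
        {
            Vector3 pointerPosition;
            Quaternion pointerRotation;
            // Request the Pointer node rather than the default Grip node.
            if (sourceState.sourcePose.TryGetPosition(out pointerPosition, InteractionSourceNode.Pointer) &&
                sourceState.sourcePose.TryGetRotation(out pointerRotation, InteractionSourceNode.Pointer))
            {
                // Draw the controller's pointing ray in the Scene view for debugging.
                Debug.DrawRay(pointerPosition, pointerRotation * Vector3.forward, Color.green);
            }
        }
    }
}
```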

Controller tracking state

Like the headsets, the Windows Mixed Reality motion controller requires no setup of external tracking sensors. Instead, the controllers are tracked by sensors in the headset itself.

If the user moves the controllers out of the headset's field of view, Windows continues to infer controller positions in most cases. When the controller has lost visual tracking for long enough, the controller's positions will drop to approximate-accuracy positions.

At this point, the system will body-lock the controller to the user, tracking the user's position as they move around, while still exposing the controller's true orientation using its internal orientation sensors. Many apps that use controllers to point at and activate UI elements can operate normally while in approximate accuracy without the user noticing.

The best way to get a feel for this is to try it yourself. Check out this video with examples of immersive content that works with motion controllers across various tracking states:


Reasoning about tracking state explicitly

Apps that wish to treat positions differently based on tracking state may go further and inspect properties on the controller's state, such as SourceLossRisk and PositionAccuracy:

| Tracking state | SourceLossRisk | PositionAccuracy | TryGetPosition |
|---|---|---|---|
| High accuracy | < 1.0 | High | true |
| High accuracy (at risk of losing) | == 1.0 | High | true |
| Approximate accuracy | == 1.0 | Approximate | true |
| No position | == 1.0 | Approximate | false |

These motion controller tracking states are defined as follows:

  • High accuracy: While the motion controller is within the headset's field of view, it will generally provide high-accuracy positions, based on visual tracking. A moving controller that momentarily leaves the field of view or is momentarily obscured from the headset sensors (e.g. by the user's other hand) will continue to return high-accuracy poses for a short time, based on inertial tracking of the controller itself.
  • High accuracy (at risk of losing): When the user moves the motion controller past the edge of the headset's field of view, the headset will soon be unable to visually track the controller's position. The app knows when the controller has reached this FOV boundary by seeing the SourceLossRisk reach 1.0. At that point, the app may choose to pause controller gestures that require a steady stream of high-quality poses.
  • Approximate accuracy: When the controller has lost visual tracking for long enough, the controller's positions will drop to approximate-accuracy positions. At this point, the system will body-lock the controller to the user, tracking the user's position as they move around, while still exposing the controller's true orientation using its internal orientation sensors. Many apps that use controllers to point at and activate UI elements can operate as normal while in approximate accuracy without the user noticing. Apps with heavier input requirements may choose to sense this drop from High accuracy to Approximate accuracy by inspecting the PositionAccuracy property, for example to give the user a more generous hitbox on off-screen targets during this time.
  • No position: While the controller can operate at approximate accuracy for a long time, sometimes the system knows that even a body-locked position isn't meaningful at the moment. For example, a controller that was turned on may have never been observed visually, or a user may put down a controller that's then picked up by someone else. At those times, the system won't provide any position to the app, and TryGetPosition will return false.
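The state checks described above can be sketched as follows, using the SourceLossRisk and PositionAccuracy properties from the polling API covered later in this article:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TrackingStateCheck : MonoBehaviour
{
    void Update()
    {
        foreach (var sourceState in InteractionManager.GetCurrentReading())
        {
            if (sourceState.properties.sourceLossRisk == 1.0)
            {
                // The controller has reached the FOV boundary: consider pausing
                // gestures that require a steady stream of high-quality poses.
            }

            if (sourceState.sourcePose.positionAccuracy == InteractionSourcePositionAccuracy.Approximate)
            {
                // Body-locked, approximate position: for example, give the user
                // a more generous hitbox on off-screen targets.
            }
        }
    }
}
```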

Common Unity APIs (Input.GetButton/GetAxis)

Namespace: UnityEngine, UnityEngine.XR
Types: Input, XR.InputTracking

Unity currently uses its general Input.GetButton/Input.GetAxis APIs to expose input for the Oculus SDK, the OpenVR SDK, and Windows Mixed Reality, including hands and motion controllers. If your app uses these APIs for input, it can easily support motion controllers across multiple XR SDKs, including Windows Mixed Reality.

Getting a logical button's pressed state

To use the general Unity input APIs, you'll typically start by wiring up buttons and axes to logical names in the Unity Input Manager, binding a button or axis ID to each name. You can then write code that refers to that logical button/axis name.

For example, to map the left motion controller's trigger button to the Submit action, go to Edit > Project Settings > Input within Unity, and expand the properties of the Submit section under Axes. Change the Positive Button or Alt Positive Button property to read joystick button 14, like this:

Unity's InputManager

Your script can then check for the Submit action using Input.GetButton:

if (Input.GetButton("Submit"))
{
  // ...
}

You can add more logical buttons by changing the Size property under Axes.

Getting a physical button's pressed state directly

You can also access buttons manually by their fully qualified name, using Input.GetKey:

if (Input.GetKey("joystick button 8"))
{
  // ...
}

Getting a hand or motion controller's pose

You can access the position and rotation of the controller using XR.InputTracking:

Vector3 leftPosition = InputTracking.GetLocalPosition(XRNode.LeftHand);
Quaternion leftRotation = InputTracking.GetLocalRotation(XRNode.LeftHand);

Note

The above code represents the controller's grip pose (where the user holds the controller), which is useful for rendering a sword or gun in the user's hand, or a model of the controller itself.

The relationship between this grip pose and the pointer pose (where the tip of the controller is pointing) may differ across controllers. At this moment, accessing the controller's pointer pose is only possible through the MR-specific input API, described in the sections below.

Windows-specific APIs (XR.WSA.Input)

Note

If your project is using any of the XR.WSA APIs, these are being phased out in favor of the XR SDK in future Unity releases. For new projects, we recommend using the XR SDK from the beginning. You can find more information about the XR input system and APIs here.

Namespace: UnityEngine.XR.WSA.Input
Types: InteractionManager, InteractionSourceState, InteractionSource, InteractionSourceProperties, InteractionSourceKind, InteractionSourceLocation

To get more detailed information about Windows Mixed Reality hand input (for HoloLens) and motion controllers, you can choose to use the Windows-specific spatial input APIs under the UnityEngine.XR.WSA.Input namespace. This lets you access additional information, such as position accuracy or the source kind, letting you tell hands and controllers apart.

Polling for the state of hands and motion controllers

You can poll for this frame's state for each interaction source (hand or motion controller) using the GetCurrentReading method.

var interactionSourceStates = InteractionManager.GetCurrentReading();
foreach (var interactionSourceState in interactionSourceStates) {
    // ...
}

Each InteractionSourceState you get back represents an interaction source at the current moment in time. The InteractionSourceState exposes info such as:

  • Which kinds of presses are occurring (Select/Menu/Grasp/Touchpad/Thumbstick)

    if (interactionSourceState.selectPressed) {
         // ...
    }
    
  • Other data specific to motion controllers, such as the touchpad and/or thumbstick's XY coordinates and touched state

    if (interactionSourceState.touchpadTouched && interactionSourceState.touchpadPosition.x > 0.5) {
         // ...
    }
    
  • The InteractionSourceKind to know if the source is a hand or a motion controller

    if (interactionSourceState.source.kind == InteractionSourceKind.Hand) {
         // ...
    }
    

Polling for forward-predicted rendering poses

  • When polling for interaction source data from hands and controllers, the poses you get are forward-predicted poses for the moment in time when this frame's photons will reach the user's eyes. Forward-predicted poses are best used for rendering the controller or a held object each frame. If you're targeting a given press or release with the controller, that will be most accurate if you use the historical event APIs described below.

    var sourcePose = interactionSourceState.sourcePose;
    Vector3 sourceGripPosition;
    Quaternion sourceGripRotation;
    if ((sourcePose.TryGetPosition(out sourceGripPosition, InteractionSourceNode.Grip)) &&
         (sourcePose.TryGetRotation(out sourceGripRotation, InteractionSourceNode.Grip))) {
         // ...
    }
    
  • You can also get the forward-predicted head pose for this current frame. As with the source pose, this is useful for rendering a cursor, although targeting a given press or release will be most accurate if you use the historical event APIs described below.

    var headPose = interactionSourceState.headPose;
    var headRay = new Ray(headPose.position, headPose.forward);
    RaycastHit raycastHit;
    if (Physics.Raycast(headPose.position, headPose.forward, out raycastHit, 10)) {
         var cursorPos = raycastHit.point;
         // ...
    }
    

Handling interaction source events

To handle input events as they happen with their accurate historical pose data, you can handle interaction source events instead of polling.

To handle interaction source events:

  • Register for an InteractionManager input event. For each type of interaction event that you're interested in, you need to subscribe to it.

    InteractionManager.InteractionSourcePressed += InteractionManager_InteractionSourcePressed;
    
  • Handle the event. Once you have subscribed to an interaction event, you'll get the callback when appropriate. In the SourcePressed example, this will be after the source was detected and before it's released or lost.

    void InteractionManager_InteractionSourceDetected(InteractionSourceDetectedEventArgs args)
    {
         var interactionSourceState = args.state;

         // args.state has information about:
            // targeting head ray at the time when the event was triggered
            // whether the source is pressed or not
            // properties like position, velocity, source loss risk
            // source id (which hand id for example) and source kind like hand, voice, controller or other
    }
    
    

How to stop handling an event

You need to stop handling an event when you're no longer interested in the event or you're destroying the object that has subscribed to the event. To stop handling the event, you unsubscribe from it.

InteractionManager.InteractionSourcePressed -= InteractionManager_InteractionSourcePressed;

List of interaction source events

The available interaction source events are:

  • InteractionSourceDetected (source becomes active)
  • InteractionSourceLost (becomes inactive)
  • InteractionSourcePressed (tap, button press, or "Select" uttered)
  • InteractionSourceReleased (end of a tap, button released, or end of "Select" uttered)
  • InteractionSourceUpdated (moves or otherwise changes some state)

Events for historical targeting poses that most accurately match a press or release

The polling APIs described earlier give your app forward-predicted poses. While those predicted poses are best for rendering the controller or a virtual handheld object, future poses aren't optimal for targeting, for two key reasons:

  • When the user presses a button on a controller, there can be about 20 ms of wireless latency over Bluetooth before the system receives the press.
  • Then, if you're using a forward-predicted pose, there would be another 10-20 ms of forward prediction applied to target the time when the current frame's photons will reach the user's eyes.

This means that polling gives you a source pose or head pose that is 30-40 ms forward from where the user's head and hands actually were back when the press or release happened. For HoloLens hand input, while there's no wireless transmission delay, there's a similar processing delay to detect the press.

To accurately target based on the user's original intent for a hand or controller press, you should use the historical source pose or head pose from that InteractionSourcePressed or InteractionSourceReleased input event.

You can target a press or release with historical pose data from the user's head or their controller:

  • The head pose at the moment in time when a gesture or controller press occurred, which can be used for targeting to determine what the user was gazing at:

    void InteractionManager_InteractionSourcePressed(InteractionSourcePressedEventArgs args) {
         var interactionSourceState = args.state;
         var headPose = interactionSourceState.headPose;
         RaycastHit raycastHit;
         if (Physics.Raycast(headPose.position, headPose.forward, out raycastHit, 10)) {
             var targetObject = raycastHit.collider.gameObject;
             // ...
         }
    }
    
  • The source pose at the moment in time when a motion controller press occurred, which can be used for targeting to determine what the user was pointing the controller at. This will be the state of the controller that experienced the press. If you're rendering the controller itself, you can request the pointer pose rather than the grip pose, to shoot the targeting ray from what the user will consider the natural tip of that rendered controller:

    void InteractionManager_InteractionSourcePressed(InteractionSourcePressedEventArgs args)
    {
         var interactionSourceState = args.state;
         var sourcePose = interactionSourceState.sourcePose;
         Vector3 sourceGripPosition;
         Quaternion sourceGripRotation;
         if ((sourcePose.TryGetPosition(out sourceGripPosition, InteractionSourceNode.Pointer)) &&
             (sourcePose.TryGetRotation(out sourceGripRotation, InteractionSourceNode.Pointer))) {
             RaycastHit raycastHit;
             if (Physics.Raycast(sourceGripPosition, sourceGripRotation * Vector3.forward, out raycastHit, 10)) {
                 var targetObject = raycastHit.collider.gameObject;
                 // ...
             }
         }
    }
    

Event handlers example

using UnityEngine.XR.WSA.Input;

void Start()
{
    InteractionManager.InteractionSourceDetected += InteractionManager_InteractionSourceDetected;
    InteractionManager.InteractionSourceLost += InteractionManager_InteractionSourceLost;
    InteractionManager.InteractionSourcePressed += InteractionManager_InteractionSourcePressed;
    InteractionManager.InteractionSourceReleased += InteractionManager_InteractionSourceReleased;
    InteractionManager.InteractionSourceUpdated += InteractionManager_InteractionSourceUpdated;
}

void OnDestroy()
{
    InteractionManager.InteractionSourceDetected -= InteractionManager_InteractionSourceDetected;
    InteractionManager.InteractionSourceLost -= InteractionManager_InteractionSourceLost;
    InteractionManager.InteractionSourcePressed -= InteractionManager_InteractionSourcePressed;
    InteractionManager.InteractionSourceReleased -= InteractionManager_InteractionSourceReleased;
    InteractionManager.InteractionSourceUpdated -= InteractionManager_InteractionSourceUpdated;
}

void InteractionManager_InteractionSourceDetected(InteractionSourceDetectedEventArgs args)
{
    // Source was detected
    // args.state has the current state of the source including id, position, kind, etc.
}

void InteractionManager_InteractionSourceLost(InteractionSourceLostEventArgs args)
{
    // Source was lost. This will be after a SourceDetected event and no other events for this
    // source id will occur until it is Detected again
    // args.state has the current state of the source including id, position, kind, etc.
}

void InteractionManager_InteractionSourcePressed(InteractionSourcePressedEventArgs args)
{
    // Source was pressed. This will be after the source was detected and before it is
    // released or lost
    // args.state has the current state of the source including id, position, kind, etc.
}

void InteractionManager_InteractionSourceReleased(InteractionSourceReleasedEventArgs args)
{
    // Source was released. The source would have been detected and pressed before this point.
    // This event will not fire if the source is lost
    // args.state has the current state of the source including id, position, kind, etc.
}

void InteractionManager_InteractionSourceUpdated(InteractionSourceUpdatedEventArgs args)
{
    // Source was updated. The source would have been detected before this point
    // args.state has the current state of the source including id, position, kind, etc.
}

Motion controllers in MRTK v2

You can access gestures and motion controllers from the input manager.

Follow along with tutorials

Step-by-step tutorials, with more detailed customization examples, are available in the Mixed Reality Academy:

MR Input 213 - Motion controllers

Next Development Checkpoint

If you're following the Unity development journey we've laid out, you're in the midst of exploring the MRTK core building blocks. From here, you can continue to the next building block:

Or jump to Mixed Reality platform capabilities and APIs:

You can always go back to the Unity development checkpoints at any time.

See also