Hands and motion controllers in DirectX

Note

This article relates to the legacy WinRT native APIs. For new native app projects, we recommend using the OpenXR API.

In Windows Mixed Reality, both hand and motion controller input is handled through the spatial input APIs, found in the Windows.UI.Input.Spatial namespace. This enables you to easily handle common actions like Select presses the same way across both hands and motion controllers.

Getting started

To access spatial input in Windows Mixed Reality, start with the SpatialInteractionManager interface. You can access this interface by calling SpatialInteractionManager::GetForCurrentView, typically sometime during app startup.

using namespace winrt::Windows::UI::Input::Spatial;

SpatialInteractionManager interactionManager = SpatialInteractionManager::GetForCurrentView();

The SpatialInteractionManager's job is to provide access to SpatialInteractionSources, which represent a source of input. There are three kinds of SpatialInteractionSources available in the system (a sketch of distinguishing them follows the list below).

  • Hand represents a user's detected hand. Hand sources offer different features based on the device, ranging from basic gestures on HoloLens to fully articulated hand tracking on HoloLens 2.
  • Controller represents a paired motion controller. Motion controllers can offer different capabilities, for example, Select triggers, Menu buttons, Grasp buttons, touchpads, and thumbsticks.
  • Voice represents the user's voice speaking system-detected keywords. For example, this source will inject a Select press and release whenever the user says "Select".
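
Your app can branch on which kind of source it's dealing with through the source's Kind property. Here's a minimal sketch; HandleSource is a hypothetical helper, not part of the API:

using namespace winrt::Windows::UI::Input::Spatial;

// Given a SpatialInteractionSource (for example, from an event or from polling):
void HandleSource(SpatialInteractionSource const& source)
{
    switch (source.Kind())
    {
    case SpatialInteractionSourceKind::Hand:
        // A detected hand
        break;
    case SpatialInteractionSourceKind::Controller:
        // A paired motion controller
        break;
    case SpatialInteractionSourceKind::Voice:
        // The user's voice speaking a system-detected keyword
        break;
    default:
        break;
    }
}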

Per-frame data for a source is represented by the SpatialInteractionSourceState interface. There are two different ways to access this data, depending on whether you want to use an event-driven or polling-based model in your application.

Event-driven input

The SpatialInteractionManager provides a number of events that your app can listen for. A few examples include SourcePressed, SourceReleased, and SourceUpdated.

For example, the following code hooks up an event handler called MyApp::OnSourcePressed to the SourcePressed event. This allows your app to detect presses on any type of interaction source.

using namespace winrt::Windows::UI::Input::Spatial;

auto interactionManager = SpatialInteractionManager::GetForCurrentView();
interactionManager.SourcePressed({ this, &MyApp::OnSourcePressed });

This pressed event is sent to your app asynchronously, along with the corresponding SpatialInteractionSourceState at the time the press happened. Your app or game engine may want to start processing right away or queue up the event data in your input processing routine. Here's an event handler function for the SourcePressed event, which checks whether the select button has been pressed.

using namespace winrt::Windows::UI::Input::Spatial;

void MyApp::OnSourcePressed(SpatialInteractionManager const& sender, SpatialInteractionSourceEventArgs const& args)
{
    if (args.PressKind() == SpatialInteractionPressKind::Select)
    {
        // Select button was pressed, update app state
    }
}

The above code only checks for the 'Select' press, which corresponds to the primary action on the device. Examples include doing an AirTap on HoloLens or pulling the trigger on a motion controller. 'Select' presses represent the user's intention to activate the hologram they're targeting. The SourcePressed event will fire for a number of different buttons and gestures, and you can inspect other properties on the SpatialInteractionSource to test for those cases, as sketched below.
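
For example, one option (a sketch of one possible pattern, extending the handler above) is to branch on the PressKind reported with the event; Menu, Grasp, Touchpad, and Thumbstick are other values of the SpatialInteractionPressKind enumeration:

using namespace winrt::Windows::UI::Input::Spatial;

void MyApp::OnSourcePressed(SpatialInteractionManager const& sender, SpatialInteractionSourceEventArgs const& args)
{
    switch (args.PressKind())
    {
    case SpatialInteractionPressKind::Select:
        // Primary action: AirTap, trigger pull, or spoken "Select"
        break;
    case SpatialInteractionPressKind::Menu:
        // Menu button on a motion controller
        break;
    case SpatialInteractionPressKind::Grasp:
        // Grasp button on a motion controller
        break;
    default:
        break;
    }
}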

Polling-based input

You can also use SpatialInteractionManager to poll for the current state of input every frame. To do this, call GetDetectedSourcesAtTimestamp every frame. This function returns an array containing one SpatialInteractionSourceState for every active SpatialInteractionSource. This means one for each active motion controller, one for each tracked hand, and one for speech if a 'select' command was recently uttered. You can then inspect the properties on each SpatialInteractionSourceState to drive input into your application.

Here's an example of how to check for the 'select' action using the polling method. The prediction variable represents a HolographicFramePrediction object, which can be obtained from the HolographicFrame.

using namespace winrt::Windows::UI::Input::Spatial;

auto interactionManager = SpatialInteractionManager::GetForCurrentView();
auto sourceStates = interactionManager.GetDetectedSourcesAtTimestamp(prediction.Timestamp());

for (auto& sourceState : sourceStates)
{
    if (sourceState.IsSelectPressed())
    {
        // Select button is down, update app state
    }
}

Each SpatialInteractionSource has an ID, which you can use to identify new sources and correlate existing sources from frame to frame. Hands get a new ID every time they leave and enter the FOV, but controller IDs remain static for the duration of the session. You can use the events on SpatialInteractionManager, such as SourceDetected and SourceLost, to react when hands enter or leave the device's view, or when motion controllers are turned on/off or are paired/unpaired.
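
As a sketch of that pattern (OnSourceDetected and OnSourceLost are placeholder names for your own handlers), you can key per-source app state off the source's Id:

using namespace winrt::Windows::UI::Input::Spatial;

interactionManager.SourceDetected({ this, &MyApp::OnSourceDetected });
interactionManager.SourceLost({ this, &MyApp::OnSourceLost });

void MyApp::OnSourceDetected(SpatialInteractionManager const& sender, SpatialInteractionSourceEventArgs const& args)
{
    // A hand entered the FOV, or a controller was turned on or paired.
    uint32_t sourceId = args.State().Source().Id();
    // Begin tracking app state for this source, keyed by sourceId.
}

void MyApp::OnSourceLost(SpatialInteractionManager const& sender, SpatialInteractionSourceEventArgs const& args)
{
    // A hand left the FOV, or a controller was turned off or unpaired.
    uint32_t sourceId = args.State().Source().Id();
    // Clean up app state for this source.
}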

Predicted vs. historical poses

GetDetectedSourcesAtTimestamp has a timestamp parameter. This enables you to request state and pose data that is either predicted or historical, letting you correlate spatial interactions with other sources of input. For example, when rendering the hand's position in the current frame, you can pass in the predicted timestamp provided by the HolographicFrame. This enables the system to forward-predict the hand position to closely align with the rendered frame output, minimizing perceived latency.

However, such a predicted pose doesn't produce an ideal pointing ray for targeting with an interaction source. For example, when a motion controller button is pressed, it can take up to 20 ms for that event to bubble up through Bluetooth to the operating system. Similarly, after a user does a hand gesture, some amount of time may pass before the system detects the gesture and your app then polls for it. By the time your app polls for a state change, the head and hand poses used to target that interaction actually happened in the past. If you target by passing your current HolographicFrame's timestamp to GetDetectedSourcesAtTimestamp, the pose will instead be forward predicted to the targeting ray at the time the frame will be displayed, which could be more than 20 ms in the future. This future pose is good for rendering the interaction source, but compounds our time problem for targeting the interaction, as the user's targeting occurred in the past.

Fortunately, the SourcePressed, SourceReleased, and SourceUpdated events provide the historical State associated with each input event. This directly includes the historical head and hand poses available through TryGetPointerPose, along with a historical Timestamp that you can pass to other APIs to correlate with this event.

That leads to the following best practices when rendering and targeting with hands and controllers each frame:

  • For hand/controller rendering each frame, your app should poll for the forward-predicted pose of each interaction source at the current frame's photon time. You can poll for all interaction sources by calling GetDetectedSourcesAtTimestamp each frame, passing in the predicted timestamp provided by HolographicFrame::CurrentPrediction.
  • For hand/controller targeting upon a press or release, your app should handle pressed/released events, raycasting based on the historical head or hand pose for that event. You get this targeting ray by handling the SourcePressed or SourceReleased event, getting the State property from the event arguments, and then calling its TryGetPointerPose method (see the sketch after this list).
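
Putting the second practice into code, here's a minimal sketch, assuming m_coordinateSystem is a SpatialCoordinateSystem your app already maintains:

using namespace winrt::Windows::UI::Input::Spatial;
using namespace winrt::Windows::Foundation::Numerics;

void MyApp::OnSourcePressed(SpatialInteractionManager const& sender, SpatialInteractionSourceEventArgs const& args)
{
    // The event's State captures the historical pose from the moment of the press.
    auto pointerPose = args.State().TryGetPointerPose(m_coordinateSystem);
    if (pointerPose)
    {
        // Raycast with the historical head gaze to find what the user was targeting.
        float3 rayOrigin = pointerPose.Head().Position();
        float3 rayDirection = pointerPose.Head().ForwardDirection();
        // Intersect rayOrigin/rayDirection with your scene (app-specific).
    }
}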

Cross-device input properties

The SpatialInteractionSource API supports controllers and hand tracking systems with a wide range of capabilities. A number of these capabilities are common between device types. For example, hand tracking and motion controllers both provide a 'select' action and a 3D position. Wherever possible, the API maps these common capabilities to the same properties on the SpatialInteractionSource. This enables applications to more easily support a wide range of input types. The following table describes the properties that are supported, and how they compare across input types.

Property | Description | HoloLens (1st gen) Gestures | Motion Controllers | Articulated Hands
SpatialInteractionSource::Handedness | Right or left hand / controller. | Not Supported | Supported | Supported
SpatialInteractionSourceState::IsSelectPressed | Current state of the primary button. | Air Tap | Trigger | Relaxed Air Tap (upright pinch)
SpatialInteractionSourceState::IsGrasped | Current state of the grab button. | Not Supported | Grab button | Pinch or Closed Hand
SpatialInteractionSourceState::IsMenuPressed | Current state of the menu button. | Not Supported | Menu Button | Not Supported
SpatialInteractionSourceLocation::Position | XYZ location of the hand or grip position on the controller. | Palm location | Grip pose position | Palm location
SpatialInteractionSourceLocation::Orientation | Quaternion representing the orientation of the hand or grip pose on the controller. | Not Supported | Grip pose orientation | Palm orientation
SpatialPointerInteractionSourcePose::Position | Origin of the pointing ray. | Not Supported | Supported | Supported
SpatialPointerInteractionSourcePose::ForwardDirection | Direction of the pointing ray. | Not Supported | Supported | Supported

Some of the above properties aren't available on all devices, and the API provides a means to test for this. For example, you can inspect the SpatialInteractionSource::IsGraspSupported property to determine whether the source provides a grasp action, as sketched below.
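
For example, a sketch of capability checks over the source states returned by GetDetectedSourcesAtTimestamp:

using namespace winrt::Windows::UI::Input::Spatial;

for (auto& sourceState : sourceStates)
{
    auto source = sourceState.Source();

    if (source.IsGraspSupported() && sourceState.IsGrasped())
    {
        // The hand is pinched/closed, or the controller's grab button is down.
    }

    if (source.Handedness() == SpatialInteractionSourceHandedness::Left)
    {
        // Apply left-hand-specific logic.
    }
}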

Grip pose vs. pointing pose

Windows Mixed Reality supports motion controllers in different form factors. It also supports articulated hand tracking systems. All of these systems have different relationships between the hand position and the natural "forward" direction that apps should use for pointing or rendering objects in the user's hand. To support all of this, there are two types of 3D poses provided for both hand tracking and motion controllers. The first is grip pose, which represents the user's hand position. The second is pointing pose, which represents a pointing ray originating from the user's hand or controller. So, if you want to render the user's hand or an object held in the user's hand, such as a sword or gun, use the grip pose. If you want to raycast from the controller or hand, for example when the user is pointing at UI, use the pointing pose.

You can access the grip pose through SpatialInteractionSourceState::Properties::TryGetLocation(...). It's defined as follows:

  • The grip position: The palm centroid when holding the controller naturally, adjusted left or right to center the position within the grip.
  • The grip orientation's Right axis: When you completely open your hand to form a flat 5-finger pose, the ray that is normal to your palm (forward from left palm, backward from right palm).
  • The grip orientation's Forward axis: When you close your hand partially (as if holding the controller), the ray that points "forward" through the tube formed by your non-thumb fingers.
  • The grip orientation's Up axis: The Up axis implied by the Right and Forward definitions.

You can access the pointer pose through SpatialInteractionSourceState::Properties::TryGetLocation(...)::SourcePointerPose or SpatialInteractionSourceState::TryGetPointerPose(...)::TryGetInteractionSourcePose. A sketch of reading both poses follows.
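
Here's a minimal sketch, assuming coordinateSystem is a SpatialCoordinateSystem of your choosing and sourceState is a SpatialInteractionSourceState:

using namespace winrt::Windows::UI::Input::Spatial;
using namespace winrt::Windows::Foundation::Numerics;

// Grip pose: where to render the hand or a held object.
auto location = sourceState.Properties().TryGetLocation(coordinateSystem);
if (location && location.Position() && location.Orientation())
{
    float3 gripPosition = location.Position().Value();
    quaternion gripOrientation = location.Orientation().Value();
}

// Pointing pose: where to raycast from for targeting.
auto pointerPose = sourceState.TryGetPointerPose(coordinateSystem);
if (pointerPose)
{
    auto sourcePose = pointerPose.TryGetInteractionSourcePose(sourceState.Source());
    if (sourcePose)
    {
        float3 rayOrigin = sourcePose.Position();
        float3 rayDirection = sourcePose.ForwardDirection();
    }
}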

Controller-specific input properties

For controllers, the SpatialInteractionSource has a Controller property with additional capabilities.

  • HasThumbstick: If true, the controller has a thumbstick. Inspect the ControllerProperties property of the SpatialInteractionSourceState to acquire the thumbstick x and y values (ThumbstickX and ThumbstickY), as well as its pressed state (IsThumbstickPressed).
  • HasTouchpad: If true, the controller has a touchpad. Inspect the ControllerProperties property of the SpatialInteractionSourceState to acquire the touchpad x and y values (TouchpadX and TouchpadY), and to know if the user is touching the pad (IsTouchpadTouched) and if they're pressing the touchpad down (IsTouchpadPressed).
  • SimpleHapticsController: The SimpleHapticsController API for the controller allows you to inspect the haptics capabilities of the controller, and it also allows you to control haptic feedback.

The range for touchpad and thumbstick is -1 to 1 for both axes (from bottom to top, and from left to right). The range for the analog trigger, which is accessed using the SpatialInteractionSourceState::SelectPressedValue property, is 0 to 1. A value of 1 correlates with IsSelectPressed being equal to true; any other value correlates with IsSelectPressed being equal to false. A sketch of reading these values follows.
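
Here's a sketch of reading these values each frame, assuming sourceState is a SpatialInteractionSourceState obtained from polling or an event:

using namespace winrt::Windows::UI::Input::Spatial;

auto source = sourceState.Source();
if (source.Kind() == SpatialInteractionSourceKind::Controller)
{
    auto controller = source.Controller();
    auto controllerProperties = sourceState.ControllerProperties();

    if (controller.HasThumbstick())
    {
        double thumbstickX = controllerProperties.ThumbstickX(); // -1 (left) to 1 (right)
        double thumbstickY = controllerProperties.ThumbstickY(); // -1 (bottom) to 1 (top)
        bool thumbstickPressed = controllerProperties.IsThumbstickPressed();
    }

    double triggerValue = sourceState.SelectPressedValue(); // analog trigger, 0 to 1
}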

Articulated hand tracking

The Windows Mixed Reality API provides full support for articulated hand tracking, for example on HoloLens 2. Articulated hand tracking can be used to implement direct manipulation and point-and-commit input models in your applications. It can also be used to author fully custom interactions.

Hand skeleton

Articulated hand tracking provides a 25-joint skeleton that enables many different types of interactions. The skeleton provides five joints for the index/middle/ring/little fingers, four joints for the thumb, and one wrist joint. The wrist joint serves as the base of the hierarchy. The following picture illustrates the layout of the skeleton.

[Image: Hand skeleton layout]

In most cases, each joint is named based on the bone that it represents. Since there are two bones at every joint, we use a convention of naming each joint based on the child bone at that location. The child bone is defined as the bone further from the wrist. For example, the "Index Proximal" joint contains the beginning position of the index proximal bone, and the orientation of that bone. It doesn't contain the ending position of the bone. If you need that, you'd get it from the next joint in the hierarchy, the "Index Intermediate" joint.

In addition to the 25 hierarchical joints, the system provides a palm joint. The palm isn't typically considered part of the skeletal structure. It's provided only as a convenient way to get the hand's overall position and orientation.

The following information is provided for each joint:

Name | Description
Position | 3D position of the joint, available in any requested coordinate system.
Orientation | 3D orientation of the bone, available in any requested coordinate system.
Radius | Distance to the surface of the skin at the joint position. Useful for tuning direct interactions or visualizations that rely on finger width.
Accuracy | Provides a hint on how confident the system feels about this joint's information.

You can access the hand skeleton data through a function on the SpatialInteractionSourceState. The function is called TryGetHandPose, and it returns an object called HandPose. If the source doesn't support articulated hands, this function will return null. Once you have a HandPose, you can get current joint data by calling TryGetJoint with the name of the joint you're interested in. The data is returned as a JointPose structure. The following code gets the position of the index finger tip. The variable currentState represents an instance of SpatialInteractionSourceState.

using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::Foundation::Numerics;

auto handPose = currentState.TryGetHandPose();
if (handPose)
{
    JointPose joint;
    if (handPose.TryGetJoint(desiredCoordinateSystem, HandJointKind::IndexTip, joint))
    {
        float3 indexTipPosition = joint.Position;

        // Do something with the index tip position
    }
}
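
If you need several joints at once, HandPose also provides a TryGetJoints method that fills an array in a single call. Here's a sketch; the pinch-distance computation is an illustration, not part of the API:

using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::Foundation::Numerics;

auto handPose = currentState.TryGetHandPose();
if (handPose)
{
    std::array<HandJointKind, 2> jointKinds = { HandJointKind::ThumbTip, HandJointKind::IndexTip };
    std::array<JointPose, 2> jointPoses{};
    if (handPose.TryGetJoints(desiredCoordinateSystem, jointKinds, jointPoses))
    {
        // For example, a small thumb-to-index distance suggests a pinch.
        float pinchDistance = length(jointPoses[1].Position - jointPoses[0].Position);
    }
}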

Hand mesh

The articulated hand tracking API allows for a fully deformable triangle hand mesh. This mesh can deform in real time along with the hand skeleton, and is useful for visualization and advanced physics techniques. To access the hand mesh, you need to first create a HandMeshObserver object by calling TryCreateHandMeshObserverAsync on the SpatialInteractionSource. This only needs to be done once per source, typically the first time you see it. That means you'll call this function to create a HandMeshObserver object whenever a hand enters the FOV. This is an async function, so you'll have to deal with a bit of concurrency here. Once available, you can ask the HandMeshObserver object for the triangle index buffer by calling GetTriangleIndices. Indices don't change frame over frame, so you can get those once and cache them for the lifetime of the source. Indices are provided in clockwise winding order.

The following code spins up a detached std::thread to create the mesh observer and extracts the index buffer once the mesh observer is available. It starts from a variable called currentState, which is an instance of SpatialInteractionSourceState representing a tracked hand.

using namespace winrt::Windows::Perception::People;

std::thread createObserverThread([this, currentState]()
{
    HandMeshObserver newHandMeshObserver = currentState.Source().TryCreateHandMeshObserverAsync().get();
    if (newHandMeshObserver)
    {
        unsigned indexCount = newHandMeshObserver.TriangleIndexCount();
        std::vector<unsigned short> indices(indexCount);
        newHandMeshObserver.GetTriangleIndices(indices);

        // Save the indices and handMeshObserver for later use - and use a mutex to synchronize access if needed!
     }
});
createObserverThread.detach();

Starting a detached thread is just one option for handling async calls. Alternatively, you could use the co_await functionality supported by C++/WinRT.
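
For example, here's a sketch of the same observer creation written as a C++/WinRT coroutine; the method name and the fire_and_forget pattern are one possible choice, not the only one:

using namespace winrt;
using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::UI::Input::Spatial;

fire_and_forget MyApp::StartHandMeshObserver(SpatialInteractionSource source)
{
    // co_await suspends here and resumes once the observer is ready.
    HandMeshObserver newHandMeshObserver = co_await source.TryCreateHandMeshObserverAsync();
    if (newHandMeshObserver)
    {
        unsigned indexCount = newHandMeshObserver.TriangleIndexCount();
        std::vector<unsigned short> indices(indexCount);
        newHandMeshObserver.GetTriangleIndices(indices);

        // Save the indices and the observer, synchronizing access as needed.
    }
}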

Once you have a HandMeshObserver object, you should hold onto it for the duration that its corresponding SpatialInteractionSource is active. Then each frame, you can ask it for the latest vertex buffer that represents the hand by calling GetVertexStateForPose and passing in a HandPose instance that represents the pose that you want vertices for. Each vertex in the buffer has a position and a normal. Here's an example of how to get the current set of vertices for a hand mesh. As before, the currentState variable represents an instance of SpatialInteractionSourceState.

using namespace winrt::Windows::Perception::People;

auto handPose = currentState.TryGetHandPose();
if (handPose)
{
    std::vector<HandMeshVertex> vertices(handMeshObserver.VertexCount());
    auto vertexState = handMeshObserver.GetVertexStateForPose(handPose);
    vertexState.GetVertices(vertices);

    auto meshTransform = vertexState.CoordinateSystem().TryGetTransformTo(desiredCoordinateSystem);
    if (meshTransform != nullptr)
    {
        // Do something with the vertices and mesh transform, along with the indices that you saved earlier
    }
}

In contrast to skeleton joints, the hand mesh API doesn't allow you to specify a coordinate system for the vertices. Instead, the HandMeshVertexState specifies the coordinate system that the vertices are provided in. You can then get a mesh transform by calling TryGetTransformTo and specifying the coordinate system you want. You'll need to use this mesh transform whenever you work with the vertices. This approach reduces CPU overhead, especially if you're only using the mesh for rendering purposes.
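
For example, here's a sketch of applying that transform to the vertices fetched above, continuing from the previous snippet:

using namespace winrt::Windows::Foundation::Numerics;

if (meshTransform != nullptr)
{
    float4x4 vertexTransform = meshTransform.Value();
    for (auto const& vertex : vertices)
    {
        // Transform each vertex position into desiredCoordinateSystem.
        float3 transformedPosition = transform(vertex.Position, vertexTransform);
        // Use transformedPosition for rendering or physics.
    }
}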

Gaze and Commit composite gestures

For applications using the gaze-and-commit input model, particularly on HoloLens (first gen), the Spatial Input API provides an optional SpatialGestureRecognizer that can be used to enable composite gestures built on top of the 'select' event. By routing interactions from the SpatialInteractionManager to a hologram's SpatialGestureRecognizer, apps can detect Tap, Hold, Manipulation, and Navigation events uniformly across hands, voice, and spatial input devices, without having to handle presses and releases manually.

SpatialGestureRecognizer performs only the minimal disambiguation between the set of gestures that you request. For example, if you request just Tap, the user may hold their finger down as long as they like and a Tap will still occur. If you request both Tap and Hold, after about a second of holding down their finger the gesture will promote to a Hold, and a Tap will no longer occur.

To use SpatialGestureRecognizer, handle the SpatialInteractionManager's InteractionDetected event and grab the SpatialPointerPose exposed there. Use the user's head gaze ray from this pose to intersect with the holograms and surface meshes in the user's surroundings to determine what the user intends to interact with. Then, route the SpatialInteraction in the event arguments to the target hologram's SpatialGestureRecognizer, using its CaptureInteraction method. This starts interpreting that interaction according to the SpatialGestureSettings set on that recognizer at creation time - or by TrySetGestureSettings. A sketch of that flow follows.
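
Here's a minimal sketch, assuming m_coordinateSystem and a per-hologram member m_gestureRecognizer are maintained by your app:

using namespace winrt::Windows::UI::Input::Spatial;

// At setup time: create a recognizer for the gestures this hologram cares about.
m_gestureRecognizer = SpatialGestureRecognizer(SpatialGestureSettings::Tap | SpatialGestureSettings::Hold);
m_gestureRecognizer.Tapped([](SpatialGestureRecognizer const& sender, SpatialTappedEventArgs const& args)
{
    // The hologram was tapped.
});

interactionManager.InteractionDetected({ this, &MyApp::OnInteractionDetected });

void MyApp::OnInteractionDetected(SpatialInteractionManager const& sender, SpatialInteractionDetectedEventArgs const& args)
{
    auto pointerPose = args.TryGetPointerPose(m_coordinateSystem);
    if (pointerPose)
    {
        // Raycast the head gaze against your holograms (app-specific) to pick a target.
        // If this hologram is the target, route the interaction to its recognizer:
        m_gestureRecognizer.CaptureInteraction(args.Interaction());
    }
}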

On HoloLens (first gen), interactions and gestures should derive their targeting from the user's head gaze, rather than rendering or interacting at the hand's location. Once an interaction has started, relative motions of the hand may be used to control the gesture, as with the Manipulation or Navigation gesture.

See also