Point and commit with hands

Cursors

Point and commit with hands is an input model that lets users target, select, and manipulate 2D and 3D content that's out of reach. This "far" interaction technique is unique to mixed reality because humans don't naturally interact with the real world that way. For example, in the superhero movie X-Men, the character Magneto can manipulate objects in the distance with his hands. This isn't something humans can do in reality. In both HoloLens (AR) and Mixed Reality (MR), we equip users with this magical power, breaking the physical constraints of the real world. Not only is it a fun holographic experience, but it also makes user interactions more effective and efficient.

Device support

| Input model | HoloLens (1st gen) | HoloLens 2 | Immersive headsets |
| --- | --- | --- | --- |
| Point and commit with hands | ❌ Not supported | ✔️ Recommended | ✔️ Recommended |

"Point and commit with hands" is one of the new features that use the new articulated hand-tracking system. It's also the primary input model on immersive headsets that use motion controllers.



Hand rays

On HoloLens 2, we created a hand ray that shoots out from the center of the user's palm. This ray is treated as an extension of the hand. A donut-shaped cursor is attached to the end of the ray to indicate the location where the ray intersects with a target object. The object that the cursor lands on can then receive gestural commands from the hand.

This basic gestural command is triggered by using the thumb and index finger to do the air-tap action. By using the hand ray to point and air tap to commit, users can activate a button or a hyperlink. With more composite gestures, users can navigate web content and manipulate 3D objects from a distance. The visual design of the hand ray should also react to these point and commit states, as described and shown below:

Hand rays pointing
Pointing state
In the pointing state, the ray is a dashed line and the cursor is a donut shape.

Hand rays commit
Commit state
In the commit state, the ray turns into a solid line and the cursor shrinks to a dot.
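
As an illustration of how an app might respond to the commit (air tap) on a targeted object, here's a minimal sketch assuming MRTK 2.x for Unity. The component and its class name (`FarTapResponder`) are hypothetical; it relies only on MRTK's `IMixedRealityPointerHandler` interface and requires a collider on the object so the hand ray can hit it.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical example component: attach to an object with a collider so the
// hand ray can target it. OnPointerClicked fires when the user commits (air taps).
public class FarTapResponder : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // The commit (air tap) completed while this object was targeted.
        Debug.Log($"Air tapped via pointer: {eventData.Pointer.PointerName}");
    }

    // Pointer states we don't need for a simple click response.
    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```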



Transition between near and far

Instead of using a specific gesture like "pointing with the index finger" to direct the ray, we designed the ray to come out from the center of the user's palm. This way, the five fingers are freed up and reserved for more manipulative gestures like pinch and grab. With this design, we create only one mental model: the same set of hand gestures is used for both near and far interaction. You can use the same grab gesture to manipulate objects at different distances. The invocation of the rays is automatic and proximity-based, as follows:

Near manipulation
When an object is within arm's length (roughly 50 cm), the rays are turned off automatically, encouraging near interaction.

Far manipulation
When the object is farther than 50 cm, the rays are turned on. The transition should be smooth and seamless.
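
To make the 50 cm threshold concrete, here's an illustrative Unity sketch of distance-based ray toggling. It isn't how the shell or MRTK implements the transition (they handle it automatically); the `handTransform`, `target`, and `rayVisual` references are placeholders you would wire up yourself.

```csharp
using UnityEngine;

// Illustrative only: hides a ray visual when the hand is within arm's length
// of a target, mirroring the ~50 cm near/far threshold described above.
public class NearFarRayToggle : MonoBehaviour
{
    public Transform handTransform;    // placeholder: tracked hand (for example, the palm)
    public Transform target;           // placeholder: object the user may interact with
    public GameObject rayVisual;       // placeholder: the ray's line visual
    public float nearThreshold = 0.5f; // arm's length, roughly 50 cm

    void Update()
    {
        float distance = Vector3.Distance(handTransform.position, target.position);

        // Near range: turn the ray off and favor direct (near) interaction.
        // Far range: turn the ray on so the user can point and commit.
        rayVisual.SetActive(distance > nearThreshold);
    }
}
```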



2D slate interaction

A 2D slate is a holographic container hosting 2D app content, such as a web browser. The design concept for interacting with a 2D slate from afar is to use hand rays to target and air tap to select. After targeting with a hand ray, users can air tap to trigger a hyperlink or a button. They can use one hand to "air tap and drag" to scroll slate content up and down. The relative motion of using two hands to air tap and drag can zoom the slate content in and out.
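
As a rough sketch of the "air tap and drag" scroll described above (assuming MRTK 2.x; a shipping slate would use MRTK's own slate and scrolling components instead), a hypothetical handler could track the pointer while it drags and move the slate's content:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical sketch: scrolls a slate's content while a far pointer drags it.
// "contentRoot" and "scrollScale" are placeholders, not MRTK slate properties.
public class SlateDragScroll : MonoBehaviour, IMixedRealityPointerHandler
{
    public Transform contentRoot;     // placeholder: parent transform of the 2D content
    public float scrollScale = 1.0f;  // placeholder: hand motion to scroll multiplier

    private Vector3 lastPointerPosition;

    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        lastPointerPosition = eventData.Pointer.Position;
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData)
    {
        Vector3 delta = eventData.Pointer.Position - lastPointerPosition;
        lastPointerPosition = eventData.Pointer.Position;

        // Map the vertical component of the drag to vertical content movement.
        contentRoot.localPosition += Vector3.up * (delta.y * scrollScale);
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
    public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
}
```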

Targeting the hand ray at the corners and edges reveals the closest manipulation affordance. By grabbing and dragging the manipulation affordances, users can do uniform scaling through the corner affordances and reflow the slate via the edge affordances. Grabbing and dragging the holobar at the top of the 2D slate lets users move the entire slate.

2D slate interaction: click
Click

2D slate interaction: scroll
Scroll

2D slate interaction: zoom
Zoom


For manipulating the 2D slate:

  • Users point the hand ray at the corners or edges to reveal the closest manipulation affordance.
  • By applying a manipulation gesture on the affordance, users can do uniform scaling through the corner affordance and reflow the slate via the edge affordance.
  • By applying a manipulation gesture on the holobar at the top of the 2D slate, users can move the entire slate.


3D object manipulation

In direct manipulation, there are two ways for users to manipulate 3D objects: affordance-based manipulation and non-affordance-based manipulation. In the point and commit model, users can achieve exactly the same tasks through the hand rays. No extra learning is needed.

Affordance-based manipulation

Users use hand rays to point at an object and reveal the bounding box and manipulation affordances. They can apply the manipulation gesture on the bounding box to move the whole object, on the edge affordances to rotate it, and on the corner affordances to scale it uniformly.

3D object manipulation: far move
Move

3D object manipulation: far rotate
Rotate

3D object manipulation: far scale
Scale

Non-affordance-based manipulation

Users point with hand rays to reveal the bounding box, then apply manipulation gestures directly on it. With one hand, the translation and rotation of the object are associated with the motion and orientation of the hand. With two hands, users can translate, scale, and rotate it according to the relative motions of the two hands.
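
In MRTK for Unity, both styles are typically enabled by adding stock components rather than writing gesture code. As a minimal sketch (assuming MRTK 2.x and that the object already has a collider), `BoundsControl` provides the bounding box with corner and edge affordances, and `ObjectManipulator` enables direct one- and two-hand manipulation:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using Microsoft.MixedReality.Toolkit.UI.BoundsControl;
using UnityEngine;

// Minimal sketch: makes the attached object manipulable with hand rays.
// Assumes the GameObject already has a collider so it can be targeted.
public class MakeManipulable : MonoBehaviour
{
    void Start()
    {
        // Non-affordance-based: move/rotate/scale by grabbing the object directly.
        gameObject.AddComponent<ObjectManipulator>();

        // Affordance-based: bounding box with edge (rotate) and corner (scale) handles.
        gameObject.AddComponent<BoundsControl>();
    }
}
```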



Instinctual gestures

The concept of instinctual gestures for point and commit is similar to that for direct manipulation with hands. The gestures users do on a 3D object are guided by the design of UI affordances. For example, a small control point might motivate users to pinch with their thumb and index finger, while a user might want to use all five fingers to grab a larger object.

Instinctual gestures: far, small object
Small object

Instinctual gestures: far, medium object
Medium object

Instinctual gestures: far, large object
Large object



Symmetric design between hands and 6 DoF controllers

The concept of point and commit for far interaction was created and defined for the Mixed Reality Portal (MRP). In this scenario, a user wears an immersive headset and interacts with 3D objects via motion controllers. The motion controllers shoot out rays for pointing at and manipulating far objects. There are buttons on the controllers for further committing different actions. We apply the interaction model of rays and attach them to both hands. With this symmetric design, users who are familiar with MRP won't need to learn another interaction model for far pointing and manipulation when they use HoloLens 2, and the other way around.

Symmetric design for rays with controllers
Controller rays

Symmetric design for rays with hands
Hand rays



Hand ray in MRTK (Mixed Reality Toolkit) for Unity

By default, MRTK provides a hand ray prefab (DefaultControllerPointer.prefab), which has the same visual state as the shell's system hand ray. It's assigned in MRTK's Input profile, under Pointers. In an immersive headset, the same rays are used for the motion controllers.
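
If you need to control the hand rays from code, for example to switch an experience to near-only interaction, MRTK exposes `PointerUtils` for this. A minimal sketch, assuming MRTK 2.x:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Minimal sketch: turns the hand rays off for both hands and restores the
// default (proximity-driven) behavior later.
public class HandRayToggle : MonoBehaviour
{
    public void DisableHandRays()
    {
        PointerUtils.SetHandRayPointerBehavior(PointerBehavior.AlwaysOff, Handedness.Any);
    }

    public void RestoreHandRays()
    {
        PointerUtils.SetHandRayPointerBehavior(PointerBehavior.Default, Handedness.Any);
    }
}
```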


See also