Introducing instinctual interactions

Far manipulation with hands

The philosophy of simple, instinctual interactions is interwoven throughout the mixed reality (MR) platform. We've taken three steps to ensure that application designers and developers can provide their customers with easy and intuitive interactions.

First, we've made sure our sensors and input technologies combine into multimodal interaction models. These interaction models include hand and eye tracking along with natural language input. Based on our research, designing and developing within a multimodal framework (and not based on individual inputs) is the key to creating instinctual experiences.

Second, we recognize that many developers target multiple HoloLens devices, such as HoloLens 2 and HoloLens (1st gen), or HoloLens and VR. So we've designed our interaction models to work across devices, even if the input technology varies on each device. For example, far interaction on a Windows Mixed Reality immersive headset with a 6DoF controller and far interaction on HoloLens 2 use identical affordances and patterns. This makes cross-device application development easy and gives user interactions a natural feel.
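One way to picture this cross-device design is an input-abstraction layer: the interaction logic consumes a device-agnostic pointer, while device-specific sources (hand tracking, 6DoF controllers) feed it. The sketch below is illustrative only; all class and field names are hypothetical, not platform APIs.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PointerState:
    """Device-agnostic pointer sample: a ray plus a commit signal."""
    origin: tuple
    direction: tuple
    is_committing: bool  # pinch gesture on HoloLens 2, trigger on a 6DoF controller

class InputSource(ABC):
    """Common interface that both hand tracking and motion controllers implement."""
    @abstractmethod
    def poll(self) -> PointerState: ...

class HandRaySource(InputSource):
    def __init__(self, tracker):
        self.tracker = tracker  # hypothetical hand-tracking provider
    def poll(self) -> PointerState:
        hand = self.tracker.latest()
        return PointerState(hand["origin"], hand["direction"], hand["is_pinching"])

class MotionControllerSource(InputSource):
    def __init__(self, controller):
        self.controller = controller  # hypothetical 6DoF controller wrapper
    def poll(self) -> PointerState:
        pose = self.controller.pose()
        return PointerState(pose["origin"], pose["direction"],
                            self.controller.trigger_pressed())

def update_far_interaction(source: InputSource) -> str:
    """The interaction layer sees only PointerState, so the same affordances
    (rays, cursors, commit feedback) work regardless of the device."""
    state = source.poll()
    return "commit" if state.is_committing else "hover"
```

Because `update_far_interaction` never inspects the concrete device, the same far-interaction visuals and patterns apply whether input comes from articulated hands or a motion controller.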

While we recognize that there are thousands of effective, engaging, and magical interactions possible in MR, we've found that intentionally employing a single interaction model in an application is the best way to ensure users are successful and have a great experience. To that end, we've included three things in this interaction guidance:

  • Specific guidance around the three primary interaction models and the components and patterns required for each.
  • Supplemental guidance about other benefits that our platform provides.
  • General guidance to help you select the appropriate interaction model for your development scenario.

Multimodal interaction models

Based on our research and feedback from customers, we've discovered that three primary interaction models suit most mixed reality experiences. In many ways, the interaction model is the user's mental model for how to complete a workflow. Each of these interaction models is optimized for a set of customer needs and is convenient, powerful, and usable when used correctly.

The chart below is a simplified overview. Detailed information about using each interaction model, along with images and code samples, is linked further down this page.


| Model | Example scenarios | Fit | Hardware |
| --- | --- | --- | --- |
| Hands and motion controllers | 3D spatial experiences, such as spatial layout and design, content manipulation, or simulation. | Great for new users coupled with voice, eye tracking, or head gaze. Low learning curve. Consistent UX across hand tracking and 6DoF controllers. | HoloLens 2, Immersive headsets |
| Hands-free | Contextual experiences where a user's hands are occupied, such as on-the-job learning and maintenance. | Some learning required. If hands are unavailable, the device pairs well with voice and natural language. | HoloLens 2, HoloLens (1st gen), Immersive headsets |
| Gaze and commit | Click-through experiences, for example, 3D presentations and demos. | Requires training on HMDs but not on mobile. Best for accessible controllers. Best for HoloLens (1st gen). | HoloLens 2, HoloLens (1st gen), Immersive headsets, Mobile AR |

To avoid gaps in the user interaction experience, it's best to follow the guidance for a single model from beginning to end.

The sections below walk through the steps for selecting and implementing one of these interaction models.

By the end of this page, you'll understand our guidance on:

  • Choosing an interaction model for your customer
  • Implementing the interaction model
  • Transitioning between interaction models
  • Design next steps

Choose an interaction model for your customer

Typically, developers and creators have already thought through the types of interactions their customers can have. To encourage a customer-focused approach to design, we recommend the following guidance for selecting the interaction model that's optimized for your customer.

Why follow this guidance?

  • We test our interaction models against objective and subjective criteria, including physical and cognitive effort, intuitiveness, and learnability.
  • Because interactions differ, visual and audio affordances and object behavior might differ between interaction models.
  • Combining parts of multiple interaction models creates the risk of competing affordances, such as simultaneous hand rays and a head-gaze cursor, which can overwhelm and confuse users.

Here are some examples of how affordances and behaviors are optimized for each interaction model. We often see new users ask similar questions, such as "How do I know the system is working?", "How do I know what I can do?", and "How do I know if it understood what I just did?"


| Model | How do I know it's working? | How do I know what I can do? | How do I know what I just did? |
| --- | --- | --- | --- |
| Hands and motion controllers | I see a hand mesh, a fingertip affordance, or hand/controller rays. | I see grabbable handles, or a bounding box appears when my hand is near an object. | I hear audible tones and see animations on grab and release. |
| Head-gaze and commit | I see a cursor in the center of my field of view. | The cursor changes state when it's over certain objects. | I see/hear visual and audible confirmations when I take action. |
| Hands-free (Head-gaze and dwell) | I see a cursor in the center of my field of view. | I see a progress indicator when I dwell on an interactable object. | I see/hear visual and audible confirmations when I take action. |
| Hands-free (Voice commanding) | I see a listening indicator and captions that show what the system heard. | I get voice prompts and hints. When I say "What can I say?", I see feedback. | I see/hear visual and audible confirmations when I give a command, or get disambiguation UX when needed. |
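The dwell-on-an-interactable progress indicator from the hands-free row can be modeled as a small timer: gazing at the same target fills the indicator, and looking away resets it. This is a minimal sketch; the class name, duration, and return shape are illustrative assumptions, not platform values.

```python
class DwellTimer:
    """Minimal dwell-to-commit timer for a head-gaze-and-dwell interaction.

    Gazing at the same target accumulates time toward a commit; changing
    targets (or looking at nothing) resets the progress indicator.
    """
    def __init__(self, dwell_seconds: float = 1.0):
        self.dwell_seconds = dwell_seconds  # illustrative default, not a platform constant
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_target, dt: float):
        """Advance by dt seconds. Returns (progress in 0..1, committed flag)."""
        if gazed_target != self.target:
            # New target (or no target): restart the progress indicator.
            self.target = gazed_target
            self.elapsed = 0.0
        if self.target is None:
            return 0.0, False
        self.elapsed = min(self.elapsed + dt, self.dwell_seconds)
        progress = self.elapsed / self.dwell_seconds
        return progress, progress >= 1.0
```

Driving the indicator visuals from `progress` gives the user the continuous "how do I know what I can do" feedback the table describes, while the committed flag triggers the confirmation feedback.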

Below are questions that we've found help teams select an interaction model:

  1. Q: Do my users want to touch holograms and perform precision holographic manipulations?

     A: If so, check out the Hands and motion controllers interaction model for precision targeting and manipulation.

  2. Q: Do my users need to keep their hands free for real-world tasks?

     A: If so, take a look at the Hands-free interaction model, which provides a great hands-free experience through gaze- and voice-based interactions.

  3. Q: Do my users have time to learn interactions for my MR application, or do they need the interactions with the lowest learning curve possible?

     A: For the lowest learning curve and the most intuitive interactions, we recommend the Hands and motion controllers model, as long as users can use their hands for interaction.

  4. Q: Do my users use motion controllers for pointing and manipulation?

     A: The Hands and motion controllers model includes all the guidance for a great experience with motion controllers.

  5. Q: Do my users use an accessibility controller or a common Bluetooth controller, such as a clicker?

     A: We recommend the Head-gaze and commit model for all non-tracked controllers. It's designed to let a user traverse an entire experience with a simple "target and commit" mechanism.

  6. Q: Do my users only progress through an experience by "clicking through" (for example, in a 3D slideshow-like environment), as opposed to navigating dense layouts of UI controls?

     A: If users don't need to control a lot of UI, Head-gaze and commit offers a learnable option where users don't have to worry about targeting.

  7. Q: Do my users use both HoloLens (1st gen) and HoloLens 2/Windows Mixed Reality immersive headsets (VR)?

     A: Since Head-gaze and commit is the interaction model for HoloLens (1st gen), we recommend that creators who support HoloLens (1st gen) use Head-gaze and commit for any features or modes that users will experience on a HoloLens (1st gen) headset. See the next section, Transitioning interaction models, for details on making a great experience for multiple HoloLens generations.

  8. Q: What about users who are mobile, covering a large space or moving between spaces, versus users who tend to work in a single space?

     A: Any of the interaction models will work for these users.
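The questions above can be distilled into a simple decision helper for design reviews. The function below is a hypothetical sketch of that mapping; the flag names and priority order are our illustrative reading of the guidance, not a platform API.

```python
def recommend_interaction_model(*, hands_occupied: bool,
                                uses_non_tracked_controller: bool,
                                targets_hololens_1st_gen: bool) -> str:
    """Map answers to the selection questions onto a starting-point model.

    Priority order (an assumption for illustration): hands-free needs come
    first, then non-tracked controllers and HoloLens (1st gen) support,
    otherwise the lowest-learning-curve default applies.
    """
    if hands_occupied:  # Q2: hands busy with real-world tasks
        return "Hands-free"
    if uses_non_tracked_controller or targets_hololens_1st_gen:  # Q5 and Q7
        return "Head-gaze and commit"
    # Q1, Q3, Q4: precision manipulation and the lowest learning curve
    return "Hands and motion controllers"
```

A team would still follow the full guidance for the chosen model end to end; this only captures the first-pass triage.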

Note

More guidance specific to app design is coming soon.

Transitioning interaction models

There are also use cases that might require using more than one interaction model. For example, your application's creation flow uses the Hands and motion controllers interaction model, but you want to employ a hands-free mode for field technicians. If your experience does require multiple interaction models, users might have difficulty transitioning from one model to another, especially when they're new to mixed reality.

Note

We're constantly working on more guidance for developers and designers about how, when, and why to use multiple MR interaction models.

See also