Interaction models

The Mixed Reality Toolkit enables you to consume inputs from various input sources such as 6DoF (six degrees of freedom) controllers, articulated hands, or speech. To determine the best interaction model for your app, think about your users' goals and consider any environmental factors that might impact their experience.
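For example, MRTK v2's event interfaces let a script respond to pointer input without caring which source produced it. The following C# sketch (assuming Unity with MRTK v2; the class name is illustrative) logs presses from any pointer, whether it originates from a hand ray, a motion controller, or gaze:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Illustrative: responds to pointer events regardless of which input
// source generated them (hand ray, motion controller, or gaze pointer).
// The GameObject needs a collider so pointers can target it.
public class SourceAgnosticTarget : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        // eventData.InputSource identifies the originating device.
        Debug.Log($"Pressed by {eventData.InputSource.SourceName}");
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        Debug.Log("Clicked");
    }
}
```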

There are three primary interaction models that suit the majority of mixed reality experiences. Although you could combine parts of various interaction models in your app, think carefully before doing so. Combining models creates the risk of competing user input, such as simultaneous hand rays and a head-gaze cursor, which can overwhelm and confuse users.

Hands and motion controllers model

The hands and motion controllers model has users interact with the holographic world using one or both hands. This model removes the boundary between the virtual and the physical.

Some specific scenarios include:

  • Providing information workers with 2D virtual screens that have UI affordances for displaying and controlling content
  • Providing Firstline Workers with tutorials and guides for factory assembly lines
  • Developing professional tools for assisting and educating medical professionals
  • Using 3D virtual objects to decorate the real world or to create another world
  • Creating location-based services and games using the natural world as a background

The hands and motion controllers model has three modalities:

  • Direct manipulation with hands (sketched below)
  • Point and commit with hands
  • Motion controllers
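As a concrete example of the first modality, the following sketch (assuming Unity with MRTK v2.4 or later, where the ObjectManipulator component is available; the class name is illustrative) makes a hologram directly grabbable and movable with one or both hands:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Illustrative: makes a hologram grabbable and movable with either hand.
public class MakeGrabbable : MonoBehaviour
{
    private void Awake()
    {
        // ObjectManipulator handles one- and two-handed move/rotate/scale.
        gameObject.AddComponent<ObjectManipulator>();

        // NearInteractionGrabbable enables direct (near) grab by
        // articulated hands.
        gameObject.AddComponent<NearInteractionGrabbable>();

        // A collider is required for hands and pointers to target the object.
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
        }
    }
}
```

The same setup also serves the point-and-commit modality: ObjectManipulator responds to both near (grab) and far (hand ray or motion controller) pointers.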

Hands-free model

As the name implies, the hands-free model enables users to interact with holographic content without using their hands. Instead, they can use voice input or "gaze and dwell."
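As a minimal sketch of the voice path (assuming Unity with MRTK v2, and a hypothetical "Next Step" keyword registered in the Speech Commands profile), the following script advances a guide entirely hands-free:

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Illustrative: advances a guide when the user says a registered keyword,
// keeping the interaction fully hands-free.
public class VoiceNextStep : MonoBehaviour, IMixedRealitySpeechHandler
{
    private void OnEnable()
    {
        // Register globally so no gaze or pointer focus is required.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        // "Next Step" is a hypothetical keyword; it must be added to the
        // MRTK Speech Commands profile for this event to fire.
        if (eventData.Command.Keyword.Equals(
                "Next Step", System.StringComparison.OrdinalIgnoreCase))
        {
            Debug.Log("Advancing to the next step");
        }
    }
}
```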

Some specific scenarios include:

  • Being guided through a task while the user's hands are busy
  • Referencing materials while the user's hands are busy
  • Experiencing hand fatigue
  • Wearing gloves that can't be tracked
  • Carrying something in their hands
  • Avoiding large hand gestures in socially sensitive situations
  • Working in tight spaces

Gaze and commit

It's best to use gaze and commit when interacting with holographic content that's out of reach. The user gazes at an object or UI element, and then selects it ("commits" to it) using a secondary input. Commit methods include voice commands, a button press, or a hand gesture.
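A minimal MRTK v2 sketch of this pattern might highlight an object while it has gaze focus and react when the user commits (the class name and colors are illustrative; the GameObject needs a renderer and a collider):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Illustrative: highlights a hologram while it has gaze focus and reacts
// to the commit action (air tap, "Select" voice command, or button press).
public class GazeCommitTarget : MonoBehaviour,
    IMixedRealityFocusHandler, IMixedRealityPointerHandler
{
    private Material material;

    private void Awake() => material = GetComponent<Renderer>().material;

    public void OnFocusEnter(FocusEventData eventData) =>
        material.color = Color.yellow;   // visual cue: the user is gazing at it

    public void OnFocusExit(FocusEventData eventData) =>
        material.color = Color.white;    // gaze moved away

    public void OnPointerClicked(MixedRealityPointerEventData eventData) =>
        Debug.Log("Committed");          // the user confirmed the selection

    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```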

There are two types of gaze input (head-gaze and eye-gaze), and each has its own commit actions.
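In MRTK v2, the gaze provider reflects this distinction: it uses eye-gaze when eye tracking is available and calibrated, and falls back to head-gaze otherwise. A minimal probe, assuming that behavior (the class name is illustrative, and the per-frame logging is only for demonstration), might look like this:

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;

// Illustrative: logs what the user is gazing at each frame. MRTK's gaze
// provider uses eye-gaze when eye tracking is available and valid, and
// falls back to head-gaze otherwise.
public class GazeProbe : MonoBehaviour
{
    private void Update()
    {
        var gaze = CoreServices.InputSystem?.GazeProvider;
        if (gaze?.GazeTarget != null)
        {
            bool usingEyes =
                CoreServices.InputSystem.EyeGazeProvider?.IsEyeTrackingEnabledAndValid == true;
            Debug.Log($"{(usingEyes ? "Eye" : "Head")}-gaze target: {gaze.GazeTarget.name}");
        }
    }
}
```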