Head-gaze and eye-gaze input in DirectX

Note

This article relates to the legacy WinRT native APIs. For new native app projects, we recommend using the OpenXR API.

In Windows Mixed Reality, eye and head gaze input is used to determine what the user is looking at. You can use the data to drive primary input models like head-gaze and commit, and provide context for different interaction types. There are two types of gaze vectors available through the API: head-gaze and eye-gaze. Both are provided as a three-dimensional ray with an origin and direction. Applications can then raycast into their scenes, or the real world, and determine what the user is targeting.

Head-gaze represents the direction that the user's head is pointed in. Think of head-gaze as the position and forward direction of the device itself, with the position as the center point between the two displays. Head-gaze is available on all Mixed Reality devices.

Eye-gaze represents the direction that the user's eyes are looking towards. The origin is located between the user's eyes. It's available on Mixed Reality devices that include an eye tracking system.

Both head-gaze and eye-gaze rays are accessible through the SpatialPointerPose API. Call SpatialPointerPose::TryGetAtTimestamp to receive a new SpatialPointerPose object at the specified timestamp and coordinate system. This SpatialPointerPose contains a head-gaze origin and direction. It also contains an eye-gaze origin and direction if eye tracking is available.

Device support

Feature | HoloLens (1st gen) | HoloLens 2 | Immersive headsets
Head-gaze | ✔️ | ✔️ | ✔️
Eye-gaze |  | ✔️ |  

Using head-gaze

To access the head-gaze, start by calling SpatialPointerPose::TryGetAtTimestamp to receive a new SpatialPointerPose object. Pass the following parameters:

  • A SpatialCoordinateSystem that represents the coordinate system you want for the head-gaze. This is represented by the coordinateSystem variable in the following code. For more information, visit our coordinate systems developer guide.
  • A Timestamp that represents the exact time of the head pose requested. Typically, you'll use a timestamp that corresponds to the time when the current frame will be displayed. You can get this predicted display timestamp from a HolographicFramePrediction object, which is accessible through the current HolographicFrame. This HolographicFramePrediction object is represented by the prediction variable in the following code.
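If you're unsure where those two variables come from, the following sketch shows one common way to obtain them in a render loop. The m_holographicSpace and m_stationaryReferenceFrame members are assumptions for illustration, not part of the original sample.

using namespace winrt::Windows::Graphics::Holographic;
using namespace winrt::Windows::Perception::Spatial;

// Sketch only: obtain the prediction and coordinate system used below.
HolographicFrame holographicFrame = m_holographicSpace.CreateNextFrame();
HolographicFramePrediction prediction = holographicFrame.CurrentPrediction();
SpatialCoordinateSystem coordinateSystem = m_stationaryReferenceFrame.CoordinateSystem();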

Once you have a valid SpatialPointerPose, the head position and forward direction are accessible as properties. The following code shows how to access them.

using namespace winrt::Windows::UI::Input::Spatial;
using namespace winrt::Windows::Foundation::Numerics;

SpatialPointerPose pointerPose = SpatialPointerPose::TryGetAtTimestamp(coordinateSystem, prediction.Timestamp());
if (pointerPose)
{
   float3 headPosition = pointerPose.Head().Position();
   float3 headForwardDirection = pointerPose.Head().ForwardDirection();

   // Do something with the head-gaze
}
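As one possible use of those values (a sketch, not part of the original sample), the body of the if block above could keep a hologram positioned two meters in front of the user; m_hologramPosition is a hypothetical member that your rendering code would consume.

   // Sketch: keep a hologram two meters in front of the user along the head-gaze.
   // m_hologramPosition is a hypothetical float3 member used elsewhere for rendering.
   constexpr float distanceFromUser = 2.0f; // meters
   m_hologramPosition = headPosition + distanceFromUser * headForwardDirection;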

Using eye-gaze

For your users to use eye-gaze input, each user has to go through an eye tracking user calibration the first time they use the device. The eye-gaze API is similar to head-gaze. It uses the same SpatialPointerPose API, which provides a ray origin and direction that you can raycast against your scene. The only difference is that you need to explicitly enable eye tracking before using it:

  1. Request user permission to use eye tracking in your app.
  2. Enable the "Gaze Input" capability in your package manifest.

Requesting access to eye-gaze input

When your app is starting up, call EyesPose::RequestAccessAsync to request access to eye tracking. The system will prompt the user if needed, and return GazeInputAccessStatus::Allowed once access has been granted. This is an asynchronous call, so it requires a bit of extra management. The following example spins up a detached std::thread to wait for the result, which it stores in a member variable called m_isEyeTrackingEnabled.

using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::UI::Input;

std::thread requestAccessThread([this]()
{
    auto status = EyesPose::RequestAccessAsync().get();

    if (status == GazeInputAccessStatus::Allowed)
        m_isEyeTrackingEnabled = true;
    else
        m_isEyeTrackingEnabled = false;
});

requestAccessThread.detach();

Starting a detached thread is just one option for handling async calls. You could also use the new co_await functionality supported by C++/WinRT; a co_await sketch follows the example below. Here's another example for asking for user permission:

  • EyesPose::IsSupported() allows the application to trigger the permission dialog only if there's an eye tracker.
  • GazeInputAccessStatus m_gazeInputAccessStatus; // This is to prevent popping up the permission prompt over and over again.
GazeInputAccessStatus m_gazeInputAccessStatus; // This is to prevent popping up the permission prompt over and over again.

// This will trigger the permission prompt to be shown to the user.
// Ask for access if there's a corresponding device and the registry flag didn't disable it.
if (Windows::Perception::People::EyesPose::IsSupported() &&
    (m_gazeInputAccessStatus == GazeInputAccessStatus::Unspecified))
{
    Concurrency::create_task(Windows::Perception::People::EyesPose::RequestAccessAsync()).then(
        [this](GazeInputAccessStatus status)
        {
            // GazeInputAccessStatus::{Allowed, DeniedBySystem, DeniedByUser, Unspecified}
            m_gazeInputAccessStatus = status;

            // Let's be sure to not ask again.
            if (status == GazeInputAccessStatus::Unspecified)
            {
                m_gazeInputAccessStatus = GazeInputAccessStatus::DeniedBySystem;
            }
        });
}
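For completeness, here's what the co_await approach mentioned earlier might look like. This is only a sketch under assumptions: the coroutine is shown as a member function of a hypothetical MyAppMain class that owns the m_gazeInputAccessStatus and m_isEyeTrackingEnabled members.

using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::UI::Input;

// Sketch: request eye tracking access as a C++/WinRT coroutine.
// MyAppMain and the member variables are hypothetical names.
winrt::fire_and_forget MyAppMain::RequestEyeTrackingAccessAsync()
{
    if (!EyesPose::IsSupported())
    {
        co_return;
    }

    // co_await suspends this coroutine until the async operation completes.
    GazeInputAccessStatus status = co_await EyesPose::RequestAccessAsync();

    m_gazeInputAccessStatus = status;
    m_isEyeTrackingEnabled = (status == GazeInputAccessStatus::Allowed);
}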

Declaring the Gaze Input capability

Double-click the package.appxmanifest file in Solution Explorer. Then navigate to the Capabilities section and check the Gaze Input capability.

Screenshot: the Gaze Input capability in the package manifest designer

This adds the following lines to the Package section in the package.appxmanifest file:

  <Capabilities>
    <DeviceCapability Name="gazeInput" />
  </Capabilities>

Getting the eye-gaze ray

Once you have received access to eye tracking, you're free to grab the eye-gaze ray every frame. As with head-gaze, get the SpatialPointerPose by calling SpatialPointerPose::TryGetAtTimestamp with a desired timestamp and coordinate system. The SpatialPointerPose contains an EyesPose object through the Eyes property. This is non-null only if eye tracking is enabled. From there, you can check if the user in the device has an eye tracking calibration by calling EyesPose::IsCalibrationValid. Next, use the Gaze property to get the SpatialRay containing the eye-gaze position and direction. The Gaze property can sometimes be null, so be sure to check for this. This can happen if a calibrated user temporarily closes their eyes.

The following code shows how to access the eye-gaze ray.

using namespace winrt::Windows::UI::Input::Spatial;
using namespace winrt::Windows::Foundation::Numerics;

SpatialPointerPose pointerPose = SpatialPointerPose::TryGetAtTimestamp(coordinateSystem, prediction.Timestamp());
if (pointerPose)
{
    if (pointerPose.Eyes() && pointerPose.Eyes().IsCalibrationValid())
    {
        if (pointerPose.Eyes().Gaze())
        {
            auto spatialRay = pointerPose.Eyes().Gaze().Value();
            float3 eyeGazeOrigin = spatialRay.Origin;
            float3 eyeGazeDirection = spatialRay.Direction;
            
            // Do something with the eye-gaze
        }
    }
}
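To give a flavor of what "do something with the eye-gaze" might look like, here's a sketch of a simple ray-versus-bounding-sphere test you could use to check whether the user is looking at a hologram. The function name and the use of the dot, normalize, and length helpers on the Windows numerics types are assumptions for illustration, not part of the original sample.

// Sketch: returns true if the gaze ray passes within sphereRadius of sphereCenter.
bool GazeHitsSphere(float3 rayOrigin, float3 rayDirection, float3 sphereCenter, float sphereRadius)
{
    const float3 toCenter = sphereCenter - rayOrigin;
    const float3 dir = normalize(rayDirection);

    // Distance along the ray to the point closest to the sphere center.
    const float t = dot(toCenter, dir);
    if (t < 0.0f)
    {
        return false; // The sphere is behind the gaze origin.
    }

    // Compare the closest-approach distance against the sphere radius.
    const float3 closestPoint = rayOrigin + t * dir;
    return length(sphereCenter - closestPoint) <= sphereRadius;
}

For example, GazeHitsSphere(eyeGazeOrigin, eyeGazeDirection, hologramCenter, 0.1f) would tell you whether the gaze ray passes within 10 cm of a hypothetical hologramCenter point.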

Fallback when eye tracking isn't available

As mentioned in our eye tracking design docs, both designers and developers should be aware of instances where eye tracking data may not be available.

There are various reasons for data being unavailable:

  • The user isn't calibrated
  • The user has denied the app access to their eye tracking data
  • Temporary interferences, such as smudges on the HoloLens visor or hair occluding the user's eyes

While some of the APIs have already been mentioned in this document, the following is a quick-reference summary of how to detect that eye tracking is available:

  • Check that the device has an eye tracker: EyesPose::IsSupported()
  • Check that the user has granted your app access to eye tracking: EyesPose::RequestAccessAsync returns GazeInputAccessStatus::Allowed
  • Check that the user has a valid calibration: EyesPose::IsCalibrationValid
  • Check that the eye-gaze ray is currently available: the EyesPose::Gaze property isn't null
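As a sketch (the helper name and the m_isEyeTrackingEnabled member are hypothetical), those checks can be combined into a single per-frame test:

// Sketch: combine the availability checks above into one helper.
bool MyAppMain::CanUseEyeGaze(SpatialPointerPose const& pointerPose) const
{
    // 1. The device must have an eye tracker at all.
    if (!winrt::Windows::Perception::People::EyesPose::IsSupported()) return false;

    // 2. The user must have granted the app access (set earlier from RequestAccessAsync).
    if (!m_isEyeTrackingEnabled) return false;

    // 3. The pose must contain eye data and a valid calibration.
    auto eyes = pointerPose.Eyes();
    if (!eyes || !eyes.IsCalibrationValid()) return false;

    // 4. The gaze ray itself can still be null, for example while the user's eyes are closed.
    return eyes.Gaze() != nullptr;
}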

You may also want to check that your eye tracking data isn't stale by adding a timeout between received eye tracking data updates, and otherwise fall back to head-gaze, as sketched below.
Visit our fallback design considerations for more information.
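One possible shape for that per-frame fallback, combining the hypothetical CanUseEyeGaze helper above with a staleness timeout, is sketched here. The m_lastEyeGazeOrigin, m_lastEyeGazeDirection, m_lastEyeGazeUpdateTime members and kEyeGazeStaleTimeout constant are assumptions, not part of the original sample.

// Sketch: use eye-gaze when fresh, tolerate brief dropouts, otherwise fall back to head-gaze.
constexpr winrt::Windows::Foundation::TimeSpan kEyeGazeStaleTimeout{ std::chrono::milliseconds{ 500 } };

const auto now = prediction.Timestamp().TargetTime();
if (CanUseEyeGaze(pointerPose))
{
    auto eyeRay = pointerPose.Eyes().Gaze().Value();
    m_lastEyeGazeOrigin = eyeRay.Origin;
    m_lastEyeGazeDirection = eyeRay.Direction;
    m_lastEyeGazeUpdateTime = now;
}

const bool eyeGazeIsFresh = (now - m_lastEyeGazeUpdateTime) < kEyeGazeStaleTimeout;
const float3 gazeOrigin = eyeGazeIsFresh ? m_lastEyeGazeOrigin : pointerPose.Head().Position();
const float3 gazeDirection = eyeGazeIsFresh ? m_lastEyeGazeDirection : pointerPose.Head().ForwardDirection();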


Correlating gaze with other inputs

Sometimes you may find that you need a SpatialPointerPose that corresponds with an event in the past. For example, if the user does an Air Tap, your app might want to know what they were looking at. For this purpose, simply using SpatialPointerPose::TryGetAtTimestamp with the predicted frame time would be inaccurate because of the latency between system input processing and display time. Also, if using eye-gaze for targeting, our eyes tend to move on even before finishing a commit action. This is less of an issue for a simple Air Tap, but becomes more critical when combining long voice commands with fast eye movements. One way to handle this scenario is to make an additional call to SpatialPointerPose::TryGetAtTimestamp, using a historical timestamp that corresponds to the input event.

However, for input that routes through the SpatialInteractionManager, there's an easier method. The SpatialInteractionSourceState has its own TryGetPointerPose function. Calling that will provide a perfectly correlated SpatialPointerPose without the guesswork. For more information on working with SpatialInteractionSourceStates, take a look at the Hands and Motion Controllers in DirectX documentation.
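As a rough illustration (the handler name and the m_coordinateSystem member are assumptions), a SourcePressed handler could retrieve the correlated pose like this:

using namespace winrt::Windows::UI::Input::Spatial;

// Sketch: get a pose that's already correlated with the press event.
void MyAppMain::OnSourcePressed(
    SpatialInteractionManager const& /*sender*/,
    SpatialInteractionSourceEventArgs const& args)
{
    SpatialPointerPose pointerPose = args.State().TryGetPointerPose(m_coordinateSystem);
    if (pointerPose && pointerPose.Eyes() && pointerPose.Eyes().Gaze())
    {
        auto gazeRay = pointerPose.Eyes().Gaze().Value();
        // Hit-test gazeRay.Origin / gazeRay.Direction against the scene to find
        // what the user was looking at when the press happened.
    }
}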


Calibration

For eye tracking to work accurately, each user is required to go through an eye tracking user calibration. This allows the device to adjust the system for a more comfortable and higher quality viewing experience for the user and to ensure accurate eye tracking at the same time. Developers don't need to do anything on their end to manage user calibration. The system will ensure that the user gets prompted to calibrate the device under the following circumstances:

  • The user is using the device for the first time
  • The user previously opted out of the calibration process
  • The calibration process didn't succeed the last time the user used the device

Developers should make sure to provide adequate support for users where eye tracking data may not be available. Learn more about considerations for fallback solutions at Eye tracking on HoloLens 2.


See also