Mapping of Kinect Camera depth pixel array in WFOV view mode

David Robb 41 Reputation points
2020-07-07T19:52:22.217+00:00

We have an application in which we operate the Azure Kinect cameras in WFOV depth mode. This mode has a field of view that projects as an asymmetrical cone, as shown in

https://learn.microsoft.com/en-gb/azure/Kinect-dk/hardware-specification

This got us thinking about how this WFOV circular pattern maps onto the 512x512 square array of depth points. Looking at the generated calibration table in the example code, it appears that this circle fits inside the square, with the non-overlapping pixels discarded as invalid. It also appears that the distribution is linear in X and Y.
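As a rough check of that geometry: if the circular field of view is inscribed in the 512x512 array with a linear X/Y distribution, the fraction of valid pixels should approach pi/4 (about 78.5%). A minimal sketch, assuming the circle is centered and touches all four edges:

```python
import numpy as np

# 512x512 depth array with an inscribed circular field of view.
W = H = 512
ys, xs = np.mgrid[0:H, 0:W]
cx = cy = (W - 1) / 2.0   # center of the pixel grid
r = W / 2.0               # circle radius: half the array width

# Pixels inside the circle are valid; the corners fall outside.
valid = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
frac = valid.sum() / (W * H)

print(f"valid pixels: {valid.sum()} of {W * H} ({frac:.3f})")
print(f"pi/4        = {np.pi / 4:.3f}")
```

The remaining ~21.5% of the array (the corners of the square) would carry no depth data in this model, which matches the invalid entries seen in the calibration table.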

However, the cone generated by the WFOV mode camera extends further down than up. So even though the 512x512 pixels are linearly distributed across the circle, does this make the pixels in the upper half intrinsically more accurate and less noisy than those in the lower half? The 'light' returning to the upper half has travelled a shorter distance and at a less steep angle.
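The path-length asymmetry can be sketched numerically. Assuming the WFOV cone spans roughly +/-60 degrees in elevation and the depth module is tilted about 6 degrees downwards (the exact tilt is an assumption here; check the hardware specification), a ray hitting a flat vertical wall at distance d travels d / cos(theta):

```python
import math

# Assumed geometry: +/-60 deg elevation half-angle, ~6 deg downward tilt.
tilt_deg = 6.0
half_fov_deg = 60.0

top_deg = half_fov_deg - tilt_deg      # topmost ray: ~54 deg above horizontal
bottom_deg = half_fov_deg + tilt_deg   # bottommost ray: ~66 deg below horizontal

# Path-length multiplier relative to a straight-ahead ray hitting a flat wall.
top_factor = 1.0 / math.cos(math.radians(top_deg))
bottom_factor = 1.0 / math.cos(math.radians(bottom_deg))

print(f"top edge path factor:    {top_factor:.2f}x")
print(f"bottom edge path factor: {bottom_factor:.2f}x")
```

Under these assumptions the bottom-edge rays travel roughly 45% further than the top-edge rays to the same wall, so the returned signal there is weaker and noisier, consistent with the concern above.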

We have an application in which we are more interested in the lower half of the picture, so we may consider mounting the camera upside down.

Azure Kinect DK
Accepted answer
  1. AshokPeddakotla-MSFT 28,316 Reputation points
    2020-07-08T10:52:41.07+00:00

    There are some additional Fresnel losses caused by the front window due to the higher angles of incidence at the bottom of the WFOV. We do not have a precise quantification of the degree of loss. The temporal standard deviation should be lowest in the center of the FOV, even though it is slightly tilted downwards. We definitely expect some top/bottom asymmetry due to the Fresnel losses, and some left/right effects if the target is very close, where there are issues due to the illumination-imaging baseline.

    The lens itself is tilted downwards slightly because the entire module is tilted; the lens is centered on the depth imaging array in the X-Y dimension. There are some additional radial effects due to the lens RI and the illumination diffuser, which is designed to partly compensate for the lens RI. Certainly for WFOV we expect a significant performance drop-off towards the very edges.
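The Fresnel losses the answer describes can be illustrated with the standard Fresnel equations for a single air-to-window interface. The refractive index n = 1.5 below is an assumed value for illustration, not a published spec for the Kinect's front window:

```python
import math

def fresnel_transmittance(theta_i_deg, n1=1.0, n2=1.5):
    """Unpolarized power transmittance at a single planar interface."""
    ti = math.radians(theta_i_deg)
    # Snell's law gives the refracted angle.
    tt = math.asin(n1 * math.sin(ti) / n2)
    # Amplitude reflection coefficients for s and p polarization.
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    # Transmittance = 1 - average reflected power.
    return 1.0 - 0.5 * (rs ** 2 + rp ** 2)

for angle in (0, 30, 54, 66):
    print(f"{angle:2d} deg incidence -> transmittance {fresnel_transmittance(angle):.3f}")
```

Transmittance is nearly flat for moderate angles and then falls off steeply, so the bottom-edge rays of the WFOV cone (hitting the window at the steepest angles) lose noticeably more light per pass than the top-edge rays, and the loss applies twice: on illumination out and on return.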

    1 person found this answer helpful.

0 additional answers
