question

HimeAlt-5054 asked:

[Speed Problem] How to get the color 2D coordinates (x, y) of each pixel of the depth camera

Hi, I'm developing a 3D visualization app for the Azure Kinect DK in C/C++.

With Kinect v2, I could get the color 2D coordinates (x, y) of each pixel of the depth camera at high speed using ICoordinateMapper->MapDepthFrameToColorSpace().
I used this to draw a body mesh with a color-camera-resolution image in real time.
(See 22 s to 35 s: https://youtu.be/NERfvP4JwB0?t=22)

How can I get the same thing faster with the Azure Kinect DK?

My current code is below; this part alone takes 50-60 ms.


unsigned short *depthBuffFromK4a = (unsigned short *)k4a_image_get_buffer(image);
k4a_float2_t pixPosDin;
k4a_float2_t pixPosCout;
k4a_result_t apiResult;
int valid;
unsigned int buffPos = 0;
for (int y = 0; y < _bufferHeightD; ++y) {
    for (int x = 0; x < _bufferWidthD; ++x) {
        pixPosDin.xy.x = (float)x;
        pixPosDin.xy.y = (float)y;
        // Map one depth-camera pixel to color-camera coordinates (2d_to_2d function)
        apiResult = k4a_calibration_2d_to_2d(
            &_calibration,
            &pixPosDin,
            static_cast<float>(depthBuffFromK4a[buffPos]),
            K4A_CALIBRATION_TYPE_DEPTH,
            K4A_CALIBRATION_TYPE_COLOR,
            &pixPosCout,
            &valid
        );
        if (apiResult == K4A_RESULT_SUCCEEDED && valid == 1) {
            //----use the valid color coordinates here----//
        }
        else {
            //----handle failed or invalid mapping----//
        }
        ++buffPos;
    }
}



I also tried multithreading, but 25 ms was the limit.

Can you tell me how to get the data at high speed?


Thank you for reading.

azure-kinect-dk
@HimeAlt-5054 Community SMEs on this topic or our team will review your scenario and circle back as soon as possible.


1 Answer

SatishBoddu-MSFT answered:

Hello @HimeAlt-5054, below is the response from the Microsoft Product Team. I hope this helps with your initial query!


Please read Azure Kinect Sensor SDK image transformations | Microsoft Docs. The goal of the transformation functions is fast, GPU-accelerated RGB-D mapping and 2D depth image to 3D point cloud conversion. Also look at the Azure Kinect Viewer source code, which includes visualizing a 3D color point cloud.
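To make the recommendation concrete: instead of calling k4a_calibration_2d_to_2d once per pixel, the SDK's transformation API can warp an entire depth frame into the color camera's geometry in a single call. Below is a minimal sketch, not tested against hardware; the function name map_depth_to_color and the variable names are illustrative, and the calibration and depth_image handles are assumed to come from the usual k4a_device_get_calibration() / capture path.

```c
#include <k4a/k4a.h>
#include <stdint.h>

// Warp a whole depth frame into the color camera's geometry in one call.
// On success, *out is a DEPTH16 image at color-camera resolution: pixel
// (x, y) of *out is the depth value that maps to color pixel (x, y).
static k4a_result_t map_depth_to_color(const k4a_calibration_t *calibration,
                                       k4a_image_t depth_image,
                                       k4a_image_t *out)
{
    int w = calibration->color_camera_calibration.resolution_width;
    int h = calibration->color_camera_calibration.resolution_height;

    k4a_transformation_t transformation = k4a_transformation_create(calibration);
    if (transformation == NULL)
        return K4A_RESULT_FAILED;

    k4a_result_t result = k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16,
                                           w, h,
                                           w * (int)sizeof(uint16_t),
                                           out);
    if (result == K4A_RESULT_SUCCEEDED)
        result = k4a_transformation_depth_image_to_color_camera(transformation,
                                                                depth_image,
                                                                *out);
    k4a_transformation_destroy(transformation);
    return result;
}
```

In a real capture loop the transformation handle would be created once and reused for every frame (it is shown inline here only to keep the sketch self-contained), since creating it is comparatively expensive.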

Regarding meshes: there is no mesh API in the AKDK SDK. If you mean a single-view mesh, you can compute one with an off-the-shelf algorithm (estimating surface normals and faces from the point cloud). If you mean Kinect Fusion-style mesh reconstruction with a moving camera, the AKDK SDK does not include a Kinect Fusion API (however, there is a KinFu example in the samples repo using OpenCV).
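The point cloud that such mesh algorithms would consume can itself be produced by the SDK. This is a hedged sketch, not tested against hardware; the function name depth_to_point_cloud is illustrative, and the transformation handle is assumed to have been created earlier with k4a_transformation_create():

```c
#include <k4a/k4a.h>
#include <stdint.h>

// Convert a depth image into a 3D point cloud in the depth camera's frame.
// The output image stores one int16_t (X, Y, Z) triplet in millimeters per
// pixel, hence the stride of 3 * sizeof(int16_t) bytes per pixel.
static k4a_result_t depth_to_point_cloud(k4a_transformation_t transformation,
                                         k4a_image_t depth_image,
                                         k4a_image_t *xyz_image)
{
    int w = k4a_image_get_width_pixels(depth_image);
    int h = k4a_image_get_height_pixels(depth_image);

    k4a_result_t result = k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM,
                                           w, h,
                                           w * 3 * (int)sizeof(int16_t),
                                           xyz_image);
    if (result == K4A_RESULT_SUCCEEDED)
        result = k4a_transformation_depth_image_to_point_cloud(transformation,
                                                               depth_image,
                                                               K4A_CALIBRATION_TYPE_DEPTH,
                                                               *xyz_image);
    return result;
}
```

From the resulting XYZ image, a single-view mesh could then be estimated with an off-the-shelf surface reconstruction algorithm, as the answer suggests.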


Please comment below if you have any comments, suggestions, or feedback.
If the response is helpful, please click "Accept Answer" and upvote it.



Thank you for answering the question, @SatishBoddu-MSFT and the Microsoft Product Team. This answer made it clear that I need to implement this myself using the calibration data and GPU programming.
I accept this answer. Thank you very much.
