JAMESMORGENSTERN-0766 asked QuentinMiller-3866 answered

About Image Rectification

I am using the Kinect DK for a robotic bin-picking task. As such, I have mounted the Kinect so that it is looking straight down at a flat table top. I have mechanically aligned the Kinect and verified that its XY plane is parallel to the tabletop, to within a degree or so. I imaged the table top with the depth sensor (NFOV; image attached):

[Image: 55429-backnfov.jpg]

Taking an excerpt from the middle of the depth image, I get the region shown below:

[Image: 55493-backroi.jpg]

Because the Kinect and the table top are parallel, and because the depth image reports the distance from the Kinect XY plane to the table top along the Kinect Z axis, the image should be uniform. But it is not: there is a decided change in depth correlated with changes in Y. I have extracted a profile of depth values roughly parallel to the Y axis, plotted below:

[Image: 55449-backprofile300.jpg]

A crude calculation shows that the slope of the profile is roughly 11.5 to 12 degrees, which is not even close to flat, as should be expected. BUT the Kinect documentation does point out that the range sensor is rotated by 6 degrees about the X axis. So it seems to me that the processing that converts the range-sensor data into a rectified depth image should include a rotation about the X axis to bring the depth image into conformance with the Kinect coordinate system; and it looks to me like, instead of rotating 6 degrees in the proper direction, the Microsoft processing is rotating 6 degrees in the wrong direction, creating the total rotation of 12 degrees that I have measured.
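The "crude calculation" above can be reproduced numerically. A minimal sketch on synthetic data, assuming (hypothetically) that the profile has already been converted to metric (Y, Z) pairs in millimeters: fit a line to the samples and take the arctangent of the slope.

```python
import numpy as np

# Sketch of the "crude calculation": given metric (Y, Z) samples taken
# along a column of the depth image (Y = lateral position, Z = reported
# depth, both in mm), fit a line and report the tilt angle.
# The data here is synthetic, constructed with a known 12-degree slope.
y = np.linspace(-100.0, 100.0, 50)            # lateral position, mm
z = 800.0 + np.tan(np.radians(12.0)) * y      # depth, mm: a 12-degree ramp

slope, intercept = np.polyfit(y, z, 1)        # least-squares line fit
tilt_deg = np.degrees(np.arctan(slope))
print(round(tilt_deg, 1))                     # → 12.0
```

On a real profile, the same fit on measured (Y, Z) values would give the apparent tilt that is at issue in this question.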

My question, then, is this: is there in fact an error of rotation in the processing of the range data into the depth image?




azure-kinect-dk
backroi.jpg (55.2 KiB)
backnfov.jpg (50.2 KiB)
backprofile300.jpg (21.0 KiB)

I have the latest version SDK ... this response is unhelpful

asergaz replied to JAMESMORGENSTERN-0766:

@JAMESMORGENSTERN-0766 thank you for letting us know. I have asked the Product Group team to address your current question:

"
it looks to me like instead of rotating 6 degrees properly the Microsoft processing is rotating 6 degrees in the wrong direction thus creating a total rotation of 12 degrees as I have measured.
My question then is this: Is there in fact an error in rotation in the processing of the range data into the depth image ?
"

Thank you for your time so far.


@JAMESMORGENSTERN-0766 thanks for the great question; we are looking into it. If you asked the same question elsewhere, please share it here as well, along with the docs you have looked at so far, so I can get the right context and we don't point you to places you have already checked ;).

Thanks!


Well, thanks, I guess. I don't know what to do with this link... I don't do Git, and I don't do makefiles or Visual Studio. Can you not supply a link to the .exe itself?


Hello @JAMESMORGENSTERN-0766,
Please share with us if you have any other questions related to your original post. Otherwise, could you go ahead and mark the response below as the answer, so community members can benefit from richer content? :)

Thank you so much.

Remember:
- Please accept an answer if correct. Original posters help the community find answers faster by identifying the correct answer. Here is how.

asergaz answered JAMESMORGENSTERN-0766 commented

@JAMESMORGENSTERN-0766 ,
Here is the response I received from the Product Team:

1. The coordinate system of the Azure Kinect depth and color cameras described in the docs is accurate: "The depth camera is tilted 6 degrees downwards of the color camera."
2. Based on that, and according to your description of your setup, the Z-depth measurements near the table front should be smaller than those for the rest of the table. This is illustrated in the image you shared (55429-backnfov.jpg), where the table front looks darker than the rest of the table.
3. Generally, I would not trust the accuracy of a manual alignment of the camera with respect to the table. The right way to do this is to estimate the table pose in the camera coordinate system, using the pose estimation functions in OpenCV.
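The check suggested in point 3 can be sketched without a full OpenCV pose estimation: fit a plane to 3D table points expressed in camera coordinates and measure its tilt against the camera Z axis. This is a minimal illustration on synthetic data; the function name and the 6-degree test plane are made up for the example.

```python
import numpy as np

def plane_tilt_deg(points):
    """Angle (deg) between the best-fit plane's normal and the camera Z axis."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value (the direction of least variance in the point cloud).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_a = abs(float(normal @ np.array([0.0, 0.0, 1.0])))
    return np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))

# Synthetic "table": a plane tilted 6 degrees about the X axis, ~800 mm away.
x, y = np.meshgrid(np.linspace(-100, 100, 20), np.linspace(-100, 100, 20))
z = 800.0 + np.tan(np.radians(6.0)) * y
pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
print(round(plane_tilt_deg(pts), 1))   # → 6.0
```

With real data, the points would come from the depth camera's point cloud rather than a synthetic grid, and the recovered angle would tell you the actual camera-to-table tilt independent of the mechanical alignment.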

Thank you so much for your time, and let me know if you have further questions.








Re the downward angle of the depth camera: my reading of the documentation is that, while the depth camera is physically pointed down, the SDK nevertheless presents the depth camera in the same plane as the RGB camera. Can you please give a definitive answer as to whether or not the depth image plane is parallel to the RGB image plane? That is, in the conversion of range data into a depth image, is the depth image rotated so as to be parallel to the RGB image?

QuentinMiller-3866 answered

@JAMESMORGENSTERN-0766 assuming you use the k4a transformation functions, you can consider the two cameras to be on the same plane.
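A geometry sketch (plain NumPy, not SDK code) of what this answer implies: the calibration extrinsics encode the roughly 6-degree tilt, so transforming depth points into the color-camera frame removes the apparent slope. The 800 mm distance and exact 6-degree angle are illustrative values only.

```python
import numpy as np

# Rotation about X by the (approximate) 6-degree depth-camera tilt.
tilt = np.radians(6.0)
c, s = np.cos(tilt), np.sin(tilt)
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0,   c,  -s],
               [0.0,   s,   c]])

# A flat table 800 mm from the (untilted) color camera.
y = np.linspace(-100.0, 100.0, 5)
table_color = np.column_stack([np.zeros_like(y), y, np.full_like(y, 800.0)])

# Seen from the tilted depth camera, Z is no longer constant: for row
# vectors, p @ Rx applies the inverse rotation Rx.T.
table_depth = table_color @ Rx
print(round(np.ptp(table_depth[:, 2]), 1))   # → 20.9  (mm of Z spread)

# Applying the extrinsic rotation back (as the transformation functions
# effectively do, using the factory calibration) makes the table flat again.
rectified = table_depth @ Rx.T
print(round(np.ptp(rectified[:, 2]), 1))     # → 0.0
```

So a slope of roughly twice the nominal 6 degrees in the raw data is consistent with the extrinsics not yet having been applied, rather than with a rotation applied in the wrong direction.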
