Kinect Hand Tracking

I've implemented hand gesture tracking using a Kinect v2 and retro-reflective materials.
Specifically, the program tracks a retro-reflective marker to get the position of the hand, and analyzes the hand's shape to estimate its bending degree.

Thanks to NtKinect: Kinect V2 C++ Programming with OpenCV on Windows10 [3], I could easily use the Kinect with OpenCV in C++.

I didn't use the body tracking of the Kinect for Windows SDK 2.0, so if you are interested in a hand tracking method that uses only the depth, infrared, and RGB sensors, try my code.

Code explanations:
cvtColor(kinect.rgbImage, ycc_frame, CV_BGR2YCrCb);
inRange(ycc_frame, Scalar(0, 133, 77), Scalar(255, 173, 127), hand_bwImg);
Paper [1] notes that the YCrCb color space is more convenient for detecting skin color, so I used the cvtColor function to change the color space. In this color space, skin color can be detected with the following range:
$$133 < Cr < 173 \\ 77 < Cb < 127$$
Using the inRange function, we then get a black/white image (skin color: white, background: black).

The getBendDeg function calculates the bending degree of the hand. I referred to the Detect function from [2].
//Here, hand should be white in hand_bwImg.
double getBendDeg(Mat hand_bwImg, Mat rgbImg)
This function calculates the bending degree of a hand by:
1. finding contours in the black/white image,
2. choosing the contour with the maximum area,
3. approximating that contour with a polygon,
4. finding the convex hull,
5. finding the convexity defects,
6. taking the average depth of the convexity defects.

If you are not familiar with convex hulls and convexity defects, see the following figure.

Next, the infrared sensor can detect retro-reflective materials: they appear white in the infrared frame. Thus, we can again make a black/white image using the inRange function (retro-reflective marker: white, background: black).
inRange(kinect.infraredImage, Scalar(65530), Scalar(65535), marker_bwImg);
The markerPos function gets the depth and position of the marker.
//Here, marker should be white in bwImg.
void markerPos(Mat bwImg, Mat depthImg, DepthSpacePoint &center, UINT16 &depth)
This function gets the depth and position of a marker by:
1. finding the bounding rectangle of the marker,
2. calculating the center of that bounding rectangle,
3. calculating the average depth around the marker.

The figure below illustrates the mechanism.
Finally, I converted the depth space point to a camera space point using the coordinate mapper exposed by NtKinect:
kinect.coordinateMapper->MapDepthPointToCameraSpace(dp, d, &handSp);
Source code: https://github.com/sehkmg/KinectHandTracking

References:
[1] http://www.wseas.us/e-library/conferences/2011/Mexico/CEMATH/CEMATH-20.pdf
[2] http://anikettatipamula.blogspot.kr/2012/02/hand-gesture-using-opencv.html
[3] http://nw.tsuda.ac.jp/lec/kinect2/index-en.html
