Week 4, Part 2: 9 - 16 March [Understanding Kinect practical capabilities]


Using the prebuilt sample applications included in the Kinect for Windows SDK v1.8, I was able to test the accuracy of the Kinect.

While both are reasonably reliable, facial tracking and voice input will be impractical for this project. Facial tracking is very restricted in the content it suits, and with the potential for many people to attend the event, there is a high chance voice input will be hindered by nearby speech.


[Screenshots: head tracking and facial recognition; voice input demos]

While the device is responsive overall, fast movements often cause the Kinect to lose track of joints, and the user must hold the calibration pose until they are redetected.
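
As a rough sketch of how an application might cope with this, the native NUI API in SDK v1.8 reports a per-joint tracking state with every skeleton frame. The snippet below (C++; `onTrackingLost` is a hypothetical application callback, not part of the SDK) polls for a frame and flags any joint the sensor has lost or is only inferring:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// Hypothetical application callback -- not part of the SDK.
void onTrackingLost(int skeletonIndex, NUI_SKELETON_POSITION_INDEX joint);

void pollSkeletonFrame()
{
    NUI_SKELETON_FRAME frame = {0};
    if (FAILED(NuiSkeletonGetNextFrame(100, &frame)))
        return; // no new skeleton frame within 100 ms

    for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
    {
        const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
        if (s.eTrackingState != NUI_SKELETON_TRACKED)
            continue;

        // Fast movement drops joints to INFERRED or NOT_TRACKED;
        // pause gesture input rather than act on unreliable positions.
        for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j)
        {
            if (s.eSkeletonPositionTrackingState[j] != NUI_SKELETON_POSITION_TRACKED)
                onTrackingLost(i, (NUI_SKELETON_POSITION_INDEX)j);
        }
    }
}
```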

If the Kinect is positioned poorly, with obstacles blocking its view of the user, the loss of the majority of a limb makes detection of the remaining joints unreliable and often requires recalibration.

Some gestures, where limbs pass behind or in front of each other, cause similar blocking if executed too slowly, and the resulting loss of joint tracking can occasionally produce unintended results. If tracking is lost only briefly in this way, however, recalibration is rarely required: full skeletal tracking is accurately recovered within seconds of all joints becoming unobstructed. I will design the gestures the user needs for input to avoid this as much as possible, for example by only committing to a gesture once its joints have been reliably tracked for several frames, as in the sketch below.
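
A minimal sketch of that idea, again against the native NUI API: a gesture is only accepted once every joint it depends on has been fully tracked (not inferred) for several consecutive frames. The frame threshold and the joint list are placeholder choices to tune later:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// Placeholder tuning value: frames of solid tracking required before a
// gesture is allowed to fire.
const int REQUIRED_STABLE_FRAMES = 5;

// Returns true once every listed joint has been fully tracked for
// REQUIRED_STABLE_FRAMES consecutive frames; any dropout resets the
// count, so a brief occlusion cannot trigger a false gesture.
bool gestureJointsStable(const NUI_SKELETON_DATA& s,
                         const NUI_SKELETON_POSITION_INDEX* joints,
                         int jointCount,
                         int& stableFrames)
{
    for (int i = 0; i < jointCount; ++i)
    {
        if (s.eSkeletonPositionTrackingState[joints[i]] != NUI_SKELETON_POSITION_TRACKED)
        {
            stableFrames = 0; // occluded or inferred: start over
            return false;
        }
    }
    return ++stableFrames >= REQUIRED_STABLE_FRAMES;
}

// Example: a two-handed gesture that needs hands and elbows visible.
const NUI_SKELETON_POSITION_INDEX swipeJoints[] = {
    NUI_SKELETON_POSITION_HAND_LEFT,  NUI_SKELETON_POSITION_HAND_RIGHT,
    NUI_SKELETON_POSITION_ELBOW_LEFT, NUI_SKELETON_POSITION_ELBOW_RIGHT,
};
```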

[Screenshots: lost limb tracking; false detection with a single user]
Another issue is that the sensor occasionally detects additional, false sets of joints when there is only one user (and no other person in the vicinity). I found this occurred very infrequently, but should it happen during the exhibition application, there's potential for it to interfere with the user experience.
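
If it does occur, one mitigation might be to treat only the nearest fully tracked skeleton as the active user and ignore any extra candidates the sensor reports. A sketch:

```cpp
#include <Windows.h>
#include <NuiApi.h>
#include <cfloat>

// Treat only the nearest fully tracked skeleton as the active user, so
// a phantom set of joints cannot steal control. Returns the index of
// the chosen skeleton, or -1 if none is currently tracked.
int primarySkeleton(const NUI_SKELETON_FRAME& frame)
{
    int nearest = -1;
    float nearestZ = FLT_MAX;
    for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
    {
        const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
        if (s.eTrackingState != NUI_SKELETON_TRACKED)
            continue;
        if (s.Position.z < nearestZ) // distance from sensor, metres
        {
            nearestZ = s.Position.z;
            nearest = i;
        }
    }
    return nearest;
}
```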

The sensor can be calibrated for users in a seated or standing stance, which may be helpful at the exhibition, as we are likely to have potential users of all ages. However, if the user is seated in a high-backed chair, the sensor often struggles to differentiate clearly between the user and the chair, again requiring the user to adopt the calibration pose.
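
Switching between the two stances is a single flag at skeleton-tracking initialisation in the native API; a minimal sketch:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// Seated skeleton mode: the sensor tracks only the ten upper-body
// joints, ignoring the chair and the user's legs. Omit the flag for
// the default full-body standing mode.
void enableSeatedTracking()
{
    NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON);
    NuiSkeletonTrackingEnable(
        NULL, // no frame-ready event; the app polls instead
        NUI_SKELETON_TRACKING_FLAG_ENABLE_SEATED_SUPPORT);
}
```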

The sensor has a maximum range, but I also found that tracking is lost if users are too close to the Kinect. The documentation confirms that the Kinect v1 has an optimal tracking area of 6 m² in front of the sensor, beginning approximately 90 cm from it. The positioning of the sensor will be something to consider when creating applications for the exhibition.
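
Either way, the application can reject skeletons outside the usable band and prompt the visitor to step forward or back. A short sketch, using the ~90 cm near limit from the documentation and an assumed 4 m far limit to be tuned once the exhibition layout is known:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// Near limit from the documentation; the far limit is an assumed
// placeholder value.
const float NEAR_LIMIT_M = 0.9f;
const float FAR_LIMIT_M  = 4.0f;

// True if the skeleton sits inside the usable tracking band; outside
// it, the app can prompt the visitor to move.
bool inTrackingRange(const NUI_SKELETON_DATA& s)
{
    return s.Position.z >= NEAR_LIMIT_M && s.Position.z <= FAR_LIMIT_M;
}
```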

[Screenshots: gesture input; skeletal tracking; depth tracking]

The most appropriate applications will focus on skeletal tracking (primarily gestures) and depth tracking (point clouds). Both offer a range of potential implementations suitable for the exhibition.
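
As a starting point for the point-cloud side, the depth stream can be converted frame by frame into 3D points in skeleton space (metres, with the sensor at the origin). A sketch assuming a 640x480 depth stream already opened via NuiImageStreamOpen; the 100 ms timeout is a placeholder:

```cpp
#include <Windows.h>
#include <NuiApi.h>
#include <vector>

// Convert one 640x480 depth frame into a point cloud in skeleton
// space. Assumes depthStream was opened with
// NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480, ...).
std::vector<Vector4> depthFrameToPointCloud(HANDLE depthStream)
{
    std::vector<Vector4> cloud;
    NUI_IMAGE_FRAME frame;
    if (FAILED(NuiImageStreamGetNextFrame(depthStream, 100, &frame)))
        return cloud; // no new depth frame within 100 ms

    NUI_LOCKED_RECT rect;
    frame.pFrameTexture->LockRect(0, &rect, NULL, 0);
    const USHORT* pixels = reinterpret_cast<const USHORT*>(rect.pBits);

    for (LONG y = 0; y < 480; ++y)
    {
        for (LONG x = 0; x < 640; ++x)
        {
            USHORT packed = pixels[y * 640 + x]; // depth << 3 | player index
            if (NuiDepthPixelToDepth(packed) == 0)
                continue;                        // no reading at this pixel
            cloud.push_back(NuiTransformDepthImageToSkeleton(
                x, y, packed, NUI_IMAGE_RESOLUTION_640x480));
        }
    }

    frame.pFrameTexture->UnlockRect(0);
    NuiImageStreamReleaseFrame(depthStream, &frame);
    return cloud;
}
```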
