Week 4, Part 1: 9 - 16 March [Understanding Kinect functionality]

We decided early in the week that the three of us working on the R34 centenary project should split up, so that we could explore a range of methods for producing interactive experiences.

Of the ideas presented to Basil, the three he was most responsive to were mobile augmented reality, the Xbox Kinect and Oculus Rift VR, as each offers either a simultaneously accessible experience for many users or a deep, meaningful experience for a solo user. Henry will continue work on augmented reality, Elliot on VR, and I will develop a desktop application using the Kinect.

I chose to work with the Kinect v1 (Xbox 360) rather than the Kinect v2 (Xbox One). The latter offers significant improvements over the v1, but is more expensive and also requires a costly adapter for USB compatibility, making it less suitable for the museum.

While I am familiar with the Kinect's top-level functionality, I have very little understanding of how to use the peripheral to achieve my own goals, so I began reading about how the Kinect obtains and processes user data.

Built into the bar are an RGB camera, a multi-array microphone, and a depth sensor.

The depth sensor consists of an infrared projector and an infrared camera. The projector casts a fixed speckle pattern of infrared dots across its field of view. Because the pattern is known, the sensor can recognise each speckle and measure how far it has shifted from its expected position, which gives the angle at which that point sits relative to the sensor. With that angle and the fixed baseline between projector and camera, simple trigonometry gives each speckle's distance from the camera.
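As a rough sketch of the geometry, the shift (disparity) of each speckle converts directly into depth. The baseline and focal length below are approximate figures commonly quoted for the Kinect v1, not values taken from the SDK:

```cpp
#include <cstdio>

// Structured-light triangulation: a speckle observed 'disparity' pixels
// away from its expected reference position lies at
//     depth = (baseline * focalLength) / disparity
// Assumed figures for the Kinect v1: ~7.5 cm projector-camera baseline,
// ~580 px focal length for the 640x480 IR camera.
float depthFromDisparity(float baselineM, float focalPx, float disparityPx) {
    return (baselineM * focalPx) / disparityPx;
}

int main() {
    const float baselineM = 0.075f; // approximate baseline (m)
    const float focalPx   = 580.0f; // approximate IR focal length (px)
    // A speckle shifted by 20 px sits roughly 2.2 m from the camera
    printf("depth: %.2f m\n", depthFromDisparity(baselineM, focalPx, 20.0f));
    return 0;
}
```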

This means the Kinect may suffer performance issues in brightly lit areas, or outdoors, where the speckle pattern would be washed out by ambient infrared. It also means that only one Kinect can be used in a space at a time, as overlapping patterns from multiple units would cause angle-detection issues.

The Kinect is also only able to detect up to six people simultaneously, and of those it can fully track a maximum of two skeletons at a time.
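This limit is visible directly in the native SDK: every skeleton frame carries six slots, each with a tracking state, but at most two slots are ever fully tracked. Below is a minimal sketch against the SDK v1.8 C++ API (error handling trimmed):

```cpp
#include <Windows.h>
#include <NuiApi.h> // native Kinect SDK v1.x header

int main() {
    // Initialise the first attached sensor for skeleton tracking
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON))) return 1;

    HANDLE frameReady = CreateEvent(NULL, FALSE, FALSE, NULL);
    NuiSkeletonTrackingEnable(frameReady, 0);

    // Poll a handful of frames for demonstration purposes
    for (int frames = 0; frames < 300; ++frames) {
        WaitForSingleObject(frameReady, INFINITE);
        NUI_SKELETON_FRAME frame = {};
        if (FAILED(NuiSkeletonGetNextFrame(0, &frame))) continue;
        NuiTransformSmooth(&frame, NULL); // default jitter smoothing

        // Six slots per frame, but at most two are ever fully tracked
        for (int i = 0; i < NUI_SKELETON_COUNT; ++i) {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
            if (s.eTrackingState == NUI_SKELETON_TRACKED) {
                // Full 20-joint skeleton in s.SkeletonPositions
            } else if (s.eTrackingState == NUI_SKELETON_POSITION_ONLY) {
                // Detected user: centre-of-mass in s.Position, no joints
            }
        }
    }

    NuiShutdown();
    return 0;
}
```

Notably, the position-only users still report a usable centre-of-mass, which may be one route to involving more than two visitors at once.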

Both of the above are potential issues for the airship project: Basil has advised that the exhibition will likely take place in an open hangar, where natural light may interfere, and the main draw of the Kinect was involving a larger number of users at any one time.

While the former may be solved simply by repositioning the equipment, the latter may be possible to overcome through choice of implementation.


After doing this introductory reading to learn how the equipment functions, I moved on to installing the device ready for use.

The latest SDK the Kinect v1 is compatible with is v1.8. After installing it, the device was still unrecognised, and I had to manually search for the 2013 drivers.
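With the drivers in place, a small check against the native API (a sketch assuming the SDK v1.8 C++ headers) confirms whether the runtime can actually see the device:

```cpp
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>

int main() {
    // Ask the runtime how many sensors are attached and recognised
    int count = 0;
    if (FAILED(NuiGetSensorCount(&count)) || count == 0) {
        printf("No Kinect found - check the SDK and drivers.\n");
        return 1;
    }

    INuiSensor* sensor = NULL;
    if (SUCCEEDED(NuiCreateSensorByIndex(0, &sensor))) {
        // S_OK from NuiStatus means the sensor is connected and ready
        printf("Sensor 0 status: 0x%08lX\n",
               (unsigned long)sensor->NuiStatus());
        sensor->Release();
    }
    return 0;
}
```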

Since the Kinect v1 was officially discontinued in 2014 (the Kinect for Windows and Kinect v2 released in 2012 and 2013 respectively), resources and compatible packages for the Kinect v1 are comparatively difficult to find. This has already been the cause of several time-consuming obstacles.

After a few hours, all of the resulting incompatibility issues were resolved. The next stage will be to experiment with the premade introductory applications from Microsoft, to get a better understanding of the Kinect's practical capabilities.
