Week 4, Part 3: 9 - 16 March [Kinect Prototypes]
I began building basic prototypes to test the sensor's functionality, using the example scenes provided in the Kinect for Windows example package as a starting point to understand how the sensor data is collected and used. The package includes documentation explaining the top-level user interaction with the sensor, and the example scenes provide working implementations.
Throughout development of each of my test scenes I found several incompatibility issues. A significant number of the Unity commands and features used by the Kinect v1 package are deprecated. In addition, Unity would often crash on playing a scene. From reading archived forums I found that this is a common problem in newer versions of Unity. After installing earlier Unity versions and following forum consensus, I found Unity 3.2 to be the most reliable (it would still occasionally crash, but far more rarely). I was unable to trace the cause of the crashes, though they are likely the result of a memory leak within the example project.
My first attempts were scenes in which the user interacted directly with other game objects. I initially used joint tracking to produce a basic skeleton, then moved on to using the depth sensor to give a point-cloud representation of the user. The point cloud not only gave more accurate interactions, but also shows a recognisable figure in-game, which if used at an exhibition may be more engaging than a generic skeleton.
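Roughly, the depth-to-points step works like the sketch below. It is illustrative only: GetDepthFrame() is my own placeholder for whatever depth call the wrapper package actually exposes, and a real version would use the SDK's coordinate-mapping functions rather than the crude projection shown here.

```csharp
using UnityEngine;

public class DepthPointCloud : MonoBehaviour
{
    public GameObject pointPrefab;      // small primitive with a collider attached
    public int width = 320, height = 240;
    public int step = 8;                // sample every Nth pixel to keep object counts sane

    GameObject[] points;

    void Start()
    {
        points = new GameObject[(width / step) * (height / step)];
        for (int i = 0; i < points.Length; i++)
            points[i] = (GameObject)Instantiate(pointPrefab, Vector3.zero, Quaternion.identity);
    }

    void Update()
    {
        ushort[] depth = GetDepthFrame(); // placeholder: latest raw depth frame in millimetres
        int i = 0;
        for (int y = 0; y < height; y += step)
        {
            for (int x = 0; x < width; x += step)
            {
                float z = depth[y * width + x] * 0.001f; // mm -> metres
                // Crude back-projection into world space; the SDK's own
                // coordinate mapping should replace these scale constants.
                points[i++].transform.position = new Vector3(
                    (x - width * 0.5f) * z * 0.005f,
                    (height * 0.5f - y) * z * 0.005f,
                    z);
            }
        }
    }

    ushort[] GetDepthFrame()
    {
        // Placeholder body so the sketch compiles; the wrapper package
        // would supply the real frame here.
        return new ushort[width * height];
    }
}
```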
The first scene is a simple game where the user must keep the ship afloat. I imported the airship model and gave it Rigidbody and Collider components. If the user's point cloud enters the collider, a force is applied to the ship, lifting it into the air with a random degree of horizontal movement. If the ship's y-axis position falls too low, the ship is destroyed and another spawned.
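The core of that rule is small enough to sketch. Here I assume the point-cloud objects are tagged "Player" and the ship's collider is marked as a trigger; both are my own placeholder choices rather than how the scene is necessarily wired.

```csharp
using UnityEngine;

public class AirshipFloat : MonoBehaviour
{
    public GameObject airshipPrefab;   // replacement ship to spawn
    public float liftForce = 10f;
    public float sidewaysForce = 3f;
    public float destroyHeight = -5f;  // y position below which the ship is lost

    Rigidbody body;

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        // Lift the ship, with a random amount of horizontal drift.
        Vector3 push = Vector3.up * liftForce
                     + new Vector3(Random.Range(-1f, 1f), 0f, Random.Range(-1f, 1f)) * sidewaysForce;
        body.AddForce(push, ForceMode.Impulse);
    }

    void Update()
    {
        if (transform.position.y < destroyHeight)
        {
            // Ship fell too low: destroy it and spawn a replacement
            // (spawn point here is a placeholder).
            Instantiate(airshipPrefab, Vector3.zero, Quaternion.identity);
            Destroy(gameObject);
        }
    }
}
```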
[Image: Depth tracking to interact with game objects.]
[Image: Gesture controls.]
[Image: Joint tracking to alter perspective.]
To exaggerate the perspective effect in the third prototype, I added objects (with exclusively 90-degree angles) extending along the z-axis, and applied a grid texture to the background planes. Both provide reference points so the user can see how much the camera moves relative to the scene.
I had only been testing the virtual 3D scene myself, and recognised that if the Kinect was positioned higher or lower, or someone else tested the scene, the y-axis values were never within an appropriate range to experience the full shift in perspective.
To overcome this, when the user is first detected by the Kinect, the y-axis position of their head 'joint' is recorded and treated as zero, relative to the camera's y-axis position. While this resolved the initial problem, it does mean that users should enter the Kinect's view slightly crouched, so that they can then stand fully to raise the view and see over the airship.
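The calibration amounts to storing an offset when the user first appears, then driving the camera from the difference. In this sketch, UserDetected() and GetHeadY() are hypothetical stand-ins for the wrapper's actual skeleton-tracking calls.

```csharp
using UnityEngine;

public class HeadPerspective : MonoBehaviour
{
    public Transform sceneCamera;
    public float sensitivity = 1f;

    float headOffsetY;
    bool calibrated;

    void Update()
    {
        if (!UserDetected()) return;

        float headY = GetHeadY();
        if (!calibrated)
        {
            // Zero the head height on first detection, so different users
            // and sensor heights all start from the same baseline.
            headOffsetY = headY;
            calibrated = true;
        }

        // Move the camera by the change in head height since calibration.
        Vector3 pos = sceneCamera.position;
        pos.y = (headY - headOffsetY) * sensitivity;
        sceneCamera.position = pos;
    }

    bool UserDetected() { return true; } // placeholder for the wrapper's tracking state
    float GetHeadY() { return 0f; }      // placeholder for the head joint's y position
}
```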
During production of the three prototypes I also noted instances where the Kinect would automatically adjust its viewing angle at inappropriate times, pivoting up and down during play. From reading I know it is possible to limit the Kinect's allowed range of motion, so in future I will restrict this while a scene is playing.
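For reference, the Kinect for Windows SDK v1 exposes the tilt motor through KinectSensor.ElevationAngle. As far as I know there is no hard lock, but setting a fixed angle after the sensor starts should stop the unwanted pivoting; how to call this from inside a given Unity wrapper depends on the package. A standalone sketch:

```csharp
using Microsoft.Kinect;

class LockTilt
{
    static void Main()
    {
        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            if (sensor.Status != KinectStatus.Connected) continue;
            sensor.Start();
            // Fix the motor at 0 degrees so it does not pivot during play.
            // Valid values run from MinElevationAngle to MaxElevationAngle.
            sensor.ElevationAngle = 0;
            break;
        }
    }
}
```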