Showing posts from March, 2018

Week 6: 23 - 30 March [Plane Detection, GPS, User Movement]

Having implemented a physically static scene triggered by recognition of a database image, I began the week by continuing to research how to generate a 3D model positioned in world space that the user can physically walk around and interact with, to increase the application's interactive potential. This approach would also allow multiple users to use the application independently and at the same time, without needing to crowd around a single database image or mass-produce many copies of it. While as a group we had played with the idea of adding image detection to leaflets, any images and virtual buttons used would need the maximum recognition rating while also fitting a layout suitable for the museum's needs, both of which may interfere with the museum's existing plans.

Ground Plane Detection

Vuforia heavily advertises ground plane detection as one of its most powerful features. From the resources made available online I learnt the platform offers object permanence…
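A minimal sketch of the kind of placement script this involves, assuming Vuforia's Ground Plane components of the time (a PlaneFinderBehaviour exposing an interactive hit-test event); component and event names may differ between Vuforia versions, and the prefab name is a placeholder:

    using UnityEngine;
    using Vuforia;

    // Place the airship model on a detected ground plane when the user taps the screen.
    public class AirshipPlacer : MonoBehaviour
    {
        public GameObject airshipPrefab;          // model to place in world space
        public PlaneFinderBehaviour planeFinder;  // Vuforia plane finder in the scene

        private GameObject placedAirship;

        void Start()
        {
            // Forward interactive hit-test results (screen taps) to our handler.
            planeFinder.OnInteractiveHitTest.AddListener(OnHitTest);
        }

        void OnHitTest(HitTestResult result)
        {
            // Instantiate the model once, then move it to each new tap position
            // so the user can walk around it and reposition it.
            if (placedAirship == null)
                placedAirship = Instantiate(airshipPrefab, result.Position, result.Rotation);
            else
                placedAirship.transform.SetPositionAndRotation(result.Position, result.Rotation);
        }
    }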

Week 5: 16 - 23 March [Continued AR Prototype, Atlantic crossing]

Moving on from use of the Kinect v1, I returned to my earlier prototypes made using Vuforia. Of the prototypes created so far, testers seemed to respond most to the virtual buttons and being able to directly interact with them. I began designing scene ideas that would allow the user to interact with buttons while also displaying information about the airship. Using the R34's maiden voyage across the Atlantic as the basis for the scene, I took the opportunity to use software new to me and produced low-detail models of Western Europe and the USA in Blender. Not having experience in modelling, creating the models did produce a few minor issues, the most significant of which was the model appearing transparent in Unity when viewed from certain angles. After consulting forums, I found that a portion of the model's normals were inverted; this was rectified in Blender and the model reimported into Unity correctly. I applied the simple shader made as part of…
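A minimal sketch of the kind of virtual button interaction testers responded to, assuming the Vuforia virtual button handler interface of the time (IVirtualButtonEventHandler); the information panel is a placeholder object name:

    using UnityEngine;
    using Vuforia;

    // Show an information panel about the Atlantic crossing while a virtual button is covered.
    public class CrossingInfoButton : MonoBehaviour, IVirtualButtonEventHandler
    {
        public VirtualButtonBehaviour virtualButton; // button placed on the image target
        public GameObject infoPanel;                 // e.g. text about the R34's maiden voyage

        void Start()
        {
            virtualButton.RegisterEventHandler(this);
            infoPanel.SetActive(false);
        }

        public void OnButtonPressed(VirtualButtonBehaviour vb)
        {
            infoPanel.SetActive(true);   // reveal the information while the button is pressed
        }

        public void OnButtonReleased(VirtualButtonBehaviour vb)
        {
            infoPanel.SetActive(false);  // hide it again when the button is released
        }
    }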

Week 4, Part 3: 9 - 16 March [Kinect Prototypes]

I began building basic prototypes to test functionality, using the example scenes provided in the Windows example package as a starting point to understand how the data is collected and used. The package comes with documentation explaining the top-level user interaction with the sensor, and the example scenes provide functional implementation examples. Throughout development of each of my test scenes I found several incompatibility issues: a significant number of the commands and Unity features used by the Kinect v1 package are deprecated. In addition, Unity would often crash on playing a scene. From reading archived forums I found that this is a common problem in newer versions of Unity. After installing earlier Unity versions and following forum consensus, I found Unity 3.2 to be the most reliable (it still occasionally crashes, but far more rarely). I was unable to trace the cause of the crash, though it is likely the result of a memory leak within the example project. My first…
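A minimal sketch of the kind of joint-driven behaviour built in these test scenes: driving a GameObject from a tracked hand joint. IKinectSource is a hypothetical stand-in for whatever skeleton data the example package exposes; the real wrapper's API differs.

    using UnityEngine;

    // Hypothetical interface standing in for the Kinect example package's skeleton data.
    public interface IKinectSource
    {
        bool IsUserTracked();
        Vector3 GetRightHandPosition();
    }

    // Moves a cursor object to follow the user's right hand.
    public class HandFollower : MonoBehaviour
    {
        public MonoBehaviour kinectSourceBehaviour; // component implementing IKinectSource
        public Transform cursor;                    // object driven by the user's hand

        IKinectSource source;

        void Start()
        {
            source = kinectSourceBehaviour as IKinectSource;
        }

        void Update()
        {
            // Only move the cursor while the skeleton is tracked, so it does not
            // jump when joints are lost during fast movement.
            if (source != null && source.IsUserTracked())
            {
                Vector3 handPos = source.GetRightHandPosition();
                cursor.position = Vector3.Lerp(cursor.position, handPos, 0.5f);
            }
        }
    }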

Week 4, Part 2: 9 - 16 March [Understanding Kinect practical capabilities]

Using the prebuilt applications included in the development kit v1.8 I could test the accuracy of the Kinect. While both are reasonably reliable, facial tracking and voice input will be impractical for this project: facial tracking is very restricted in terms of suitable content, and with the potential for many people to attend the event there is a high chance voice input will be hindered by nearby speech.

Head tracking and facial recognition
Voice input

While overall the device is responsive, fast movements often cause the Kinect to lose joints, so the user must stand in the calibration pose until they are redetected. If the Kinect is positioned poorly, with obstacles blocking its view of the user, then when the majority of a limb isn't tracked, detection of the remaining joints becomes unreliable and often requires recalibration. Some gestures in which limbs pass behind or in front of each other cause similar blocking if executed too slowly, occasionally…
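A minimal sketch of the recovery behaviour described above: if joints stop being tracked for a short period, prompt the user to hold the calibration pose until the sensor re-detects them. The IsSkeletonTracked flag is a hypothetical value that would be fed each frame from whatever skeleton data the wrapper provides.

    using UnityEngine;

    // Shows a calibration prompt when skeleton tracking has been lost for too long.
    public class CalibrationPrompt : MonoBehaviour
    {
        public GameObject promptPanel;      // UI asking the user to hold the calibration pose
        public float lostThreshold = 1.0f;  // seconds of lost tracking before prompting

        public bool IsSkeletonTracked;      // set each frame from the Kinect data (hypothetical)

        float lostTimer;

        void Update()
        {
            if (IsSkeletonTracked)
            {
                lostTimer = 0f;
                promptPanel.SetActive(false);
            }
            else
            {
                lostTimer += Time.deltaTime;
                if (lostTimer >= lostThreshold)
                    promptPanel.SetActive(true);
            }
        }
    }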

Week 4, Part 1: 9 - 16 March [Understanding Kinect functionality]

We decided early in the week that the three of us working on the R34 centenary project should split up so that we can explore a range of methods for producing interactive experiences. Of the ideas presented to Basil, the three he was most responsive to were mobile augmented reality, Xbox Kinect and Oculus Rift VR, as each offers either a simultaneously accessible experience for many users or a deep and meaningful experience for a solo user. Henry will continue work on augmented reality, Elliot on VR, and I will develop a desktop application using the Kinect. I chose to work with the Kinect v1 (Xbox 360) rather than the Kinect v2 (Xbox One). The latter offers significant improvements over the v1, but is more expensive and also requires a costly adapter to make it USB-compatible, making it less suitable for the museum. While I am familiar with the top-level functionality of the Kinect, I have very little understanding of how to use the peripheral to achieve my own…

Week 3: 2 - 9 March [Continued AR prototype]

I have spoken with the university's 3D technologist, who is happy to assist with printing the airship model but does not have the capacity to do so for several weeks. Until I can print a model I will continue to develop the prototype started last week.

Instantiating model versus instantiating model with shadow shader

To add an element of realism to the instantiated airship, I added a plane below the model so the light source within the scene would be blocked by the model, casting a shadow onto the plane. The shadow had very poor definition, not appearing even vaguely similar to the model, and often went unnoticed against grey/white backgrounds. To improve this, I added a shader to the airship model, which casts a much better-defined shadow. I also updated the components of the plane within Unity, setting its alpha value to 0 to make it transparent. The shadow is still cast onto the plane's surface, while it appears to the user that the shadow is cast direc­tly…
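A minimal sketch of the shadow-plane setup described: the plane keeps receiving shadows but is otherwise invisible. Whether an alpha of 0 still shows the shadow depends on the shader assigned to the plane's material, so this assumes a transparent, shadow-receiving material like the one used above.

    using UnityEngine;

    // Configure the ground plane so it catches the airship's shadow without being visible itself.
    public class ShadowPlaneSetup : MonoBehaviour
    {
        void Start()
        {
            Renderer rend = GetComponent<Renderer>();
            rend.receiveShadows = true;   // keep catching the airship's shadow

            Color c = rend.material.color;
            c.a = 0f;                     // make the plane itself invisible
            rend.material.color = c;
        }
    }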

Week 2, Part 2: 23 Feb. - 2 Mar. [AR Prototype]

I began looking at the available AR platforms to identify the most appropriate. The most promising options were 'ARKit', 'ARCore', 'Vuforia' and '8th Wall'. 'ARKit' is only for iOS and 'ARCore' is only for Android; both options exclude too many potential users. '8th Wall' and 'Vuforia' are both cross-platform and offer thorough documentation, but '8th Wall' doesn't offer image tracking where 'Vuforia' does. 'Vuforia' is also officially integrated into Unity 2017 and higher, making it my best option. Using Vuforia within Unity is an intuitive experience. Vuforia allows users to upload images, creating a database reference image which can then be imported into Unity. After activating a Vuforia license key, any Unity function can be triggered when the device camera detects an image identical to the one used for the database image. The first issue I found was that the model failed to load. After troubleshooting the problem and reading the Vuforia documentation, I found…
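A minimal sketch of how detection of the database image can trigger Unity behaviour, following the trackable event handler pattern used in the Vuforia samples of the time (API names may differ in later versions); the model reference is a placeholder:

    using UnityEngine;
    using Vuforia;

    // Shows the airship model when the database image is detected and hides it when tracking is lost.
    public class ImageFoundHandler : MonoBehaviour, ITrackableEventHandler
    {
        public GameObject airshipModel;   // model shown when the image is recognised

        TrackableBehaviour trackable;

        void Start()
        {
            trackable = GetComponent<TrackableBehaviour>();
            trackable.RegisterTrackableEventHandler(this);
            airshipModel.SetActive(false);
        }

        public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                            TrackableBehaviour.Status newStatus)
        {
            // DETECTED / TRACKED / EXTENDED_TRACKED all mean the image is currently in view.
            bool found = newStatus == TrackableBehaviour.Status.DETECTED
                      || newStatus == TrackableBehaviour.Status.TRACKED
                      || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

            airshipModel.SetActive(found);
        }
    }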