Week 5: 16 - 23 March [Continued AR Prototype, Atlantic crossing]
Moving on from use of the Kinect v1, I returned to my
earlier prototypes made using Vuforia.
To add to this design, a scene following the Atlantic crossing will be triggered when the ship reaches the USA, as my research identified an event during the landing with the potential to be converted into an interactive scene. Using Unity scene management, a new scene can be loaded and a new ARcam instantiated to recognise the same database image target but trigger the instantiation of different assets.
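A minimal sketch of how the scene change could be scripted (the scene name, tag and trigger approach are illustrative assumptions, not the project’s actual setup):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical sketch: load the follow-on scene once the airship
// reaches a trigger volume placed over the USA.
public class CrossingTransition : MonoBehaviour
{
    // Name of the scene holding the second ARcam and its assets
    // (placeholder name, not the project's actual scene).
    [SerializeField] private string landingScene = "AtlanticLanding";

    private void OnTriggerEnter(Collider other)
    {
        // The steered airship model is assumed to carry the tag "Airship".
        if (other.CompareTag("Airship"))
        {
            SceneManager.LoadScene(landingScene);
        }
    }
}
```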
I then tried editing the ARcam behaviour script, limiting
it to trigger only the first set of assets that are loaded. While working on
this approach and browsing the Vuforia ARcam documentation, I learned that any
ARcam with the same licence key will reference the same image targets. I then edited the licence key of the camera in the following scene, which allowed separate assets to be loaded from the same image as intended.
Of the prototypes created so far, testers seemed to respond most to the virtual buttons and to being able to interact with them directly.
I began designing scene ideas which would allow the user to interact with buttons while also displaying information about the airship.
Using the R34’s maiden voyage across the Atlantic as the basis for the scene, I took the opportunity to use software new to me and produced low-detail models of Western Europe and the USA using Blender.
Having no prior experience in modelling, I ran into a few minor issues while creating the models. The most significant was the models appearing transparent in Unity when viewed from certain angles. After consulting forums, I found that a portion of the models’ normals were inverted; this was rectified in Blender and the models were reimported into Unity correctly.
I applied the simple shader made as part of my earlier
prototypes to improve the visual quality of the scene.
This required even further research into the history of
the airship – identifying the most important points from the transatlantic
journey and including them within the scene.
I added markers between the continents to signify the location at which each event occurred and to indicate the route the user should steer. To show when a marker had been reached, I produced a particle system within Unity to highlight the area.
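A minimal sketch of how a marker could detect the airship and play its effect (the field names and simple distance check are illustrative assumptions, not the project’s actual script):

```csharp
using UnityEngine;

// Hypothetical sketch: play a marker's highlight effect once the
// airship comes within reach, using a simple distance check.
public class RouteMarker : MonoBehaviour
{
    [SerializeField] private Transform airship;        // the steered airship model
    [SerializeField] private ParticleSystem highlight; // effect shown on arrival
    [SerializeField] private float reachRadius = 0.1f; // arrival distance in scene units
    private bool reached;

    private void Update()
    {
        if (!reached && Vector3.Distance(airship.position, transform.position) < reachRadius)
        {
            reached = true;   // fire only once per marker
            highlight.Play(); // highlight the event location
        }
    }
}
```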
When the default Unity camera was replaced with the Vuforia ARcam, all UI elements began behaving erratically on playing the scene. Swapping back to a default Unity camera resolved the issue, meaning that the ARcam, or the components it relies on, was to blame.
Elements would appear non-functional or produce a different outcome within the Unity editor in both scene and game view. The issue was made worse by unpredictable behaviour: the size, scale, delay or outright failure of the particle systems would not occur consistently. Each was affected differently every time the database trigger image was detected, making identifying the problem a lengthier process.
The database image itself was also not the issue: I tested multiple 5-star rated images (the maximum rating for reliability of detection), all of which shared the same problems.
I ruled out the device camera as the cause by testing the scene with three webcams of varying quality.
I re-wrote my scripts to ensure there were no issues that had gone undetected by Visual Studio. After ensuring the code was functional, I confirmed that each effect was being triggered by replacing the particle systems with marker rotations and strings printed to the console.
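A sketch of that substitution, assuming each marker exposes a method called on detection (the names are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch of the debugging substitution: rather than playing
// a particle system, spin the marker and write to the console so the
// trigger itself can be verified in isolation.
public class MarkerDebug : MonoBehaviour
{
    private bool triggered;

    // Called by whatever logic previously played the particle system.
    public void OnMarkerReached()
    {
        triggered = true;
        Debug.Log($"Marker '{name}' reached"); // visible in the Unity console
    }

    private void Update()
    {
        if (triggered)
        {
            // Rotate in place instead of emitting particles.
            transform.Rotate(0f, 90f * Time.deltaTime, 0f);
        }
    }
}
```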
Next, I consulted the Vuforia documentation and developer forums and found examples of other users with similar issues, with Vuforia developers advising that this was a problem being addressed in future updates. However, I found one comment advising that the apparently random size of the effects was a scaling issue.
Assessing my own project with this advice, I paused the scene in the Unity editor once the database image target had been recognised by the webcam. I found that when the particle systems were not appearing they were in fact being triggered, but their scale was being set relative to their parent object rather than to world space, meaning they were playing inside their parent, invisible to the user.
Once the cause was known, I altered each particle system’s scaling mode to scale with the hierarchy, so that its size and distribution are identical whether it sits inside its parent and database image target or outside of them.
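The same change can be expressed in code; a minimal sketch, assuming the effects sit as children of each marker:

```csharp
using UnityEngine;

// Hypothetical sketch of the fix: switch each child particle system's
// scaling mode from Local to Hierarchy so it inherits the combined
// scale of the marker and the image target above it.
public class ParticleScaleFix : MonoBehaviour
{
    private void Awake()
    {
        foreach (var ps in GetComponentsInChildren<ParticleSystem>())
        {
            var main = ps.main;
            main.scalingMode = ParticleSystemScalingMode.Hierarchy;
        }
    }
}
```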
After testing within Unity and on the mobile device
multiple times, this resolved both scale and positioning issues.
To display the information when a marker was reached, I initially used a Unity canvas: a world-space canvas kept the text relative to the trigger image, and a script rotated the text to always face the ARcam position.
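A minimal sketch of such a billboard script (the camera reference is an assumed inspector-assigned field):

```csharp
using UnityEngine;

// Hypothetical sketch of the billboard behaviour: rotate the text each
// frame so it faces the AR camera's current position.
public class FaceARCamera : MonoBehaviour
{
    [SerializeField] private Transform arCamera; // the ARcam transform, assigned in the inspector

    private void LateUpdate()
    {
        // Look along the direction away from the camera so the
        // text reads the right way round rather than mirrored.
        transform.rotation = Quaternion.LookRotation(transform.position - arCamera.position);
    }
}
```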
After testing the scene, I learned that canvas text cannot be directly instantiated on recognition of the database trigger image, as the rest of the game objects are. I decided to substitute the canvas text element with 3D Text game objects in Unity, a feature I had not used before and was made aware of through the Vuforia developer forums. While the canvas text could have been triggered through a script, 3D Text allows more straightforward positional editing in Unity, and removing the script which had made the element face the ARcam amplified the virtual 3D effect.
When first displayed in the scene, however, the 3D Text elements were pixelated and of very low quality.
I found the solution was to set the default text size to a much larger figure, then scale the character size down proportionally, giving text of the size initially intended but with a far sharper font.
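A minimal sketch of that fix using Unity’s TextMesh component, which backs 3D Text (the exact figures are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch of the sharpening fix: render the font at a large
// point size, then shrink the characters back down so the text fills the
// same space but with far more texture resolution behind each glyph.
[RequireComponent(typeof(TextMesh))]
public class SharpText : MonoBehaviour
{
    private void Awake()
    {
        var text = GetComponent<TextMesh>();
        text.fontSize = 100;        // large render size (illustrative figure)
        text.characterSize = 0.05f; // proportional shrink back to the intended size
    }
}
```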
I noticed that when I replaced the virtual button materials with images to indicate their purpose, users were confused when they could no longer see their hands. I found that this is because Vuforia renders the children of the most recently detected database image a layer above any model already loaded (nothing of the real world is rendered in front of any rendered object).
I researched methods of hiding objects when the user moves their hands ‘through’ a model. After reviewing approaches to detecting the hand from a mobile device, and the Vuforia forums, it is clear this is not possible on a single mobile device; multiple cameras would be required to provide positional data to an advanced shader.
Instead, I will reduce the alpha value of the virtual button images, so users can still see their hands relative to the controls.
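A minimal sketch of how that fade could be applied, assuming the button image uses a material whose shader supports transparency (the field names and value are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch: lower the alpha of a virtual button's image so the
// user's hand remains visible through it. Assumes the material's shader
// supports transparency and exposes a standard colour property.
public class TranslucentButton : MonoBehaviour
{
    [Range(0f, 1f)]
    [SerializeField] private float alpha = 0.5f; // illustrative value

    private void Awake()
    {
        var material = GetComponent<Renderer>().material;
        var colour = material.color;
        colour.a = alpha;        // fade the image, keeping its colour
        material.color = colour;
    }
}
```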
Testing shows this works, but I initially had issues loading different assets from a database image even when using different ARcams. Both models would attempt to load at random, resulting in either one of the two scenes loading, or both, followed shortly by the editor crashing. I attempted to prevent this error by editing the ARcam prefab, limiting its simultaneous detection to a single database image at any one time. Unfortunately this did not solve the problem, as the single image was set to load the assets for both scenes.
Later, I read through the events of the R34 logbook, selecting relevant information to be displayed in the application throughout the voyage.
To try and maintain a sense of continuity, I also set the airship model not to be destroyed on load, so that it appears to make the journey and then land smoothly.
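A minimal sketch of that persistence, using Unity’s DontDestroyOnLoad:

```csharp
using UnityEngine;

// Hypothetical sketch: keep the airship alive across the scene change so
// it appears to complete the crossing and land without a visible reset.
public class PersistentAirship : MonoBehaviour
{
    private void Awake()
    {
        // Survives SceneManager.LoadScene; must be on a root GameObject.
        DontDestroyOnLoad(gameObject);
    }
}
```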
Now that this scene is in place, next week I will continue researching object permanence and moving objects relative to world space.