Week 3: 2 - 9 March [Continued AR prototype]
I have spoken with the university’s 3D technologist, who is
happy to assist with printing the airship model, but does not have the capacity
to do so for several weeks. Until I can print a model, I will continue to
develop the prototype I started last week.
[Image: Instantiating model versus instantiating model with shadow shader.]
To add an element of realism to the instantiated airship, I added a plane below the model so the light source within the scene would be blocked by the model, casting a shadow onto the plane. The shadow was poorly defined, bearing little resemblance to the model, and often went unnoticed against grey/white backgrounds.
To improve this, I added a shader to the airship model,
which casts a much better-defined shadow. I also updated the components of
the plane within Unity, setting its alpha value to 0 to make it transparent.
The shadow is still cast onto the plane's surface, while it appears to
the user that the shadow is cast directly onto the real-world surface.
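For reference, a common way to get a transparent plane that still displays shadows is a 'shadow catcher' shader along the lines of the sketch below. This is not the exact shader from my project – it is a minimal version written against Unity's built-in forward renderer, and the shader name and the 0.6 shadow strength are placeholder choices of mine:

Shader "Custom/TransparentShadowCatcher"
{
    SubShader
    {
        // AlphaTest queue so the plane still receives screen-space shadows
        Tags { "Queue" = "AlphaTest" "RenderType" = "Transparent" }
        Pass
        {
            Tags { "LightMode" = "ForwardBase" }
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fwdbase
            #include "UnityCG.cginc"
            #include "AutoLight.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                LIGHTING_COORDS(0, 1)   // interpolators for the shadow lookup
            };

            v2f vert(appdata_full v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                TRANSFER_VERTEX_TO_FRAGMENT(o);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // attenuation is 0 where the plane is fully shadowed, 1 where lit
                float attenuation = LIGHT_ATTENUATION(i);
                // draw black only where a shadow falls; everywhere else stays clear
                return fixed4(0, 0, 0, (1 - attenuation) * 0.6);
            }
            ENDCG
        }
    }
    Fallback "VertexLit"   // provides the shadow caster pass
}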
[Image: Target video overlay.]
Considering further practical applications for the exhibition, I referred to my notes from my meeting with Basil, where he explained that videos of other groups' projects may be shown at the event. After reading the Vuforia documentation, I learnt that the process of including a video is straightforward – almost identical to instantiating a model using a database picture as a reference.
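As a sketch of that process (the quad/VideoPlayer setup and field names are my own assumptions, not the exact scene I built), the video simply plays and pauses with the target's tracking state:

using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Attach to the ImageTarget; plays a video on a child quad while the target is tracked.
public class TargetVideoOverlay : MonoBehaviour, ITrackableEventHandler
{
    public VideoPlayer videoPlayer;   // VideoPlayer component on a quad parented to the target
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool visible = newStatus == TrackableBehaviour.Status.DETECTED
                    || newStatus == TrackableBehaviour.Status.TRACKED
                    || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (visible) videoPlayer.Play();
        else videoPlayer.Pause();
    }
}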
To build further on the initial prototype, I hoped to improve how the user can interact with and manipulate the game scene.
I removed the on-screen joystick I included previously and implemented 'virtual buttons'. Vuforia includes basic button functionality: regions of the
target image can be designated as buttons, which then monitor whether the 'features' Vuforia
uses to detect images are blocked from the device camera's view. If
they are, the button is activated.
By default, the buttons use a 'preview material' which is not visible when the application is running. To make the application more intuitive I replaced this with my own material, which is persistent, indicating the buttons to users. In future it may be clearer still if the buttons are also physically printed onto the database image.
I noticed that in some settings the buttons performed
differently to others – this is due to the lighting, as reflections on the
database image can interfere with feature detection.
I found that by lowering the threshold for the number of features that must be hidden to trigger a button, the buttons can be used much more reliably across environments.
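A sketch of the handler wiring, assuming two hypothetical buttons named 'RotateLeft' and 'RotateRight' defined under the image target (the names and the rotate action are placeholders, not my final controls):

using UnityEngine;
using Vuforia;

// Attach to the ImageTarget; reacts to the virtual buttons defined beneath it.
public class AirshipButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    public Transform airship;   // placeholder reference to the instantiated model

    void Start()
    {
        // Register for press/release events from every button under this target
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        // Button names here must match those set in the Inspector
        if (vb.VirtualButtonName == "RotateLeft")
            airship.Rotate(Vector3.up, -15f);
        else if (vb.VirtualButtonName == "RotateRight")
            airship.Rotate(Vector3.up, 15f);
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb) { }
}

The threshold itself corresponds to the sensitivity setting (LOW/MEDIUM/HIGH) on each VirtualButtonBehaviour in the Inspector – a higher sensitivity means fewer occluded features are needed to trigger the button.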
[Image: Virtual buttons, manipulating model.]
As I mentioned in my last blog entry, I am excited about the possibilities that instantiating a persistent object relative to world space could offer (the ability to walk around the object, enter it, display information/effects as the user moves, etc.).
Ideally this would be done without a reference
image, so that multiple users could each virtually explore their own ship without
interfering with each other's experience.
My first attempt made use of Vuforia's camera prefab, with which I
could instantiate the model without a reference marker. While this did allow
the virtual model to be viewed against a real-world backdrop, the prefab
permanently locks the model relative to the camera, meaning the user can never
change their perspective in relation to the ship.
I then attempted to take the device's GPS coordinates and use them to compare the user's position to the position of the object in the scene, moving the object closer to or further from the camera accordingly. This implementation was inaccurate at best, with the model often snapping to inappropriate positions based on the generated values.
From testing and reviewing my script, this inconsistent behaviour seems to be caused either by my script updating the position at too large an interval, or by a device setting that results in inconsistent position updates.
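To illustrate the idea, a rough reconstruction of the approach (not my exact script – the metres-per-degree conversion and the Lerp smoothing factor are assumptions) might look like:

using UnityEngine;

// Rough sketch: move the model toward/away from the camera based on how far
// the user has walked from the point where the model was placed.
public class GpsDistanceMover : MonoBehaviour
{
    public Transform model;
    public float smoothing = 2f;          // higher = snappier, lower = smoother

    private float startLat, startLon;
    private bool hasFix;

    void Start()
    {
        Input.location.Start();           // requires location permission on the device
    }

    void Update()
    {
        if (Input.location.status != LocationServiceStatus.Running) return;

        var data = Input.location.lastData;
        if (!hasFix)
        {
            startLat = data.latitude;     // record the placement position once
            startLon = data.longitude;
            hasFix = true;
            return;
        }

        // Approximate metres walked since placement (equirectangular approximation)
        float dLat = (data.latitude - startLat) * 111320f;
        float dLon = (data.longitude - startLon) * 111320f
                   * Mathf.Cos(startLat * Mathf.Deg2Rad);
        float metres = Mathf.Sqrt(dLat * dLat + dLon * dLon);

        // Smooth toward the target position instead of snapping, so sparse
        // GPS updates do not jerk the model around
        Vector3 target = Camera.main.transform.position
                       + Camera.main.transform.forward * metres;
        model.position = Vector3.Lerp(model.position, target,
                                      smoothing * Time.deltaTime);
    }
}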
While continuing to test for issues with the GPS approach, I learnt that
Vuforia offers ground plane detection, which lets users instantiate objects relative
to world space at ground level or in mid-air. This automatically allows users
to change their perspective relative to the model's position. It offers a huge
improvement over my attempts, as the model can be instantiated anywhere a
surface can be detected, and it also accounts for height along the y-axis as
users change their distance, where my GPS approach only allowed changes along the x-axis.
Vuforia does not currently support this functionality on
my device, but I will build a scene to test it on a supported device as soon
as possible.
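For when a supported device is available, the standard Ground Plane wiring from the Vuforia samples is roughly as below – the PlaneFinder's interactive hit-test event is forwarded to a ContentPositioningBehaviour, which anchors the content to the detected surface (the field and method names for the content are my own placeholders):

using UnityEngine;
using Vuforia;

// Sketch: place the airship on a detected ground plane.
// Wire PlaneFinderBehaviour's OnInteractiveHitTest event to OnPlaneHit below.
public class GroundPlanePlacer : MonoBehaviour
{
    public ContentPositioningBehaviour contentPositioning; // references the anchor stage
    public GameObject airshipStage;                         // the content to anchor

    public void OnPlaneHit(HitTestResult result)
    {
        // Creates an anchor at the hit point and positions the content there,
        // so the model stays fixed in world space as the user walks around it
        contentPositioning.PositionContentAtPlaneAnchor(result);
        airshipStage.SetActive(true);
    }
}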
[Image: Object permanence.]
Moving forward, I will need to explore detection of 3D models to trigger further visuals, as well as continuing work on object permanence using Vuforia's plane detection – next time instantiating models relative to real-world space.