How can we take advantage of both screen space and world space to present, organize, and manipulate information?
Digital screen-space UI is largely constrained to flat layers. Conversely, most augmented reality elements float in isolation within world space. I wanted to explore ways to take advantage of both. You can read more about this in another post.
To illustrate this, I came up with a ‘push to world’ UI interaction using information that benefits from being shown in 3D: in this case, live earthquake data from the USGS. Seeing certain types of data in 3D gives us a better sense of scale and proportion. Seismic waves are measured on a logarithmic scale, so visualizing the information spatially can give us a better feel for the relationships.
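To make the logarithmic point concrete, here is a minimal sketch (not from the original project) of how magnitude differences translate into the amplitude and energy ratios a spatial visualization would need to convey:

```python
def amplitude_ratio(m1, m2):
    """Relative ground-motion amplitude between two magnitudes.
    Each whole-number step corresponds to a 10x increase in measured amplitude."""
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    """Relative energy release, which grows by ~10^1.5 (about 31.6x) per magnitude step."""
    return 10 ** (1.5 * (m2 - m1))

# Comparing a magnitude 7.0 quake to a 5.0:
print(amplitude_ratio(5.0, 7.0))  # 100x the amplitude
print(energy_ratio(5.0, 7.0))     # 1000x the energy
```

Ratios like these are hard to intuit from a flat list of numbers, which is exactly where a 3D representation helps.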
This required some primitive procedural mesh generation and world/screen space management, which I hacked together.
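The post doesn't include the project's code, but a minimal sketch of the kind of primitive procedural mesh generation involved might look like this: building the vertex and triangle lists for a cone spike whose height could encode magnitude (the function name and parameters here are hypothetical, for illustration only):

```python
import math

def cone_mesh(height, radius=0.1, segments=12):
    """Generate vertices and triangle indices for a simple cone spike.
    Vertex 0 is the apex at `height`; vertex 1 is the base center."""
    verts = [(0.0, height, 0.0),  # apex
             (0.0, 0.0, 0.0)]     # base center
    # Ring of vertices around the base
    for i in range(segments):
        a = 2 * math.pi * i / segments
        verts.append((radius * math.cos(a), 0.0, radius * math.sin(a)))
    tris = []
    for i in range(segments):
        j = 2 + i
        k = 2 + (i + 1) % segments
        tris.append((0, k, j))  # side face from apex to ring edge
        tris.append((1, j, k))  # base cap face
    return verts, tris

verts, tris = cone_mesh(height=1.5)
# 2 + segments vertices; 2 triangles per segment
```

In a real engine the vertex and index lists would be handed to a mesh object each frame or on data refresh; the structure stays this simple.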
The sense of scale is especially striking at higher magnitudes.
In hindsight, moving toward the camera is too intense; a ‘recede only’ interaction would be much less jarring. Additional shortcuts to rotate the visualization in world space would also let people be more economical with their physical movements.
This project led to other ideas currently in progress.