Imagine we have a mobile device with hardware and software that can capture tilt (pitch, roll, yaw) and translation (x, y, z) motion. An accelerometer can readily detect tilt, while a camera can detect both tilt and translation.
If the hardware and software can publish this information in a format such as TiltML (Tilt Mark-up Language, an XML-based format), then front-end software such as Flash can use it to drive an interface.
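To make the idea concrete, here is a minimal sketch of what a TiltML document and a consumer of it might look like. TiltML is not a published schema, so the element and attribute names below are assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Hypothetical TiltML sample: element and attribute names are
# assumptions, not part of any published TiltML schema.
TILTML = """
<tilt>
  <rotation pitch="12.5" roll="-3.0" yaw="90.0"/>
  <translation x="0.0" y="1.5" z="-0.2"/>
</tilt>
"""

def parse_tiltml(doc):
    """Parse a TiltML string into a flat dict of float values."""
    root = ET.fromstring(doc)
    state = {}
    for element in root:
        for name, value in element.attrib.items():
            state[name] = float(value)
    return state

state = parse_tiltml(TILTML)
print(state["pitch"], state["y"])  # 12.5 1.5
```

A front end would poll or subscribe to this stream each frame and map the six values onto whatever interface metaphor the current mode calls for.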
A simple example would be to look down on the mobile device and tilt it to roll a ball through a maze.
Here is an example video: Tilt for Mobile Devices.
A device can be held in either horizontal or vertical orientation, but each orientation can be used in a number of different ways. We will call these ways modes. Here are six modes of interface:
MODE 1: Top View Tilt
Looking down or up at the device and tilting in pitch and roll. Rolling a ball through a maze.
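The ball-in-a-maze idea can be sketched as a simple physics mapping: treat pitch and roll as the slope of the play surface and let gravity accelerate the ball accordingly. The axis convention and the use of plain Euler integration are assumptions; a real game would tune these.

```python
import math

GRAVITY = 9.8  # m/s^2

def tilt_to_acceleration(pitch_deg, roll_deg):
    """Map device tilt to ball acceleration on the screen plane.
    Roll tips the ball along x, pitch along y (convention assumed)."""
    ax = GRAVITY * math.sin(math.radians(roll_deg))
    ay = GRAVITY * math.sin(math.radians(pitch_deg))
    return ax, ay

def step(pos, vel, pitch_deg, roll_deg, dt=0.016):
    """Advance the ball one frame using simple Euler integration."""
    ax, ay = tilt_to_acceleration(pitch_deg, roll_deg)
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# A level device produces no motion; tilting starts the ball rolling.
pos, vel = step((0.0, 0.0), (0.0, 0.0), 0.0, 0.0)
```

Calling `step` once per frame with the latest pitch and roll values is all the game loop needs; walls of the maze would clamp the resulting position.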
MODE 2: Front View Tilt
Holding the device in front of you vertically and steering with yaw. A racing game where the device is like a steering wheel.
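For the steering-wheel metaphor, yaw needs to be normalized into a steering value the game can use. A sketch, assuming a hypothetical 45-degree full-lock angle (a per-game tuning choice, not a fixed rule):

```python
def yaw_to_steering(yaw_deg, max_yaw=45.0):
    """Map device yaw to a steering value in [-1.0, 1.0].
    max_yaw is the assumed full-lock angle; rotation beyond it clamps."""
    return max(-1.0, min(1.0, yaw_deg / max_yaw))
```

A dead zone around zero (ignoring, say, the first couple of degrees) is a common addition so that small hand tremors do not register as steering input.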
MODE 3: Top View Translation one Axis
Looking down and using one translation axis (y) for motion, with tilt to steer. Walking or running creates motion in the game; pitch and roll, or yaw, steer.
MODE 4: Front View Translation one Axis
Looking forward, where moving forward causes forward motion in the game (z axis). Walking, running, or driving creates motion in the game, with yaw for steering.
MODE 5: Top View Translation all Axes
Looking down and mapping out two- or three-dimensional space with translation and yaw. Finding virtual items in real space while looking down (or up).
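Mapping real space from translation and yaw is essentially dead reckoning: accumulate each forward step along the current heading. A minimal sketch, assuming the device reports (forward distance, yaw) sample pairs:

```python
import math

def dead_reckon(moves):
    """Accumulate a 2D position from (forward_distance, yaw_deg)
    samples, e.g. walking while the device tracks motion and heading.
    Yaw 0 is taken as 'north' (+y); 90 degrees as 'east' (+x)."""
    x = y = 0.0
    for dist, yaw_deg in moves:
        heading = math.radians(yaw_deg)
        x += dist * math.sin(heading)
        y += dist * math.cos(heading)
    return x, y

# Walk 1 m ahead, turn 90 degrees, walk 1 m again.
print(dead_reckon([(1.0, 0.0), (1.0, 90.0)]))  # roughly (1.0, 1.0)
```

Comparing the accumulated position against the locations of virtual items is then enough to decide when the player has "found" one.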
MODE 6: Front View Translation all Axes
Looking forward and combining z translation with pitch or roll to map real space. Avoiding obstacles or capturing items as you walk.
Combining these modes leads to Full Space, where the device acts as a window into an alternate virtual space in full 3D. Putting such a device in front of your eyes is a form of mediated reality, where you can diminish or augment reality.