TECTON 3D

Virtual Reality Tabletop


The technological advances witnessed in the last few years have enabled the development of new, more interactive applications for all kinds of scenarios. Multi-touch devices and depth sensors such as Microsoft's Kinect are clear examples of these advances, allowing both non-intrusive and inexpensive user tracking. As for visualization, commodity 3D displays let users perceive imagery as if it popped out of the screen. This motivates a fresh look at tabletop interfaces, towards better support for direct 3D manipulation in scenarios that until recently were conceivable only in the realm of science fiction.

Exploiting the aforementioned technological solutions, we developed a tabletop prototype that consists of a semi-immersive environment based on a stereoscopic multi-touch surface combined with a Microsoft Kinect depth camera. This camera tracks the user's head, enabling a real-time, personalized 3D perspective view of the contents shown on the table. As a user moves around the table, the perspective view of the building changes according to his or her movement. A 3D television set combined with active shutter glasses enables stereoscopic visualization, sending 1920×1080 pixel images to each eye 60 times per second. This setup allows rendering high-definition virtual objects as if they were lying above the table surface. The touch-enabled surface, using a multi-touch frame capable of detecting up to 10 simultaneous touches, allows users to interact with the virtual models through gestures.
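A head-coupled perspective of this kind is typically implemented with an off-axis (generalized) projection: the tracked head position becomes the eye of an asymmetric viewing frustum anchored to the physical screen plane, so the rendered scene stays registered with the table as the viewer moves. The paper excerpt does not give implementation details, so the sketch below is only an illustration of that standard technique (Kooima's formulation); the function name, screen-corner coordinates, and near-plane value are assumptions for the example.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def off_axis_frustum(eye, pa, pb, pc, near):
    """Frustum bounds (l, r, b, t) at the near plane for a tracked eye.

    pa, pb, pc: lower-left, lower-right, and upper-left corners of the
    physical screen (table surface), in the same coordinates as `eye`.
    """
    vr = normalize(sub(pb, pa))    # screen right axis
    vu = normalize(sub(pc, pa))    # screen up axis
    vn = normalize(cross(vr, vu))  # screen normal, towards the viewer
    va, vb, vc = sub(pa, eye), sub(pb, eye), sub(pc, eye)
    d = -dot(va, vn)               # perpendicular eye-to-screen distance
    # Project the corner vectors onto the screen axes and scale to the
    # near plane: these are the glFrustum-style left/right/bottom/top.
    l = dot(vr, va) * near / d
    r = dot(vr, vb) * near / d
    b = dot(vu, va) * near / d
    t = dot(vu, vc) * near / d
    return l, r, b, t
```

With the eye centred over the screen, the resulting frustum is symmetric; as the viewer walks around the table, the bounds become asymmetric, which is what keeps the virtual model visually anchored above the surface.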

With this semi-immersive environment, users can explore 3D virtual models of buildings using both hands. To manipulate the content on the stereoscopic surface, the visitor can use several touches to interact with one model at a time. We developed a finger-cluster interaction method, which allows users to move, rotate, and uniformly scale the models. By dragging the fingers anywhere along the surface, the user moves the model in the same direction. To rotate the model about the axis perpendicular to the surface, the user applies a rotational movement with at least two fingers, or with the entire hand. By changing the relative position of the fingers, a user can uniformly scale the object: as the distance between the fingers and their center grows or shrinks, the model scales up or down accordingly. This technique builds on the well-known Rotate-N-Translate algorithm (Hancock et al., 2006), available in almost every modern multi-touch device.
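The finger-cluster mapping described above can be sketched as follows: translation is the displacement of the touch centroid, rotation is the mean angular change of each finger about the centroid, and uniform scale is the ratio of mean finger-to-centroid distances between two frames. This is a hedged illustration of the general idea, not the prototype's actual code; the function name and the two-frame input format are assumptions.

```python
import math

def _wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def cluster_transform(prev, curr):
    """Derive (dx, dy, dtheta, scale) from two frames of touch points.

    prev, curr: equal-length lists of (x, y) finger positions on the
    surface, matched by touch id across the two frames.
    """
    n = len(prev)
    cpx = sum(p[0] for p in prev) / n      # previous centroid
    cpy = sum(p[1] for p in prev) / n
    ccx = sum(c[0] for c in curr) / n      # current centroid
    ccy = sum(c[1] for c in curr) / n
    dx, dy = ccx - cpx, ccy - cpy          # translation: centroid delta
    if n < 2:
        return dx, dy, 0.0, 1.0            # one finger: translate only
    # Rotation: mean change of each finger's angle about the centroid.
    dtheta = sum(_wrap(math.atan2(c[1] - ccy, c[0] - ccx)
                       - math.atan2(p[1] - cpy, p[0] - cpx))
                 for p, c in zip(prev, curr)) / n
    # Uniform scale: ratio of mean finger-to-centroid distances.
    rp = sum(math.hypot(p[0] - cpx, p[1] - cpy) for p in prev) / n
    rc = sum(math.hypot(c[0] - ccx, c[1] - ccy) for c in curr) / n
    scale = rc / rp if rp > 0 else 1.0
    return dx, dy, dtheta, scale
```

Applied once per input frame, the returned deltas are accumulated into the model's transform, so dragging, twisting, and pinching gestures all fall out of the same cluster computation.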

(from Interactive Tabletops for Architectural Visualization: Combining Stereoscopy and Touch Interfaces for Cultural Heritage. In Proceedings of the 32nd eCAADe Conference)
