LaGeR workbench: A language and framework for the representation and implementation of device-agnostic gestural interactions in 2D and 3D

Main Authors: Mata-Montero, Erick; Odio-Vivi, Andrés
Format: Conference object
Language: English
Published: Springer Verlag, 2017
Subjects:
API
Online Access: https://link.springer.com/chapter/10.1007%2F978-3-319-26401-1_31
https://hdl.handle.net/2238/6947
Summary: The recent rise of virtual and augmented reality applications, ambient intelligence, and video games has encouraged the proliferation of gestural input devices such as the Razer Hydra, Leap Motion Controller, and Kinect 3D. Because these devices do not relay data in a standard format, application developers are forced to use a different Application Programming Interface (API) for each device. The main objective of this research was to define and implement LaGeR (Language for Gesture Representation), a language for the representation and interpretation of two- and three-dimensional device-agnostic gestures. Through LaGeR, developers can define gestures that are then processed regardless of the device and the APIs involved. To ease the use of LaGeR, LaGeR Workbench was developed as a set of tools and software libraries to convert gestures into LaGeR strings, recognize those strings as gestures, visualize the originating gestures in 3D, and communicate those detections to subscribing programs. In addition, LaGeR’s effectiveness was validated through experiments in which LaGeR Workbench was used to give users control of representative functionality of the Google Chrome web browser through two-hand gestures with a Razer Hydra device. LaGeR was found to be simple yet expressive enough to represent gestures and to develop gesture-based, device-agnostic applications. © Springer International Publishing Switzerland 2015.
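
To make the pipeline in the summary concrete (raw sensor samples are quantized into a device-agnostic string, which is then matched against registered gesture patterns), here is a minimal Python sketch. The six-letter direction alphabet, the min_move threshold, and the gesture table are illustrative assumptions only; they are not the actual LaGeR grammar or the Workbench API.

from typing import List, Tuple

# Six coarse movement directions along the coordinate axes (assumed alphabet,
# not LaGeR's own symbol set).
DIRECTIONS = {
    (1, 0, 0): "R", (-1, 0, 0): "L",
    (0, 1, 0): "U", (0, -1, 0): "D",
    (0, 0, 1): "F", (0, 0, -1): "B",
}

def dominant_axis(dx: float, dy: float, dz: float) -> Tuple[int, int, int]:
    """Snap a displacement to the axis with the largest magnitude."""
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if ax >= ay and ax >= az:
        return (1 if dx > 0 else -1, 0, 0)
    if ay >= az:
        return (0, 1 if dy > 0 else -1, 0)
    return (0, 0, 1 if dz > 0 else -1)

def to_gesture_string(samples: List[Tuple[float, float, float]],
                      min_move: float = 0.05) -> str:
    """Convert raw sensor positions into a direction string.

    Consecutive identical tokens are collapsed, so the string depends on
    the gesture's shape rather than on the device's sampling rate.
    """
    tokens: List[str] = []
    for (x0, y0, z0), (x1, y1, z1) in zip(samples, samples[1:]):
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        if max(abs(dx), abs(dy), abs(dz)) < min_move:
            continue  # ignore jitter below the movement threshold
        token = DIRECTIONS[dominant_axis(dx, dy, dz)]
        if not tokens or tokens[-1] != token:
            tokens.append(token)
    return "".join(tokens)

# Registered gestures: string pattern -> action name (hypothetical mappings).
GESTURES = {"RU": "scroll_up", "RD": "scroll_down", "LRLR": "refresh"}

if __name__ == "__main__":
    # An L-shaped stroke: right, then up. Any device producing this shape
    # yields the same string, which is the device-agnostic idea.
    stroke = [(0, 0, 0), (0.2, 0, 0), (0.4, 0, 0), (0.4, 0.2, 0), (0.4, 0.4, 0)]
    s = to_gesture_string(stroke)
    print(s, "->", GESTURES.get(s, "unrecognized"))  # RU -> scroll_up

Collapsing repeated tokens is what makes the string depend on the gesture's shape rather than on any particular device's sampling rate, which is the device-agnostic property the summary emphasizes.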