Project

Objectives

The ambition of multiTOUCH is to train a cohort of scientific researchers who can work in the R&D departments of companies in the digital economy, and who have the skills and the will to create devices and applications that are accessible to everyone.

multiTOUCH will explore how tactile feedback can be integrated with auditory and visual feedback in next-generation multisensory human-computer interfaces (HCIs), such as multisensory tactile displays (TDs) and multisensory virtual reality (VR) setups, with the aim of producing an enriched user experience. Compared to the extensive knowledge of how vision and audition interact, current knowledge of how touch integrates with the other senses is relatively scarce, especially in conditions of active touch, i.e. when tactile input is generated by active contact with the environment (e.g. tactile exploration of the surface of a display or of a VR environment).

The question of introducing more multimodal haptic feedback into consumer products is becoming crucial as society grows increasingly reliant on digital solutions. Smartphones and tablets, which rely entirely on touch screens for user interaction, are now commonly used to access the internet. These touch screens have become so cost-effective that physical buttons and knobs are removed and replaced by virtual buttons displayed on the flat, hard surface of displays. Paradoxically, while these devices are now accessible to more people, a segment of the population is excluded from these digital developments: elderly individuals who struggle to use touch screens, and visually or hearing-impaired individuals. Indeed, computers and other devices provide information to users almost exclusively through visual and auditory feedback. Technological efforts have focused on optimizing vision- and audition-based technologies, and on the coupling and interactions between these two senses. We believe that introducing more haptic feedback into HCIs, whether VR-based or not, will improve the acceptability of new digital devices and applications.

Research

Through multiTOUCH, we aim to train a multidisciplinary team of young researchers who will tackle research objectives spanning several fields:

  • Systems and cognitive neurosciences: To understand, using psychophysical and neurophysiological measurements, how the brain integrates tactile information with auditory and visual information during active touch and, thereby, define the requirements to produce perceptually coherent auditory-tactile and visual-tactile feedback in multisensory HCIs (multisensory TDs and VR setups).
  • Engineering and physics: To provide innovative and robust devices that deliver multisensory feedback to the user, including their control and the instrumentation hardware needed for their implementation in next-generation multisensory HCIs (multisensory TDs and VR setups).
  • Computer science: To develop interaction techniques that take advantage of multisensory HCIs, and to evaluate them in applications for the general public (consumer applications) as well as for the healthcare sector (medical applications for rehabilitation).

WP1 (Models of multisensory integration and substitution)

This Work Package will focus on understanding and exploiting the mechanisms of tactile-auditory and tactile-visual integration to reinforce the perceptual experience produced by tactile-auditory-visual multisensory HCIs, and on exploring the substitution of visual, auditory or tactile feedback by feedback originating from another sense.

D1.1 Results of psychophysics and EEG experiments assessing how concurrent auditory, visual and tactile inputs can reinforce or disrupt the haptic representation of shapes and textures
D1.2 Requirements (temporal synchronization, spatial congruency) to produce perceptually coherent auditory-tactile and visual-tactile feedback using multisensory HCIs (illustrated by the sketch after this list)
D1.3 Model of multisensory reinforcement/substitution
D1.4 Comparison of results between individuals with and without visual or hearing impairment
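
The D1.2 requirements frame perceptual coherence in terms of measurable tolerances on timing and position. As a purely illustrative sketch of how such requirements could later be encoded in software, the Python fragment below gates a pair of cross-modal events on a temporal window and a spatial offset; all identifiers (SensoryEvent, is_coherent) and all threshold values are hypothetical placeholders, not results of the WP1 experiments.

```python
from dataclasses import dataclass

@dataclass
class SensoryEvent:
    modality: str                   # "tactile", "auditory" or "visual"
    onset_ms: float                 # stimulus onset time, in milliseconds
    position: tuple[float, float]   # (x, y) on the display surface, in mm

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Euclidean distance between two points on the surface."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def is_coherent(tactile: SensoryEvent, other: SensoryEvent,
                max_asynchrony_ms: float = 50.0,
                max_offset_mm: float = 10.0) -> bool:
    """Accept a cross-modal pair only if it satisfies both tolerances.
    The default windows are placeholders, not measured values."""
    in_time = abs(tactile.onset_ms - other.onset_ms) <= max_asynchrony_ms
    in_space = distance(tactile.position, other.position) <= max_offset_mm
    return in_time and in_space

# Example: a tactile click and an audio click 20 ms apart at the same spot.
touch = SensoryEvent("tactile", onset_ms=100.0, position=(12.0, 30.0))
sound = SensoryEvent("auditory", onset_ms=120.0, position=(12.0, 30.0))
print(is_coherent(touch, sound))  # True under the placeholder tolerances
```

In practice, the tolerance windows would be the empirical outcome of the psychophysics and EEG experiments of D1.1, and would likely differ between sensory pairings and between user groups (cf. D1.4).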

WP2 (Technological bricks)

This Work Package will provide the hardware and software technological building blocks of the project. It will devise ways to implement the recommendations of WP1 in new devices and will provide the corresponding software implementation, including the development of a multisensory haptic framework (a hypothetical interface sketch follows the deliverables below).

D2.1 Design of multimodal haptic surfaces, including their sensors
D2.2 Multimodal haptic library for textures
D2.3 Optimized multimodal haptic interfaces
D2.4 Framework for multimodal haptics in VR
D2.5 Multimodal haptic framework
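
To make the notion of a multimodal haptic framework (D2.4, D2.5) more concrete, here is a minimal, purely hypothetical sketch of the kind of rendering interface such a framework could expose. Every identifier (HapticRenderer, load_texture, render) is invented for illustration and does not describe the project's actual software.

```python
from abc import ABC, abstractmethod

class HapticRenderer(ABC):
    """Renders a virtual texture on a haptic surface, to be kept in sync
    with the auditory and visual feedback channels."""

    @abstractmethod
    def load_texture(self, texture_id: str) -> None:
        """Select an entry from a multimodal texture library (cf. D2.2)."""

    @abstractmethod
    def render(self, x_mm: float, y_mm: float,
               speed_mm_s: float, t_ms: float) -> float:
        """Return the tactile actuation level for the current finger state."""

class LoggingRenderer(HapticRenderer):
    """Trivial stand-in backend that logs calls instead of driving hardware."""

    def load_texture(self, texture_id: str) -> None:
        self.texture_id = texture_id

    def render(self, x_mm, y_mm, speed_mm_s, t_ms) -> float:
        print(f"{t_ms} ms: '{self.texture_id}' under finger at ({x_mm}, {y_mm})")
        return 0.0  # a real backend would compute a friction/vibration level

renderer = LoggingRenderer()
renderer.load_texture("brushed_metal")
renderer.render(x_mm=12.0, y_mm=30.0, speed_mm_s=80.0, t_ms=0.0)
```

Separating the abstract interface from hardware-specific backends is one plausible way to let the same application code target different multimodal haptic surfaces (D2.1, D2.3).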

WP3 (Applications)

This Work Package will focus on developing practical applications of the project outcomes, thereby providing the project's showcase. The performance of multimodal HCIs following the guidelines of WP1, as well as their practical implementation, will be evaluated in real-life situations, i.e. outside the lab, in the domains of (1) HCI, (2) rehabilitation and (3) VR.

D3.1 Evaluation of eyes-free interaction techniques in two task-based applications
D3.2 Evaluation of multimodal haptics for rehabilitation and assistive technologies
D3.3 HCIs in VR for training
D3.4 Handbook of best practices for mixing tactile, visual and auditory stimuli to enhance the user experience of touch-based interaction

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860114.