SONICOM

SONICOM (https://www.sonicom.eu/) is a five-year research project funded under the Horizon 2020 FET Proactive – Boosting emerging technologies call (H2020-FETPROACT-2018-2020).

Immersive audio refers to our everyday experience of hearing and interacting with sounds located all around us. Simulating spatially located sounds in virtual or augmented reality (VR/AR) must be tailored to each individual listener, and currently requires expensive, time-consuming individual measurements, making it commercially unfeasible. The impact of immersive audio beyond perceptual metrics such as presence and localisation is still an unexplored area of research, particularly in relation to social interaction, where it enters the behavioural and cognitive realms.

SONICOM will revolutionise the way we interact socially within AR/VR environments and applications by leveraging Artificial Intelligence to design a new generation of immersive audio technologies and techniques, focusing specifically on personalisation and customisation of the audio rendering. Using a data-driven approach, it will explore, map, and model how the physical characteristics of spatialised auditory stimuli can influence observable behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios.

The developed techniques and models will be evaluated in an ecologically valid manner, using both AR/VR simulations and real-life scenarios, and appropriate hardware and software proofs of concept will be developed.

Finally, to reinforce reproducible research practices and promote future development and innovation in the area of auditory-based social interaction, the SONICOM Ecosystem will be created, comprising auditory data closely linked with model implementations and immersive audio rendering components.

Details

Timespan: January 2021 to June 2026

Status: In progress

Funded by: Horizon 2020

Funding Type: European