
ICSA 2017
Sep 7th to 10th, 2017
Graz, Austria


An open-source C++ library for audio spatialization and simulation of hearing loss and hearing aids

The EU-funded 3D Tune-In (http://www.3d-tune-in.eu/) project introduces a novel approach using 3D sound, visuals and gamification techniques to support people using hearing aid devices. In order to achieve a high level of realism and immersiveness within the 3D audio simulations (both loudspeaker- and headphone-based), and to allow for the emulation (within the virtual environment) of hearing aid devices and of different typologies of hearing loss, a custom open-source C++ library (the 3D Tune-In Toolkit) has been developed. The functionalities of the 3DTI Toolkit are summarised as follows:

Binaural spatialisation - Efficient spatialisation of anechoic sound files is performed by convolving them with Head Related Impulse Responses (HRIRs) corresponding to the desired source positions, interpolated from those of a Head Related Transfer Function (HRTF) selected by the user. The Toolkit can also add an extra shadow at the contralateral ear for sources very close to the listener's head, according to a sound propagation model. Furthermore, the ITD (Interaural Time Difference) can be re-computed to match a custom head circumference entered by the user (an illustrative sketch of this approach follows this overview). Distance simulation can also be performed for close and far sources. In addition to the anechoic spatialisation, the 3DTI Toolkit integrates binaural reverberation capabilities by convolving anechoic sources with room impulse responses. Using a novel approach based on a low-order Ambisonic encoding, reverberation is generated for all sources at the same time, while keeping certain location-dependent characteristics. This approach, together with an efficient convolution algorithm in the frequency domain, allows the Toolkit to compute large reverberant scenes, with a virtually unlimited number of sources, while maintaining high spatial accuracy for the direct sound (spatialised using direct HRTF convolution).

Loudspeaker spatialisation - The 3DTI Toolkit can also perform loudspeaker-based sound spatialisation. This has been implemented using the Ambisonic technique. Multiple sources are encoded in a 2nd-order Ambisonic stream, which is then decoded for various loudspeaker configurations, allowing the user to customise each speaker's position (see the Ambisonic encoding sketch below). The reverberation is generated using a similar approach to the one used for the binaural setup (i.e. based on virtual loudspeakers), making it possible to simulate virtual environments in the Ambisonic domain with a fixed number of real-time convolutions, independently of the number of sources to be spatialised.

Hearing loss simulator - This includes frequency filters (e.g. parametric and graphic equalisers), a dynamic range compressor/expander, non-linear distortion, and degradation of the temporal and spatial resolution.

Hearing aid simulator - This includes functions such as selective amplification, high/low-pass filters, dynamic equalisation, directional processing (e.g. omnidirectional and cardioid), dynamic range compression/expansion and re-quantisation (i.e. bit-depth reduction). A sketch of a basic dynamic range compressor, as used in both simulators, follows below.

The 3DTI Toolkit can be integrated with other development tools through a series of wrappers, which include Unity (for Windows, MacOS, iOS and Android), Javascript, Pure Data, Max/MSP and C++.
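As an illustration of the binaural approach described above, the following minimal C++ sketch (not the 3DTI Toolkit's actual API; every name and parameter here is hypothetical) convolves a mono block with a left/right HRIR pair and delays the contralateral ear by an ITD derived from the head radius. The ITD model used here is the well-known Woodworth spherical-head formula, one common way of re-computing the ITD for a custom head circumference; the Toolkit's own model may differ.

    // Hypothetical sketch of HRIR-based binaural spatialisation (not the 3DTI API).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Hrir {
        std::vector<float> left;   // left-ear impulse response
        std::vector<float> right;  // right-ear impulse response
    };

    // Woodworth-style spherical-head ITD in samples:
    // itd = (r / c) * (|theta| + sin|theta|), theta = source azimuth in radians.
    std::size_t ItdSamples(float azimuthRad, float headRadiusM, float sampleRate) {
        const float c = 343.0f;  // speed of sound in m/s
        float a = std::fabs(azimuthRad);
        float itdSeconds = (headRadiusM / c) * (a + std::sin(a));
        return static_cast<std::size_t>(itdSeconds * sampleRate);
    }

    // Plain time-domain convolution of one block with one impulse response.
    std::vector<float> Convolve(const std::vector<float>& x, const std::vector<float>& h) {
        std::vector<float> y(x.size() + h.size() - 1, 0.0f);
        for (std::size_t n = 0; n < x.size(); ++n)
            for (std::size_t k = 0; k < h.size(); ++k)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // Spatialise a mono block: convolve with the HRIR pair, then delay the far ear.
    void Spatialise(const std::vector<float>& mono, const Hrir& hrir,
                    float azimuthRad, float headRadiusM, float sampleRate,
                    std::vector<float>& outLeft, std::vector<float>& outRight) {
        outLeft  = Convolve(mono, hrir.left);
        outRight = Convolve(mono, hrir.right);
        std::size_t itd = ItdSamples(azimuthRad, headRadiusM, sampleRate);
        // Positive azimuth = source on the right, so the left ear is contralateral.
        std::vector<float>& far = (azimuthRad >= 0.0f) ? outLeft : outRight;
        far.insert(far.begin(), itd, 0.0f);  // prepend the interaural delay
    }

In a real-time implementation the convolution would be performed block-wise in the frequency domain, as the abstract notes, rather than with the naive time-domain loop shown here.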
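The loudspeaker pipeline rests on Ambisonic encoding. The sketch below (again hypothetical code, not the Toolkit's implementation, and first-order rather than the 2nd order mentioned above) shows how a single mono source is encoded into a B-format stream. Several encoded sources are mixed by summing their channels, and the mixed stream is decoded once per loudspeaker, which is why the number of real-time convolutions stays fixed regardless of the number of sources.

    // Hypothetical sketch of first-order Ambisonic (B-format) encoding.
    #include <cmath>
    #include <vector>

    struct BFormat {
        std::vector<float> w, x, y, z;  // omnidirectional + three figure-of-eight channels
    };

    // Encode a mono signal at the given azimuth/elevation (radians) into B-format.
    BFormat EncodeFirstOrder(const std::vector<float>& mono, float azimuth, float elevation) {
        BFormat out;
        const float gW = 1.0f / std::sqrt(2.0f);                 // W gain (omni)
        const float gX = std::cos(azimuth) * std::cos(elevation);
        const float gY = std::sin(azimuth) * std::cos(elevation);
        const float gZ = std::sin(elevation);
        for (float s : mono) {
            out.w.push_back(s * gW);
            out.x.push_back(s * gX);
            out.y.push_back(s * gY);
            out.z.push_back(s * gZ);
        }
        return out;
    }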
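Both the hearing loss and hearing aid simulators list dynamic range compression/expansion among their functions. The following sketch shows a basic feed-forward compressor of that general kind; the threshold, ratio and envelope smoothing constant are hypothetical example values, not parameters of the Toolkit.

    // Hypothetical sketch of a feed-forward dynamic range compressor.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    std::vector<float> Compress(const std::vector<float>& in,
                                float thresholdDb = -20.0f,  // level above which gain is reduced
                                float ratio = 4.0f,          // compression ratio
                                float envCoeff = 0.99f) {    // envelope smoothing constant
        std::vector<float> out(in.size());
        float env = 0.0f;  // smoothed signal envelope (linear amplitude)
        for (std::size_t n = 0; n < in.size(); ++n) {
            env = envCoeff * env + (1.0f - envCoeff) * std::fabs(in[n]);
            float levelDb = 20.0f * std::log10(env + 1e-9f);
            float gainDb = 0.0f;
            if (levelDb > thresholdDb)
                gainDb = (thresholdDb - levelDb) * (1.0f - 1.0f / ratio);  // attenuate above threshold
            out[n] = in[n] * std::pow(10.0f, gainDb / 20.0f);
        }
        return out;
    }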
A first version of the 3D Tune-In Toolkit Test App has been released on the project website (http://www.3d-tune-in.eu/toolkit-developers), while the open-source release (including the wrappers) will arrive before May 2018. During the conference, after an overview of the Toolkit functionalities, attendees will be invited to install the Toolkit Test App directly on their machines and try a few demos using gyroscopes and/or mobile phones as tracking devices.