WeVoice R&D develops a robust, fast stress detection model based on voice biofeedback.
"Successfully validate and develop a generative sound-system powered with adaptive 3D therapies in real-time based on biofeedback (Voice-AI) for mental health"
The general objective of the project is to develop an Artificial Intelligence algorithm that reduces the mental health treatment gap through an adaptive sound-based healing solution, powered by Voice Emotion-AI and informed by Cognitive-Behavioral Therapy, Neurolinguistic Programming, and Rapid Personal Transformation, making mental health care accessible to anyone. A first objective is to validate that our algorithms can recognize emotions through speech analysis, in order to offer specific, personalized solutions through audio. A multimodal, data-fusion Deep Learning model combines biofeedback, Artificial Intelligence, and sound to identify how the structural properties of sound correlate with the body's response, both physiological and emotional. Among the project's objectives, one fundamental key is the clinical validation of the developed algorithms and of the prototype therapies.
WP1: Clinical analysis and generation of biomarkers in a representative population of users to validate their response to sound therapy stimuli (WeVoice & MuArts)
WP2: Automatic generation of binaural 3D audio with applications to biofeedback (WeVoice & Fundació Eurecat)
WP3: Clinical validation of the impact of sound therapies on stress reduction (WeVoice)
WP4: Design and develop an automatic system for the generation of personalized sound therapies (WeVoice)
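As a minimal illustration of the kind of audio generation WP2 targets, the sketch below synthesizes a stereo binaural-beat signal with NumPy: each ear receives a pure tone, with the right ear offset by the desired beat frequency. The carrier and beat values are illustrative defaults, not project parameters.

```python
import numpy as np

def binaural_beat(carrier_hz=220.0, beat_hz=10.0, duration_s=2.0, sr=44100):
    """Generate a stereo binaural-beat signal: the left ear hears the
    carrier tone and the right ear a tone offset by beat_hz, so the
    listener perceives a beat at beat_hz (alpha band with these defaults)."""
    t = np.arange(int(duration_s * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

stereo = binaural_beat()
```

The returned array can be written to a WAV file or streamed to an audio device; a real therapy generator would additionally shape amplitude envelopes and transitions over the session.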
Clinical validation of the correlation between vocal biomarkers of stress & classical biomarkers of stress:
Endocrine system response (stress hormone dynamics)
Electro Dermal Activity
Galvanic Skin Response
Heart Rate Variability
Support in changing habits, identifying patterns and triggers.
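Of the classical biomarkers above, heart rate variability is the easiest to sketch in code. Below is a minimal, illustrative computation of RMSSD, a standard time-domain HRV index; the R-R interval values are made up for the example and are not project data.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals,
    a standard time-domain HRV index (higher RMSSD generally reflects
    more parasympathetic activity, i.e. lower acute stress)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)  # successive beat-to-beat differences
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical R-R intervals in milliseconds from a short recording.
value = rmssd([812, 790, 805, 830, 795, 810])
```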
Multimodal clinical validation:
To improve the efficiency of the sound healing therapies, we are researching how to maximize the effect of auditory beat stimulation techniques in three areas:
Neuro-biofeedback analysis: We are performing a neuro-biofeedback analysis to validate the effect of the sound healing therapies on the human body.
Audio auto-generation: We are researching music generation models to generate new and diverse sound therapies.
Real-time therapy tuning: We are developing experiments to tune critical variables of the sound therapies so that sound healing sessions can adapt in real time.
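As an illustration of what real-time tuning could look like, the toy rule below nudges a session's binaural beat frequency according to a normalized stress index. The target, step size, and frequency bounds are hypothetical and stand in for the project's actual control variables.

```python
def tune_beat_frequency(current_hz, stress_index, target=0.5, step=0.5,
                        lo=4.0, hi=12.0):
    """Toy proportional rule: if the measured stress index (0..1) is above
    the target, lower the beat frequency toward the theta range; otherwise
    let it drift back up. Bounds keep the beat inside a plausible band."""
    delta = step * (target - stress_index)
    return min(hi, max(lo, current_hz + delta))

# High stress pulls the beat frequency down between updates.
next_hz = tune_beat_frequency(10.0, stress_index=0.9)
```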
Multimodal Data Fusion:
We correlate the speech and language features with the neuro-biofeedback analysis to design more effective sound healing programs.
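A minimal sketch of that correlation step, assuming per-session scalar features: Pearson's r between one vocal feature and one physiological index. The feature names and values below are hypothetical, chosen only to show the computation.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two feature vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical per-session values: a vocal stress feature vs. an HRV index.
speech_feature = [0.42, 0.51, 0.38, 0.61, 0.55]
hrv_index = [31.0, 26.5, 34.2, 22.1, 24.8]
r = pearson_r(speech_feature, hrv_index)
```

A full fusion model would of course combine many features per modality, but a feature-by-feature correlation matrix of this kind is a common first screening step.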
Why vocal biomarkers?
Stress is an established risk factor for vocal symptoms. Smartphone-based self-assessed stress has been shown to correlate with voice features, and a positive correlation between stress levels and the duration of verbal interaction has also been reported. Voice symptoms appear more frequent in people with high cortisol levels, which is common in patients with depression; voice characteristics are therefore used to detect depression symptoms or estimate depression severity. The second dimension of a Mel-Frequency Cepstrum Coefficient (MFCC) decomposition of the audio signal has been shown to discriminate depressive patients from controls. An automated telephone system has been successfully tested to assess biologically based vocal acoustic measures of depression severity and treatment response, and to compute a post-traumatic stress disorder mental health score. Besides acoustic measures, the linguistic aspects of voice are likely to be affected in mental diseases. Discourse tends to be incoherent in schizophrenia, manifested by a disjointed flow of ideas, nonsensical associations between words, or digressions from the topic. Circumstantial speech is prominent in patients with bipolar and histrionic personality disorders. Recent methodological developments have also improved emotion recognition accuracy, bringing the field to sufficient maturity for medical research to monitor patients between visits or to gather real-life information in clinical or epidemiological studies.
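Since the paragraph above leans on MFCCs, here is a minimal NumPy-only sketch of the standard MFCC pipeline (framing, power spectrum, mel filterbank, log compression, DCT-II). The parameters are common textbook defaults, not the project's; a production system would use a tuned library implementation.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Minimal MFCC pipeline: frame -> power spectrum -> mel filterbank
    -> log compression -> DCT-II. Returns an (n_frames, n_mfcc) array."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hamming(n_fft), n_fft)) ** 2 / n_fft
    # Triangular filters equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, mid, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, mid):
            fbank[m - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - mid, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the mel bands, keeping the first n_mfcc coefficients.
    n = np.arange(n_mels)
    basis = np.cos(np.pi / n_mels * (n[None, :] + 0.5)
                   * np.arange(n_mfcc)[:, None])
    return log_mel @ basis.T

# One second of a 440 Hz sine as a stand-in for recorded speech.
tone = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
coeffs = mfcc(tone)
```

The "second dimension" discussed in the literature corresponds to one coefficient column of such a matrix, summarized over time (e.g. its mean per recording).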
In the context of voice, a vocal biomarker is a signature, a feature, or a combination of features from the audio signal of the voice that is associated with a clinical outcome and can be used to monitor patients, diagnose a condition, grade the severity or stage of a disease, or support drug development. It has all the properties of a traditional biomarker: it is validated analytically, qualified through an evidentiary assessment, and utilized in practice. [Fagherazzi G, Fischer A, Ismael M, Despotovic V: Voice for Health: The Use of Vocal Biomarkers from Research to Clinical Practice. Digit Biomark 2021;5:78-88]