Access to speech reading and efficient integration of visual information with auditory input is essential for enhanced understanding of speech in people with hearing loss, who naturally receive degraded auditory cues from the environment. Healthy individuals also use lip-reading in daily face-to-face communication, and especially benefit from it when the acoustic context is ambiguous, such as when exposed to rapid speech, speech in a non-native language, background noise and/or multiple speakers talking simultaneously 3, 4, 5. With visual cues present, including both lip-reading and watching gestures, understanding speech in noise has consistently been found to improve, in both healthy individuals and patients with hearing loss 1, 6, 7, 8, 9, 10. In today’s society, challenging acoustic conditions occur increasingly often, including exposure to multiple concurrent auditory streams and almost constant exposure to noise. At the same time, the prevalence of hearing loss is growing (WHO 2020), a condition which, if left untreated, can accelerate cognitive decline, social isolation and depression 11, 12, 13, 14. Meanwhile, the COVID-19 pandemic imposed on us the obligation of social distancing and of wearing masks covering the mouth. Both of these restrictions reduce the transmission of sounds and prevent access to visual cues from speech/lip-reading, thereby further aggravating these difficulties 1, 2. In addition, many users of modern hearing aids and cochlear implants complain that their devices fail to effectively compensate for their hearing loss when they are exposed to ambiguous acoustic situations 15, 16, 17. All this indicates the importance of developing novel training methods and devices that can be employed to improve communication.
The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70–80%) showed better performance (by 4–6 dB on average) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The smallest effect of both training types was found in the third test condition, i.e., when participants were repeating sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical “critical periods” of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as in healthy individuals in suboptimal acoustic situations.
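The SNR figures above can be made concrete: presenting speech at a target signal-to-noise ratio means scaling the noise relative to the speech power, and a lower SNR in dB means proportionally louder noise. A minimal sketch of this relationship (not the authors' calibration code; it assumes an RMS-based SNR definition):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (RMS-based definition, an assumption here), then return the mixture."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # A 20*log10 ratio of RMS amplitudes equals snr_db when the noise RMS
    # is speech_rms / 10**(snr_db / 20).
    target_noise_rms = speech_rms / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_noise_rms / noise_rms)
```

Under this definition, the audio-tactile training condition (mean SNR 14.7 dB) contained objectively more noise relative to the speech than the auditory-only condition (23.9 dB), which is why a lower SNR is described as harder.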
Understanding speech in background noise is challenging, and wearing face masks, imposed by the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30–45 min) of repeating sentences, with or without concurrent matching vibrations, we showed a comparable mean group improvement of 14–16 dB in Speech Reception Threshold (SRT) in two test conditions: when the participants were asked to repeat sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness.
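The core signal path of such an SSD, deriving a low-frequency drive signal for the fingertips from the speech waveform, can be sketched with a simple low-pass filter. This is an illustrative reconstruction, not the paper's implementation: the 250 Hz cutoff is an assumption (fingertip vibration sensitivity peaks in the low hundreds of Hz), and the filter order and type are arbitrary choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def speech_to_vibration(speech, fs, cutoff_hz=250.0):
    """Keep only the low-frequency content of a speech waveform,
    suitable for driving a vibrotactile actuator.
    cutoff_hz is a hypothetical parameter, not taken from the study."""
    # 4th-order Butterworth low-pass; zero-phase filtering keeps the
    # vibration aligned in time with the audio it accompanies.
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)
```

Zero-phase filtering matters here because the study's multisensory benefit depends on the vibrations matching the concurrent audio; a filter with large group delay would desynchronize the two streams.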