Title: | Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition (2020) |
Authors: | Gavin Bidelman ; Bonnie Brown ; Kelsey Mankel ; Caitlin Nelms Price |
Document type: | Article |
In: | Ear and Hearing (Vol. 41, no. 2, March-April 2020) |
Pages: | p. 268-277 |
Languages: | English |
Descriptors: |
HE Vinci: Listening activity ; Comprehension in noise ; Tracking
Other descriptors: Audiovisual speech perception ; Hearing-vision relationship |
Abstract: |
Objectives: In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating that audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s).
Design: Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation, including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eye tracking monitored participants' gaze to different parts of a talker's face during SIN perception.
Results: As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (and decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus the eyes of the face also depended on the gender of the talker.
Conclusions: Collectively, the results suggest that listeners (1) depend heavily on the auditory over the visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker under signal degradation, which negatively affects speech perception. |
Available online: | Yes |
Online: | https://login.ezproxy.vinci.be/login?url=https://ovidsp.ovid.com/ovidweb.cgi?T=JS&CSC=Y&NEWS=N&PAGE=fulltext&D=yrovftv&AN=00003446-202003000-00006 |