Characteristics of Real-World Signal to Noise Ratios and Speech Listening Situations of Older Adults With Mild to Moderate Hearing Loss

Yu-Hsiang Wu, Elizabeth Stangl, Octav Chipara, Syed Shabih Hasan, Anne Welhaven, Jacob Oleson
2018 Ear and Hearing  
Objectives: The first objective was to determine the relationship between speech level, noise level, and signal to noise ratio (SNR), as well as the distribution of SNR, in real-world situations wherein older adults with hearing loss are listening to speech. The second objective was to develop a set of prototype listening situations (PLSs) that describe the speech level, noise level, SNR, availability of visual cues, and locations of speech and noise sources of typical speech listening situations experienced by these individuals.

Design: Twenty older adults with mild to moderate hearing loss carried digital recorders for 5 to 6 weeks to record sounds for 10 hours per day. They also completed in situ surveys on smartphones several times per day to report the characteristics of their current environments, including the locations of the primary talker (if they were listening to speech) and the noise source (if it was noisy) and the availability of visual cues. For surveys in which speech listening was indicated, the corresponding audio recording was examined. Speech-plus-noise and noise-only segments were extracted, and the SNR was estimated using a power subtraction technique. SNRs and the associated survey data were subjected to cluster analysis to develop PLSs.

Results: The speech level, noise level, and SNR of 894 listening situations were analyzed to address the first objective. Results suggested that as noise levels increased from 40 to 74 dBA, speech levels systematically increased from 60 to 74 dBA, and SNR decreased from 20 to 0 dB. Most SNRs (62.9%) of the collected recordings were between 2 and 14 dB. Very noisy situations with SNRs below 0 dB comprised 7.5% of the listening situations. To address the second objective, recordings and survey data from 718 observations were analyzed. Cluster analysis suggested that the participants' daily listening situations could be grouped into 12 clusters (i.e., 12 PLSs). The most frequently occurring PLSs were characterized as having the talker in front of the listener with visual cues available, either in quiet or in diffuse noise. The mean speech level of the PLSs that described quiet situations was 62.8 dBA, and the mean SNR of the PLSs that represented noisy environments was 7.4 dB (speech = 67.9 dBA). A subset of observations (n = 280), obtained by excluding the data collected from quiet environments, was further used to develop PLSs that represent noisier situations. From this subset, two PLSs were identified. These two PLSs had lower SNRs (mean = 4.2 dB), but the most frequent situations still involved speech from in front of the listener in diffuse noise with visual cues available.

Conclusions: The present study indicated that visual cues and diffuse noise were exceedingly common in real-world speech listening situations, while environments with negative SNRs were relatively rare. The characteristics of speech level, noise level, and SNR, together with the PLS information reported by the present study, can be useful for researchers aiming to design ecologically valid assessment procedures to estimate real-world speech communicative functions of older adults with hearing loss.

… activity categories (e.g., conversation in a group of more than three people) and five environmental categories (e.g., moving traffic), resulting in 30 unique listening situations. The frequency of occurrence of each listening situation and the mean overall sound level of several frequent situations were reported. More recently, Wolters et al. (2016) developed a common sound scenarios framework using data from the literature. Specifically, information regarding the listener's intention and task, as well as the frequency of occurrence, importance, and listening difficulty of each listening situation, was extracted or estimated from previous research. Fourteen scenarios, grouped into three intention categories (speech communication, focused listening, and nonspecific listening), were developed.
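The abstract states that SNR was estimated from paired speech-plus-noise and noise-only segments using a power subtraction technique. The sketch below shows one common form of that computation; it is a minimal illustration only, assuming uncalibrated mono recordings and hypothetical segment boundaries, and does not reproduce the study's actual segmentation or dBA calibration.

```python
# Sketch of SNR estimation by power subtraction (illustrative, not the study's pipeline).
import numpy as np
from scipy.io import wavfile

def mean_power(samples):
    """Mean power of a signal segment (arbitrary linear units)."""
    x = samples.astype(np.float64)
    return np.mean(x ** 2)

def snr_power_subtraction(speech_plus_noise, noise_only):
    """Estimate SNR (dB) from a speech-plus-noise segment and a noise-only segment.

    Speech power is approximated as the power of the mixed segment minus the
    power of the adjacent noise-only segment (power subtraction).
    """
    p_mix = mean_power(speech_plus_noise)
    p_noise = mean_power(noise_only)
    p_speech = max(p_mix - p_noise, 1e-12)  # guard against negative estimates
    return 10.0 * np.log10(p_speech / p_noise)

# Example usage with hypothetical sample-index boundaries:
# rate, audio = wavfile.read("recording.wav")
# snr_db = snr_power_subtraction(audio[10000:90000], audio[90000:120000])
```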
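The abstract also states that SNRs and the associated survey data (talker location, noise source location, visual cues) were subjected to cluster analysis to derive PLSs. As a hedged illustration of that idea only, the sketch below clusters a few hypothetical observations with k-means on standardized numeric features and one-hot-encoded survey answers; the study's actual clustering method, preprocessing, and data are not specified in this excerpt, so all variable names and values here are made up.

```python
# Minimal clustering sketch for prototype listening situations (illustrative only).
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans

# Hypothetical observations (values are invented, not from the study).
data = pd.DataFrame({
    "speech_dba":  [62.5, 68.0, 70.2, 61.0],
    "noise_dba":   [45.0, 60.5, 66.0, 44.0],
    "snr_db":      [17.5,  7.5,  4.2, 17.0],
    "talker_loc":  ["front", "front", "side", "front"],
    "noise_loc":   ["none", "diffuse", "diffuse", "none"],
    "visual_cues": ["yes", "yes", "no", "yes"],
})

numeric = ["speech_dba", "noise_dba", "snr_db"]
categorical = ["talker_loc", "noise_loc", "visual_cues"]

# Standardize levels/SNR and one-hot encode the categorical survey answers.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(), categorical),
])

# Fit a small number of clusters for this toy data; the study settled on
# 12 prototype listening situations (PLSs) for its full data set.
model = make_pipeline(preprocess, KMeans(n_clusters=2, n_init=10, random_state=0))
labels = model.fit_predict(data)
print(labels)
```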
doi:10.1097/aud.0000000000000486 pmid:29466265 pmcid:PMC5824438