Friday, May 17, 2019

Ear and Hearing

Background Speech Disrupts Working Memory Span in 5-Year-Old Children
Objectives: The present study tested the effects of background speech and nonspeech noise on 5-year-old children's working memory span. Design: Five-year-old typically developing children (range = 58.6 to 67.6 months; n = 94) completed a modified version of the Missing Scan Task, a missing-item working memory task, in quiet and in the presence of two types of background noise: male two-talker speech and speech-shaped noise. The two types of background noise had similar spectral composition and overall intensity characteristics but differed in whether they contained verbal content. In Experiments 1 and 2, children's memory span (i.e., the largest set size of items children successfully recalled) was subjected to analyses of variance designed to look for an effect of listening condition (within-subjects factor: quiet, background noise) and an effect of background noise type (between-subjects factor: two-talker speech, speech-shaped noise). Results: In Experiment 1, children's memory span declined in the presence of two-talker speech but not in the presence of speech-shaped noise. This result was replicated in Experiment 2 after accounting for a potential effect of proactive interference due to repeated administration of the Missing Scan Task. Conclusions: Background speech, but not speech-shaped noise, disrupted working memory span in 5-year-old children. These results support the idea that background speech engages domain-general cognitive processes used during the recall of known objects in a way that speech-shaped noise does not.
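The design described above, with listening condition as a within-subjects factor crossed with noise type as a between-subjects factor, is a standard mixed ANOVA. A minimal sketch of that analysis follows; the column names, the fabricated data, and the choice of the pingouin library are assumptions for illustration, not taken from the study.

import numpy as np
import pandas as pd
import pingouin as pg

# Fabricated long-format data: one memory-span value per child and listening condition.
rng = np.random.default_rng(0)
rows = []
for noise_type in ("two_talker", "speech_shaped"):
    for i in range(20):
        child = f"{noise_type}_{i}"
        quiet_span = rng.normal(5.0, 1.0)
        penalty = 1.0 if noise_type == "two_talker" else 0.0   # built-in effect, purely illustrative
        rows.append({"child": child, "noise_type": noise_type, "condition": "quiet", "span": quiet_span})
        rows.append({"child": child, "noise_type": noise_type, "condition": "noise",
                     "span": quiet_span - penalty + rng.normal(0.0, 0.5)})
df = pd.DataFrame(rows)

# Mixed ANOVA: listening condition varies within subjects, noise type varies between subjects.
aov = pg.mixed_anova(data=df, dv="span", within="condition", between="noise_type", subject="child")
print(aov.round(3))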

Objective Comparison of the Quality and Reliability of Auditory Brainstem Response Features Elicited by Click and Speech Sounds
Objectives: Auditory brainstem responses (ABRs) are commonly generated using simple, transient stimuli (e.g., clicks or tone bursts). While resulting waveforms are undeniably valuable clinical tools, they are unlikely to be representative of responses to more complex, behaviorally relevant sounds such as speech. There has been interest in the use of more complex stimuli to elicit the ABR, with considerable work focusing on the use of synthetically generated consonant–vowel (CV) stimuli. Such responses may be sensitive to a range of clinical conditions and to the effects of auditory training. Several ABR features have been documented in response to CV stimuli; however, an important issue is how robust such features are. In the current research, we use time- and frequency-domain objective measures of quality to compare the reliability of Wave V of the click-evoked ABR to that of waves elicited by the CV stimulus /da/. Design: Stimuli were presented to 16 subjects at 70 dB nHL in quiet for 6000 epochs. The presence and quality of response features across subjects were examined using Fsp and a Bootstrap analysis method, which was used to assign p values to ABR features for individual recordings in both time and frequency domains. Results: All consistent peaks identified within the /da/-evoked response had significantly lower amplitude than Wave V of the ABR. The morphology of speech-evoked waveforms varied across subjects. Mean Fsp values for several waves of the speech-evoked ABR were below 3, suggesting low quality. The most robust response to the /da/ stimulus appeared to be an offset response. Only click-evoked Wave V showed 100% wave presence. Responses to the /da/ stimulus showed lower wave detectability. Frequency-domain analysis showed stronger and more consistent activity evoked by clicks than by /da/. Only the click ABR had consistent time–frequency domain features across all subjects. Conclusions: Based on the objective analysis used within this investigation, it appears that the quality of speech-evoked ABR is generally less than that of click-evoked responses, although the quality of responses may be improved by increasing the number of epochs or the stimulation level. This may have implications for the clinical use of speech-evoked ABR.
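The Fsp statistic mentioned above is a conventional signal-to-noise quality measure for averaged evoked responses: the variance of the averaged waveform divided by an estimate of the residual noise variance taken from a single time point across epochs. A minimal sketch, assuming array shapes and a single-point index chosen arbitrarily for illustration (not the recording parameters of the study):

import numpy as np

def fsp(epochs, sp_index):
    """Fsp quality statistic for an averaged evoked response.
    epochs:   array of shape (n_epochs, n_samples) holding single-trial recordings
    sp_index: index of the 'single point' used to estimate background noise
    """
    n_epochs = epochs.shape[0]
    average = epochs.mean(axis=0)                             # averaged waveform
    signal_var = average.var(ddof=1)                          # variance of the average across time
    noise_var = epochs[:, sp_index].var(ddof=1) / n_epochs    # residual noise left in the average
    return signal_var / noise_var

# Illustration with pure noise: Fsp hovers around 1; values above about 3 are often
# taken to indicate a usable response, which is the criterion referenced in the abstract.
rng = np.random.default_rng(0)
print(fsp(rng.normal(size=(6000, 512)), sp_index=256))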

Working Memory and Extended High-Frequency Hearing in Adults: Diagnostic Predictors of Speech-in-Noise Perception
Objective: The purpose of this study was to identify the main factors that differentiate listeners with clinically normal or "near-normal" hearing with regard to their speech-in-noise perception and to develop a regression model to predict speech-in-noise difficulties in this population. We also aimed to assess the potential effectiveness of the formula produced by the regression model as a "diagnostic criterion" for clinical use. Design: Data from a large-scale behavioral study investigating the relationship between noise exposure and auditory processing in 122 adults (30 to 57 years) were re-examined. For each participant, a composite speech-in-noise score (CSS) was calculated based on scores from three speech-in-noise measures: (a) the Speech, Spatial and Qualities of Hearing scale (average of speech items); (b) the Listening in Spatialized Noise Sentences test (high-cue condition); and (c) the National Acoustic Laboratories Dynamic Conversations Test. Two subgroups were created based on the CSS, each comprising 30 participants: those with the lowest scores and those with the highest scores. These two groups were compared for differences in hearing thresholds, temporal perception, noise exposure, attention, and working memory. They differed significantly on age, low-, high-, and extended high-frequency (EHF) hearing level, sensitivity to temporal fine structure and amplitude modulation, linguistic closure skills, attention, and working memory. A multiple linear regression model was fit with these nine variables as predictors to determine their relative effect on the CSS. The two significant predictors, EHF hearing and working memory, from this regression were then used to fit a second smaller regression model. The resulting regression formula was assessed for its usefulness as a "diagnostic criterion" for predicting speech-in-noise difficulties using Monte Carlo cross-validation (root mean square error and area under the receiver operating characteristics curve methods) in the complete data set. Results: EHF hearing thresholds (p = 0.01) and working memory scores (p < 0.001) were significant predictors of the CSS and the regression model accounted for 41% of the total variance [R2 = 0.41, F(9,112) = 7.57, p < 0.001]. The overall accuracy of the diagnostic criterion for predicting the CSS and for identifying "low" CSS performance, using these two factors, was reasonable (area under the receiver operating characteristics curve = 0.76; root mean square error = 0.60). Conclusions: These findings suggest that both peripheral (auditory) and central (cognitive) factors contribute to the speech-in-noise difficulties reported by normal hearing adults in their mid-adult years. The demonstrated utility of the diagnostic criterion proposed here suggests that audiologists should include assessment of EHF hearing and working memory as part of routine clinical practice with this population. The "diagnostic criterion" we developed based on these two factors could form the basis of future clinical tests and rehabilitation tools and be used in evidence-based counseling for normal hearers who present with unexplained communication difficulties in noise.
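The validation procedure described above, a two-predictor linear regression scored with Monte Carlo cross-validation, root mean square error, and ROC area, can be sketched as follows. Everything in this example is an assumption: the predictor distributions, the simulated relationship to the CSS, and the cutoff used to label "low" performers; it only illustrates the workflow, not the study's actual model or data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 122
ehf = rng.normal(25, 12, n)            # fabricated EHF thresholds (dB HL)
wm = rng.normal(0, 1, n)               # fabricated working-memory z-scores
css = -0.02 * ehf + 0.5 * wm + rng.normal(0, 0.5, n)   # fabricated composite score
X = np.column_stack([ehf, wm])

rmses, aucs = [], []
for _ in range(500):                   # Monte Carlo cross-validation
    X_tr, X_te, y_tr, y_te = train_test_split(X, css, test_size=0.3)
    pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
    rmses.append(mean_squared_error(y_te, pred) ** 0.5)
    low = y_te < np.quantile(css, 0.25)          # arbitrary "low CSS" label
    if low.any() and not low.all():
        aucs.append(roc_auc_score(low, -pred))   # lower predicted CSS -> more likely "low"
print(f"mean RMSE = {np.mean(rmses):.2f}, mean AUC = {np.mean(aucs):.2f}")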

Time From Hearing Aid Candidacy to Hearing Aid Adoption: A Longitudinal Cohort Study
Objectives: Although many individuals with hearing loss could benefit from intervention with hearing aids, many do not seek or delay seeking timely treatment after the onset of hearing loss. There is limited data-based evidence estimating the delay in adoption of hearing aids, with anecdotal estimates ranging from 5 to 20 years. The present longitudinal study is the first to assess time from hearing aid candidacy to adoption in a 28-year ongoing prospective cohort of older adults, with the additional goals of determining factors influencing delays in hearing aid adoption and self-reported successful use of hearing aids. Design: As part of a longitudinal study of age-related hearing loss, a wide range of demographic, biologic, and auditory measures are obtained yearly or every 2 to 3 years from a large sample of adults, along with family, medical, hearing, noise exposure, and hearing aid use histories. From all eligible participants (age ≥18; N = 1530), 857 were identified as hearing aid candidates either at baseline or during their participation, using audiometric criteria. Longitudinal data were used to track transition to hearing aid candidacy and hearing aid adoption. Demographic and hearing-related characteristics were compared between hearing aid adopters and nonadopters. Unadjusted estimated overall time (in years) to hearing aid adoption and estimated delay times were stratified by demographic and hearing-related factors and were determined using a time-to-event analysis (survival analysis). Factors influencing rate of adoption in any given time period were examined along with factors influencing successful hearing aid adoption. Results: Age, number of chronic health conditions, sex, retirement status, and education level did not differ significantly between hearing aid adopters and nonadopters. In contrast, adopters were more likely than nonadopters to be married, to be of white race, to have higher socioeconomic status, to have significantly poorer higher frequency (2.0, 3.0, 4.0, 6.0, and 8.0 kHz) pure-tone averages and poorer word recognition in quiet and in competing multi-talker babble, and to report more hearing handicap on the Hearing Handicap Inventory for the Elderly/Adults emotional and social subscales. Unadjusted estimation of time from hearing aid candidacy to adoption in the full participant cohort was 8.9 years (SE ± 0.37; interquartile range = 3.2–14.9 years) with statistically significant stratification for race, hearing as measured by low- and high-frequency pure-tone averages, keyword recognition in low-context sentences in babble, and the Hearing Handicap Inventory for the Elderly/Adults social score. In a subgroup analysis of the 213 individuals who adopted hearing aids and were assigned a success classification, 78.4% were successful. No significant predictors of success were found. Conclusions: The average delay in adopting hearing aids after hearing aid candidacy was 8.9 years. Nonwhite race and better speech recognition (in a more difficult task) significantly increased the delay to treatment. Poorer hearing and more self-assessed hearing handicap in social situations significantly decreased the delay to treatment. These results confirm the assumption that adults with hearing loss significantly delay seeking treatment with hearing aids.
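Time-to-event (survival) analysis of the kind described above treats participants who never adopt hearing aids during follow-up as censored observations. A minimal sketch using the lifelines package (our choice of library; the study does not specify software), with fabricated data and column names that are assumptions rather than the study's variables:

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Fabricated cohort: years from candidacy to adoption, an event indicator (0 = censored), and race.
rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "years": rng.exponential(9.0, n),
    "adopted": (rng.random(n) < 0.7).astype(int),
    "white": rng.random(n) < 0.8,
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["years"], event_observed=df["adopted"])
print(kmf.median_survival_time_)   # unadjusted median delay, analogous to the 8.9 years reported

# Stratified comparison (e.g., by race) with a log-rank test.
a, b = df[df["white"]], df[~df["white"]]
res = logrank_test(a["years"], b["years"],
                   event_observed_A=a["adopted"], event_observed_B=b["adopted"])
print(res.p_value)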

Voice Emotion Recognition by Children With Mild-to-Moderate Hearing Loss
Objectives: Emotional communication is important in children's social development. Previous studies have shown deficits in voice emotion recognition by children with moderate-to-severe hearing loss or with cochlear implants. Little, however, is known about emotion recognition in children with mild-to-moderate hearing loss. The objective of this study was to compare voice emotion recognition by children with mild-to-moderate hearing loss relative to their peers with normal hearing, under conditions in which the emotional prosody was either more or less exaggerated (child-directed or adult-directed speech, respectively). We hypothesized that the performance of children with mild-to-moderate hearing loss would be comparable to their normally hearing peers when tested with child-directed materials but would show significant deficits in emotion recognition when tested with adult-directed materials, which have reduced prosodic cues. Design: Nineteen school-aged children (8 to 14 years of age) with mild-to-moderate hearing loss and 20 children with normal hearing aged 6 to 17 years participated in the study. A group of 11 young, normally hearing adults was also tested. Stimuli comprised sentences spoken in one of five emotions (angry, happy, sad, neutral, and scared), either in a child-directed or in an adult-directed manner. The task was a single-interval, five-alternative forced-choice paradigm, in which the participants heard each sentence in turn and indicated which of the five emotions was associated with that sentence. Reaction time was also recorded as a measure of cognitive load. Results: Acoustic analyses confirmed the exaggerated prosodic cues in the child-directed materials relative to the adult-directed materials. Results showed significant effects of age, specific emotion (happy, sad, etc.), and test materials (better performance with child-directed materials) in both groups of children, as well as susceptibility to talker variability. Contrary to our hypothesis, no significant differences were observed between the 2 groups of children in either emotion recognition (percent correct or d' values) or in reaction time, with either child- or adult-directed materials. Among children with hearing loss, degree of hearing loss (mild or moderate) did not predict performance. In children with hearing loss, interactions between vocabulary, materials, and age were observed, such that older children with stronger vocabulary showed better performance with child-directed speech. Such interactions were not observed in children with normal hearing. The pattern of results was broadly consistent across the different measures of accuracy, d', and reaction time. Conclusions: Children with mild-to-moderate hearing loss do not have significant deficits in overall voice emotion recognition compared with their normally hearing peers, but mechanisms involved may be different between the 2 groups. The results suggest a stronger role for linguistic ability in emotion recognition by children with normal hearing than by children with hearing loss.
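In a single-interval, five-alternative task like the one described above, percent-correct scores are often converted to d' under an equal-variance Gaussian decision model, in which a trial is correct when the target's decision variable exceeds those of the four nontarget alternatives. A small Monte Carlo sketch of that standard mapping (not the authors' procedure):

import numpy as np

def pc_for_dprime(dprime, n_alt=5, n_trials=200_000, seed=0):
    """Proportion correct in an n-alternative forced choice for a given d',
    assuming equal-variance Gaussian decision variables: a trial is correct
    when the target sample exceeds every nontarget sample."""
    rng = np.random.default_rng(seed)
    target = rng.normal(dprime, 1.0, n_trials)
    foils = rng.normal(0.0, 1.0, (n_trials, n_alt - 1))
    return np.mean(target > foils.max(axis=1))

def dprime_from_pc(pc, n_alt=5):
    """Invert the percent-correct-to-d' mapping by bisection."""
    lo, hi = 0.0, 6.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if pc_for_dprime(mid, n_alt) < pc:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(dprime_from_pc(0.70))   # d' corresponding to 70% correct with five alternatives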

Is There a Safe Level for Recording Vestibular Evoked Myogenic Potential? Evidence From Cochlear and Hearing Function Tests
Objective: There is a growing concern among the scientific community about the possible detrimental effects of signal levels used for eliciting vestibular evoked myogenic potentials (VEMPs) on hearing. A few recent studies showed temporary reduction in amplitude of otoacoustic emissions (OAE) after VEMP administration. Nonetheless, these studies used higher stimulus levels (133 and 130 dB peak equivalent sound pressure level [pe SPL]) than the ones often used (120 to 125 dB pe SPL) for clinical recording of VEMP. Therefore, it is not known whether these lower levels also have a similar detrimental impact on hearing function. Hence, the present study aimed at investigating the effect of a 500 Hz tone burst presented at 125 dB pe SPL on hearing functions. Design: A true experimental design, with an experimental and a control group, was used in this study. The study included 60 individuals with normal auditory and vestibular systems. Of them, 30 underwent unilateral VEMP recording (group I) while the remaining 30 did not undergo VEMP testing (group II). Selection of participants to the groups was random. Pre- and post-VEMP assessments included pure-tone audiometry (250 to 16,000 Hz), distortion product OAE, and subjective symptoms. To simulate the time taken for VEMP testing in group I, participants in group II underwent these tests twice with a gap of 15 minutes. Results: No participant experienced any subjective symptom after VEMP testing. There were no significant interear or intergroup differences in pure-tone thresholds and distortion product OAE amplitude before and after VEMP recording (p > 0.05). Furthermore, the response rate of cervical VEMP was 100% at a stimulus intensity of 125 dB pe SPL. Conclusions: Use of a 500 Hz tone burst at 125 dB pe SPL does not cause any temporary or permanent changes in cochlear function and hearing, yet produces a 100% response rate of cervical VEMP in normal-hearing young adults. Therefore, 125 dB pe SPL of a 500 Hz tone burst is recommended as a safe level for obtaining cervical VEMP without significantly losing out on its response rate, at least in normal-hearing young adults.

Bimodal Hearing or Bilateral Cochlear Implants? Ask the Patient
Objective: The objectives of this study were to assess the effectiveness of various measures of speech understanding in distinguishing performance differences between adult bimodal and bilateral cochlear implant (CI) recipients and to provide a preliminary evidence-based tool guiding clinical decisions regarding bilateral CI candidacy. Design: This study used a multiple-baseline, cross-sectional design investigating speech recognition performance for 85 experienced adult CI recipients (49 bimodal, 36 bilateral). Speech recognition was assessed in a standard clinical test environment with a single loudspeaker using the minimum speech test battery for adult CI recipients as well as with an R-SPACETM 8-loudspeaker, sound-simulation system. All participants were tested in three listening conditions for each measure including each ear alone as well as in the bilateral/bimodal condition. In addition, we asked each bimodal listener to provide a yes/no answer to the question, "Do you think you need a second CI?" Results: This study yielded three primary findings: (1) there were no significant differences between bimodal and bilateral CI performance or binaural summation on clinical measures of speech recognition, (2) an adaptive speech recognition task in the R-SPACETM system revealed significant differences in performance and binaural summation between bimodal and bilateral CI users, with bilateral CI users achieving significantly better performance and greater summation, and (3) the patient's answer to the question, "Do you think you need a second CI?" held high sensitivity (100% hit rate) for identifying likely bilateral CI candidates and moderately high specificity (77% correct rejection rate) for correctly identifying listeners best suited to a bimodal hearing configuration. Conclusions: Clinics cannot rely on current clinical measures of speech understanding, with a single loudspeaker, to determine bilateral CI candidacy for adult bimodal listeners nor to accurately document bilateral benefit relative to a previous bimodal hearing configuration. Speech recognition in a complex listening environment, such as R-SPACETM, is a sensitive and appropriate measure for determining bilateral CI candidacy and also likely for documenting bilateral benefit relative to a previous bimodal configuration. In the absence of an available R-SPACETM system, asking the patient whether or not s/he thinks s/he needs a second CI is a highly sensitive measure, which may prove clinically useful.
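The hit rate and correct-rejection rate reported for the single yes/no question are simply the sensitivity and specificity of a binary screen. A worked sketch with invented counts chosen only to illustrate the arithmetic (the abstract does not report the underlying 2 x 2 table):

# Hypothetical outcomes for "Do you think you need a second CI?" among bimodal listeners
true_pos  = 12   # answered "yes" and were identified as likely bilateral CI candidates
false_neg = 0    # answered "no" but were likely candidates
true_neg  = 27   # answered "no" and were best suited to a bimodal configuration
false_pos = 8    # answered "yes" but were best suited to a bimodal configuration

sensitivity = true_pos / (true_pos + false_neg)   # hit rate; 1.00 here, matching the reported 100%
specificity = true_neg / (true_neg + false_pos)   # correct-rejection rate; about 0.77 here
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")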

Effects of Early Auditory Deprivation on Working Memory and Reasoning Abilities in Verbal and Visuospatial Domains for Pediatric Cochlear Implant Recipients
Objectives: The overall goal of this study was to compare verbal and visuospatial working memory in children with normal hearing (NH) and with cochlear implants (CI). The main questions addressed by this study were (1) Does auditory deprivation result in global or domain-specific deficits in working memory in children with CIs compared with their NH age mates? (2) Does the potential for verbal recoding affect performance on measures of reasoning ability in children with CIs relative to their NH age mates? and (3) Is performance on verbal and visuospatial working memory tasks related to spoken receptive language level achieved by children with CIs? Design: A total of 54 children ranging in age from 5 to 9 years participated; 25 children with CIs and 29 children with NH. Participants were tested on both simple and complex measures of verbal and visuospatial working memory. Vocabulary was assessed with the Peabody Picture Vocabulary Test (PPVT) and reasoning abilities with two subtests of the WISC-IV (Wechsler Intelligence Scale for Children, 4th edition): Picture Concepts (verbally mediated) and Matrix Reasoning (visuospatial task). Groups were compared on all measures using analysis of variance after controlling for age and maternal education. Results: Children with CIs scored significantly lower than children with NH on measures of working memory, after accounting for age and maternal education. Differences between the groups were more apparent for verbal working memory compared with visuospatial working memory. For reasoning and vocabulary, the CI group scored significantly lower than the NH group for PPVT and WISC Picture Concepts but similar to NH age mates on WISC Matrix Reasoning. Conclusions: Results from this study suggest that children with CIs have deficits in working memory related to storing and processing verbal information in working memory. These deficits extend to receptive vocabulary and verbal reasoning and remain even after controlling for the higher maternal education level of the NH group. Their ability to store and process visuospatial information in working memory and complete reasoning tasks that minimize verbal labeling of stimuli more closely approaches performance of NH age mates.

Music Appreciation of Adult Hearing Aid Users and the Impact of Different Levels of Hearing Loss
Objectives: The main aim of this study was to collect information on music listening and music appreciation from postlingually deafened adults who use hearing aids (HAs). It also sought to investigate whether there were any differences in music ratings from HA users with different levels of hearing loss (HL; mild, versus moderate to moderately-severe, versus severe or worse). Design: An existing published questionnaire developed for cochlear implant recipients was modified for this study. It had 51 questions divided into seven sections: (1) music listening and music background; (2) sound quality; (3) musical styles; (4) music preferences; (5) music recognition; (6) factors affecting music listening enjoyment; and (7) music training program. The questionnaire was posted out to adult HA users, who were subsequently divided into three groups: (i) HA users with a mild HL (Mild group); (ii) HA users with a moderate to moderately-severe HL (Moderate group); and (iii) HA users with a severe or worse HL (Severe group). Results: One hundred eleven questionnaires were completed; of these, 51 participants had a mild HL, 42 had a moderate to moderately-severe loss, and 18 a severe or worse loss. Overall, there were some significant differences noted, predominantly between the Mild and Severe groups, with fewer differences between the Mild and Moderate groups. The respondents with the greater levels of HL reported a greater reduction in their music enjoyment as a result of their HL and that HAs made music sound significantly less melodic for them. It was also observed that the Severe group's mean scores for both the pleasant rating as well as the combined rating for the six different musical styles were lower than both the Mild and Moderate groups' ratings for every style, with just one exception (pop/rock pleasantness rating). There were significant differences between the three groups for the styles of music that were reported to sound the best with HA(s), as well as differences between the ratings on more specific timbre rating scales used to rate different elements of each style. In rating the pleasantness and naturalness of different musical instruments or instrumental groups, there was no difference between the groups. There were also significant differences between the Mild and Severe groups in relation to musical preferences for the pitch range of music, with the Severe group significantly preferring male singers and lower pitched instruments. Conclusions: The overall results indicated little difference in music appreciation between those with a mild versus moderate loss. However, poorer appreciation scores were given by those with a severe or worse HL. This would suggest that HAs or HL have a negative impact on music listening, particularly when the HL becomes more significant. There was a large degree of variability in ratings, though, with music listening being satisfactory for some listeners and largely unsatisfactory for others, in all three groups. Music listening preferences also varied significantly, and the reported benefit (or otherwise) provided by the HA for music was also mixed. The overriding variability in listening preferences and ratings leads to the question as to the benefit and effectiveness of generic, manufacturer-derived music programs on HAs. Despite the heterogeneity in listening habits, preferences, and ratings, it is clear that music appreciation and enjoyment is still challenging for many HA users and that level of HL is one, but not the only, factor that impacts on music appreciation.

Redundant Information Is Sometimes More Beneficial Than Spatial Information to Understand Speech in Noise
Objectives: To establish a framework to unambiguously define and relate the different spatial effects in speech understanding: head shadow, redundancy, squelch, spatial release from masking (SRM), and so on. Next, to investigate the contribution of interaural time and level differences to these spatial effects in speech understanding and how this is influenced by the type of masking noise. Design: In our framework, SRM is uniquely characterized as a linear combination of head shadow, binaural redundancy, and binaural squelch. The latter two terms are combined into one binaural term, which we define as binaural contrast: a benefit of interaural differences. In this way, SRM is a simple sum of a monaural and a binaural term. We used the framework to quantify these spatial effects in 10 listeners with normal hearing. The participants performed speech intelligibility tasks in different spatial setups. We used head-related transfer functions to manipulate the presence of interaural time and level differences. We used three spectrally matched masker types: stationary speech-weighted noise, a competing talker, and speech-weighted noise that was modulated with the broadband temporal envelope of the competing talker. Results: We found that (1) binaural contrast was increased by interaural time differences, but reduced by interaural level differences, irrespective of masker type, and (2) large redundancy (the benefit of having identical information in two ears) could reduce binaural contrast and thus also reduce SRM. Conclusions: Our framework yielded new insights into binaural processing in speech intelligibility. First, interaural level differences disturb speech intelligibility in realistic listening conditions. Therefore, to optimize speech intelligibility in hearing aids, it is more beneficial to improve monaural signal-to-noise ratios rather than to preserve interaural level differences. Second, although redundancy is mostly ignored when considering spatial hearing, it might explain reduced SRM in some cases.
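The framework's central bookkeeping is that spatial release from masking (SRM) is the sum of a monaural term (head shadow) and a binaural term (binaural contrast, which folds together redundancy and squelch). The sketch below only illustrates that accounting with made-up speech reception thresholds (SRTs); the specific listening configurations that define each term in the study are more detailed than shown here.

# Illustrative SRTs (dB SNR) in three hypothetical configurations; lower is better.
srt = {
    "colocated_binaural":   -2.0,   # target and masker co-located, both ears
    "separated_better_ear": -6.0,   # masker spatially separated, better ear alone
    "separated_binaural":   -9.0,   # masker spatially separated, both ears
}

head_shadow       = srt["colocated_binaural"]   - srt["separated_better_ear"]  # monaural benefit
binaural_contrast = srt["separated_better_ear"] - srt["separated_binaural"]    # benefit of interaural differences
srm               = srt["colocated_binaural"]   - srt["separated_binaural"]    # spatial release from masking

# SRM decomposes exactly into the monaural and binaural terms.
assert abs(srm - (head_shadow + binaural_contrast)) < 1e-9
print(head_shadow, binaural_contrast, srm)   # 4.0 3.0 7.0 dB in this made-up example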

Use of Commercial Virtual Reality Technology to Assess Verticality Perception in Static and Dynamic Visual Backgrounds
Objectives: The Subjective Visual Vertical (SVV) test and the closely related Rod and Disk Test (RDT) are measures of perceived verticality measured in static and dynamic visual backgrounds. However, the equipment used for these tests is variable across clinics and is often too expensive or too primitive to be appropriate for widespread use. Commercial virtual reality technology, which is now widely available, may provide a more suitable alternative for collecting these measures in clinical populations. This study was designed to investigate verticality perception in symptomatic patients using a modified RDT paradigm administered through a head-mounted display (HMD). Design: A group of adult patients referred by a physician for vestibular testing based on the presence of dizziness symptoms and a group of healthy adults without dizziness symptoms were included. We investigated degree of visual dependence in both groups by measuring SVV as a function of kinematic changes to the visual background. Results: When a dynamic background was introduced into the HMD to simulate the RDT, significantly greater shifts in SVV were found for the patient population than for the control population. In patients referred for vestibular testing, the SVV measured with the HMD was significantly correlated with traditional measures of SVV collected in a rotary chair when accounting for head tilt. Conclusions: This study provides initial proof of concept evidence that reliable SVV measures in static and dynamic visual backgrounds can be obtained using a low-cost commercial HMD system. This initial evidence also suggests that this tool can distinguish individuals with dizziness symptomatology based on SVV performance in dynamic visual backgrounds. Acknowledgment: The work was supported by Defense Health Affairs in support of the Army Hearing Program. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The authors have no conflicts of interest to disclose. Received March 27, 2018; accepted March 2, 2019. Address for correspondence: Ashley Zaleski-King, Walter Reed National Military Medical Center (WRNMMC), 8901 Rockville Pike, Bethesda, MD 20889, USA. E-mail: ashley.c.king8.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Predicting Speech-in-Noise Deficits from the Audiogram
Objectives: In occupations that involve hearing-critical tasks, individuals need to undergo periodic hearing screenings to ensure that they have not developed hearing losses that could impair their ability to safely and effectively perform their jobs. Most periodic hearing screenings are limited to pure-tone audiograms, but in many cases, the ability to understand speech in noisy environments may be more important to functional job performance than the ability to detect quiet sounds. The ability to use audiometric threshold data to identify individuals with poor speech-in-noise performance is of particular interest to the U.S. military, which has an ongoing responsibility to ensure that its service members (SMs) have the hearing abilities they require to accomplish their mission. This work investigates the development of optimal strategies for identifying individuals with poor speech-in-noise performance from the audiogram. Design: Data from 5487 individuals were used to evaluate a range of classifiers, based exclusively on the pure-tone audiogram, for identifying individuals who have deficits in understanding speech in noise. The classifiers evaluated were based on generalized linear models (GLMs), the speech intelligibility index (SII), binary threshold criteria, and current standards used by the U.S. military. The classifiers were evaluated in a detection theoretic framework where the sensitivity and specificity of the classifiers were quantified. In addition to the performance of these classifiers for identifying individuals with deficits understanding speech in noise, data from 500,733 U.S. Army SMs were used to understand how the classifiers would affect the number of SMs being referred for additional testing. Results: A classifier based on binary threshold criteria that was identified through an iterative search procedure outperformed a classifier based on the SII and ones based on GLMs with large numbers of fitted parameters. This suggests that the saturating nature of the SII is important, but that the weights of frequency channels are not optimal for identifying individuals with deficits understanding speech in noise. It is possible that a highly complicated model with many free parameters could outperform the classifiers considered here, but there was only a modest difference between the performance of a classifier based on a GLM with 26 fitted parameters and one based on a simple all-frequency pure-tone average. This suggests that the details of the audiogram are a relatively insensitive predictor of performance in speech-in-noise tasks. Conclusions: The best classifier identified in this study, which was a binary threshold classifier derived from an iterative search process, does appear to reliably outperform the current threshold criteria used by the U.S. military to identify individuals with abnormally poor speech-in-noise performance, both in terms of fewer false alarms and a greater hit rate. Substantial improvements in the ability to detect SMs with impaired speech-in-noise performance can likely only be obtained by adding some form of speech-in-noise testing to the hearing monitoring program. While the improvements were modest, the overall benefit of adopting the proposed classifier is likely substantial given the number of SMs enrolled in U.S. military hearing conservation and readiness programs. ACKNOWLEDGMENTS: The authors thank Dr. Gary Kidd for sharing his TDT data and Dr. Ken Grant for sharing his SPRINT data.
The authors also thank Kari Buchanan and the Hearing Center of Excellence for sharing the DOEHRS-HC data. All authors contributed equally to this work. All authors were involved in the data analysis and discussed the results and implications and commented on the manuscript at all stages. The views expressed in this article are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The authors have no conflicts of interest to disclose. Received December 3, 2017; accepted March 14, 2019. Address for correspondence: Daniel E. Shub, National Military Audiology and Speech Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, MD 20889, USA. E-mail: daniel.e.shub.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
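A binary threshold classifier of the kind evaluated above flags a listener whenever any audiometric threshold exceeds a frequency-specific limit, and its performance is then summarized with a hit rate and a false-alarm rate against an independent speech-in-noise criterion. The limits and listeners below are placeholders, not the criteria or data identified in the study:

import numpy as np

# Assumed frequency-specific limits (dB HL); the study's iterative search would select these values.
LIMITS = {500: 25, 1000: 25, 2000: 30, 3000: 35, 4000: 45, 6000: 55}

def flag_listener(audiogram, limits=LIMITS):
    """Return True if any threshold exceeds its frequency-specific limit."""
    return any(audiogram[freq] > limit for freq, limit in limits.items())

def hit_and_false_alarm_rates(flags, has_deficit):
    """Detection-theoretic summary: hits among true deficits, false alarms among the rest."""
    flags, deficit = np.asarray(flags, bool), np.asarray(has_deficit, bool)
    return flags[deficit].mean(), flags[~deficit].mean()

# Tiny fabricated example: two listeners, the second with elevated high-frequency thresholds.
listeners = [
    {500: 10, 1000: 10, 2000: 15, 3000: 20, 4000: 25, 6000: 30},
    {500: 15, 1000: 20, 2000: 35, 3000: 50, 4000: 60, 6000: 65},
]
flags = [flag_listener(a) for a in listeners]
print(hit_and_false_alarm_rates(flags, has_deficit=[False, True]))   # -> (1.0, 0.0)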

Children With Normal Hearing Are Efficient Users of Fundamental Frequency and Vocal Tract Length Cues for Voice Discrimination
Background: The ability to discriminate between talkers assists listeners in understanding speech in a multitalker environment. This ability has been shown to be influenced by sensory processing of vocal acoustic cues, such as fundamental frequency (F0) and formant frequencies that reflect the talker's vocal tract length (VTL), and by cognitive processes, such as attention and memory. It is, therefore, suggested that children who exhibit immature sensory and/or cognitive processing will demonstrate poor voice discrimination (VD) compared with young adults. Moreover, greater difficulties in VD may be associated with spectral degradation as in children with cochlear implants. Objectives: The aims of this study were as follows: (1) to assess the use of F0 cues, VTL cues, and the combination of both cues for VD in normal-hearing (NH) school-age children and to compare their performance with that of NH adults; (2) to assess the influence of spectral degradation by means of vocoded speech on the use of F0 and VTL cues for VD in NH children; and (3) to assess the contribution of attention, working memory, and nonverbal reasoning to performance. Design: Forty-one children, 8 to 11 years of age, were tested with nonvocoded stimuli. Twenty-one of them were also tested with eight-channel, noise-vocoded stimuli. Twenty-one young adults (18 to 35 years) were tested for comparison. A three-interval, three-alternative forced-choice paradigm with an adaptive tracking procedure was used to estimate the difference limens (DLs) for VD when F0, VTL, and F0 + VTL were manipulated separately. Auditory memory, visual attention, and nonverbal reasoning were assessed for all participants. Results: (a) Children's F0 and VTL discrimination abilities were comparable to those of adults, suggesting that most school-age children utilize both cues effectively for VD. (b) Children's VD was associated with Trail Making Test scores that assessed visual attention abilities and speed of processing, possibly reflecting their need to recruit cognitive resources for the task. (c) Best DLs were achieved for the combined (F0 + VTL) manipulation for both children and adults, suggesting that children at this age are already capable of integrating spectral and temporal cues. (d) Both children and adults found the VTL manipulations more beneficial for VD compared with the F0 manipulations, suggesting that formant frequencies are more reliable for identifying a specific speaker than F0. (e) Poorer DLs were achieved with the vocoded stimuli, though the children maintained thresholds and a pattern of performance across manipulations similar to those of the adults. Conclusions: The present study is the first to assess the contribution of F0, VTL, and the combined F0 + VTL to the discrimination of speakers in school-age children. The findings support the notion that many NH school-age children have effective spectral and temporal coding mechanisms that allow sufficient VD, even in the presence of spectrally degraded information. These results may challenge the notion that immature sensory processing underlies poor listening abilities in children, further implying that other processing mechanisms contribute to their difficulties in understanding speech in a multitalker environment. These outcomes may also provide insight into VD processes of children under listening conditions that are similar to cochlear implant users.
ACKNOWLEDGMENTS: The authors wish to acknowledge the contribution of the following undergraduate students from the Department of Communication Disorders at Tel Aviv University for assisting in data collection: Feigi Raiter, Feigi Grinvald, Shani Rabia, Adi Amsalem, Miri Rotem, Lea pantiat, Daniel Lex Rabinovitch, and Orpaz Shariki. The authors wish to thank Steyer grant (School of Health Professions, Tel-Aviv University) for their financial support. The authors specially thank all the adults and children who participated in the present study. All authors contributed to this work to a significant extent. All authors have read the article and agreed to submit it for publication after discussing the results and implications and commented on the article at all stages. All authors are, therefore, responsible for the reported research and have approved the final article as submitted. The authors have no conflicts of interest to declare. Received September 30, 2018; accepted March 17, 2019. Address for correspondence: Yael Zaltz, Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel. E-mail: yaelzaltz@gmail.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
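Adaptive tracking procedures like the one described above adjust the size of the cue difference from trial to trial and estimate the difference limen from the final reversal points. A minimal sketch of a generic 2-down/1-up track; the study's exact rule, step sizes, and stopping criteria are not given in the abstract, so everything below is an assumption.

import numpy as np

def two_down_one_up(respond, start=12.0, step=2.0, n_reversals=8):
    """Generic 2-down/1-up adaptive track, converging near 70.7% correct.
    'respond(delta)' returns True when the listener answers correctly at cue
    difference 'delta' (e.g., an F0 or VTL difference)."""
    delta, streak, reversals, last_dir = start, 0, [], None
    while len(reversals) < n_reversals:
        if respond(delta):
            streak += 1
            if streak == 2:                     # two correct in a row -> make the task harder
                streak = 0
                if last_dir == "up":
                    reversals.append(delta)
                delta, last_dir = max(delta - step, 0.1), "down"
        else:                                   # one error -> make the task easier
            streak = 0
            if last_dir == "down":
                reversals.append(delta)
            delta, last_dir = delta + step, "up"
    return np.mean(reversals[-6:])              # difference limen from the last reversals

# Simulated listener whose probability of a correct response rises with the cue difference.
rng = np.random.default_rng(2)
print(two_down_one_up(lambda d: rng.random() < 1.0 / (1.0 + np.exp(-(d - 4.0)))))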

Switching Streams Across Ears to Evaluate Informational Masking of Speech-on-Speech
Objectives: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials. Design: Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor. Results: SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM. Conclusions: These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was the most efficient in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Rachel Ellinger and Andrea Cunningham for their help with data collection. This work was supported by NIH R01 DC 60014 grant awarded to P. S., and an iCARE ITN (FP7-607139) European fellowship to A. C. The authors have no conflict of interest to disclose. Received June 4, 2018; accepted March 17, 2019. Address for correspondence: Axelle Calcus, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France. E-mail: axelle.calcus@ens.fr Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Genetic Inheritance of Late-Onset, Down-Sloping Hearing Loss and Its Implications for Auditory Rehabilitation
Objectives: Late-onset, down-sloping sensorineural hearing loss has many genetic and nongenetic etiologies, but the proportion of this commonly encountered type of hearing loss attributable to genetic causes is not well known. In this study, the authors performed genetic analysis using next-generation sequencing techniques in patients showing late-onset, down-sloping sensorineural hearing loss with preserved low-frequency hearing, and investigated the clinical implications of the variants identified. Design: From a cohort of patients with hearing loss at a tertiary referral hospital, 18 unrelated probands with down-sloping sensorineural hearing loss of late onset were included in this study. Down-sloping hearing loss was defined as a mean low-frequency threshold at 250 Hz and 500 Hz less than or equal to 40 dB HL and a mean high-frequency threshold at 1, 2, and 4 kHz greater than 40 dB HL. The authors performed whole-exome sequencing and segregation analysis to identify the genetic causes and evaluated the outcomes of auditory rehabilitation in the patients. Results: There were nine simplex and nine multiplex families included, in which the causative variants were found in six of 18 probands, demonstrating a detection rate of 33.3%. Various types of variants, including five novel and three known variants, were detected in the MYH14, MYH9, USH2A, COL11A2, and TMPRSS3 genes. The outcome of cochlear and middle ear implants in patients identified with pathogenic variants was satisfactory. There was no statistically significant difference between pathogenic variant-positive and pathogenic variant-negative groups in terms of onset age, family history of hearing loss, pure-tone threshold, or speech discrimination scores. Conclusions: The proportion of patients with late-onset, down-sloping hearing loss identified with potentially causative variants was unexpectedly high. Identification of the causative variants will offer insights into hearing loss progression and prognosis regarding various modes of auditory rehabilitation, as well as possible concomitant syndromic features. ACKNOWLEDGMENTS: This study was provided with bioresources from the National Biobank of Korea, Centers for Disease Control and Prevention, Republic of Korea (4845-301, 4851-302 and -307). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1A1A1A05001472 to S.M.H., 2017M3A9E8029714 to J.J.S., 2014M3A9D5A01073865 to C.J.Y., 2018R1A5A2025079 to H.Y.G.). M.H.S., J.J., H.Y.G., and J.Y.C. conceived and designed the study. J.J., J.H.R., H.J.C., and J.S.L. performed the experiments. M.H.S., J.J., H.J.L., and B.N. analyzed and interpreted the data. M.H.S., J.J., H.J.L., B.N., H.Y.G., and J.H.R. wrote the article. The authors have no conflicts of interest to disclose. Received July 25, 2018; accepted March 2, 2019. Address for correspondence: Jae Young Choi, Department of Otorhinolaryngology, Yonsei University College of Medicine, 50–1 Yonsei-ro, Seodaemun-gu, Seoul 120–752, Republic of Korea. E-mail: jychoi@yuhs.ac Address for correspondence: Heon Yung Gee, Department of Pharmacology and Brain Korea 21 Project for Medical Sciences, Yonsei University College of Medicine, 50–1 Yonsei-ro, Seodaemun-gu, Seoul 120–752, Republic of Korea. E-mail: hygee@yuhs.ac Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Improving Clinical Outcomes in Cochlear Implantation Using Glucocorticoid Therapy: A Review
Cochlear implant surgery is a successful procedure for auditory rehabilitation of patients with severe to profound hearing loss. However, cochlear implantation may lead to damage to the inner ear, which decreases residual hearing and alters vestibular function. It is now of increasing interest to preserve residual hearing during this surgery because this is related to better speech perception, music perception, and hearing in complex listening environments. Thus, different approaches have been tried to reduce cochlear implantation-related injury, including periprocedural glucocorticoids because of their anti-inflammatory properties. Different routes of administration have been tried to deliver glucocorticoids. However, several drawbacks still remain, including their systemic side effects, unknown pharmacokinetic profiles, and complex delivery methods. In the present review, we discuss the role of periprocedural glucocorticoid therapy in decreasing cochlear implantation-related injury, thus preserving inner ear function after surgery. Moreover, we highlight the pharmacokinetic evidence and clinical outcomes that would support further interventions. ACKNOWLEDGMENTS: The authors have no conflicts of interest to disclose. Received October 8, 2018; accepted March 14, 2019. Address for correspondence: Cecilia Engmér Berglin, Department of Otorhinolaryngology, B53, Karolinska University Hospital, 141 86 Stockholm, Sweden. E-mail: cecilia.engmer-berglin@sll.se Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

The Effect of Hearing-Protection Devices on Auditory Situational Awareness and Listening Effort
Objectives: Hearing-protection devices (HPDs) are made available, and often are required, for industrial use as well as military training exercises and operational duties. However, these devices often are disliked, and consequently not worn, in part because they compromise situational awareness through reduced sound detection and localization performance as well as degraded speech intelligibility. In this study, we carried out a series of tests, involving normal-hearing subjects and multiple background-noise conditions, designed to evaluate the performance of four HPDs in terms of their modifications of auditory-detection thresholds, sound-localization accuracy, and speech intelligibility. In addition, we assessed their impact on listening effort to understand how the additional effort required to perceive and process auditory signals while wearing an HPD reduces available cognitive resources for other tasks. Design: Thirteen normal-hearing subjects participated in a protocol, which included auditory tasks designed to measure detection and localization performance, speech intelligibility, and cognitive load. Each participant repeated the battery of tests with unoccluded ears and four hearing protectors, two active (electronic) and two passive. The tasks were performed both in quiet and in background noise. Results: Our findings indicate that, in variable degrees, all of the tested HPDs induce performance degradation on most of the conducted tasks as compared to the open ear. Of particular note in this study is the finding of increased cognitive load or listening effort, as measured by visual reaction time, for some hearing protectors during a dual-task, which added working-memory demands to the speech-intelligibility task. Conclusions: These results indicate that situational awareness can vary greatly across the spectrum of HPDs, and that listening effort is another aspect of performance that should be considered in future studies. The increased listening effort induced by hearing protectors may lead to earlier cognitive fatigue in noisy environments. Further study is required to characterize how auditory performance is limited by the combination of hearing impairment and the use of HPDs, and how the effects of such limitations can be linked to safe and effective use of hearing protection to maximize job performance. ACKNOWLEDGMENTS: This work is sponsored by the US Army Natick Soldier Research, Development, and Engineering Center under Air Force Contract FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Army. Distribution Statement A: Approved for public release. Distribution is unlimited. C.J.S. designed and performed experiments, analyzed data, provided statistical analysis and wrote the article; P.T.C provided data analysis and wrote the article; A.P.D, J.P.P., T.P., and J.B. collected and analyzed data; T.F.Q. and M.M. provided contributions to conception of the work and critical editing; P.P.C provided editing and final approval of the version to be published. The authors have no conflicts of interest to disclose. Received June 4, 2018; accepted February 21, 2019. Address for correspondence: Bioengineering Systems and Technologies Group, MIT Lincoln Laboratory, 244 Wood St. Lexington, MA 02421, USA. E-mail: christopher.smalt@ll.mit.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Neural Indices of Vowel Discrimination in Monolingual and Bilingual Infants and Children
Objectives: To examine maturation of neural discriminative responses to an English vowel contrast from infancy to 4 years of age and to determine how biological factors (age and sex) and an experiential factor (amount of Spanish versus English input) modulate neural discrimination of speech. Design: Event-related potential (ERP) mismatch responses (MMRs) were used as indices of discrimination of the American English vowels [ε] versus [I] in infants and children between 3 months and 47 months of age. A total of 168 longitudinal and cross-sectional data sets were collected from 98 children (Bilingual Spanish–English: 47 male and 31 female sessions; Monolingual English: 48 male and 42 female sessions). Language exposure and other language measures were collected. ERP responses were examined in an early time window (160 to 360 msec, early MMR [eMMR]) and late time window (400 to 600 msec, late MMR). Results: The eMMR became more negative with increasing age. Language experience and sex also influenced the amplitude of the eMMR. Specifically, bilingual children, especially bilingual females, showed more negative eMMR compared with monolingual children and with males. However, the subset of bilingual children with more exposure to English than Spanish compared with those with more exposure to Spanish than English (as reported by caretakers) showed similar amplitude of the eMMR to their monolingual peers. Age was the only factor that influenced the amplitude of the late MMR. More negative late MMR was observed in older children with no difference found between bilingual and monolingual groups. Conclusions: Consistent with previous studies, our findings revealed that biological factors (age and sex) and language experience modulated the amplitude of the eMMR in young children. The early negative MMR is likely to be the mismatch negativity found in older children and adults. In contrast, the late MMR amplitude was influenced only by age and may be equivalent to the Nc in infants and to the late negativity observed in some auditory passive oddball designs. ACKNOWLEDGMENTS: The authors thank A. Barias and M. Wroblewski for helping with data collection, B. Tagliaferri for technical support, and W. Strange and R. G. Schwartz for advice on the design. This research was supported by NIH HD46193 to V. L. Shafer. V. L. S. oversaw the project, designed the experiments, and was involved in writing the article; Y. H. Y. helped with data collection, performed data analyses, wrote the initial draft in conjunction with V. L. S., and led the manuscript revision process; C. T. helped with data collection and interpreting the language measures; H.H. and L. C. performed the early stages of the Mixed-Effect Modeling analysis in conjunction with Y. H. Y.; N. V. helped design the language background questionnaire and collect the data; J. G. helped collect the data; K. G. and H. D. helped design and pilot the electrophysiological paradigm and helped collect the data. All authors were involved in revising the article. The authors have no conflicts of interest to disclose. Received May 10, 2018; accepted January 24, 2019. Address for correspondence: Yan H. Yu, Department of Communication Sciences and Disorders, St. John's University, 8000 Utopia Parkway, Queens, NY 11437, USA. E-mail: yuy1@stjohns.edu and Valerie L. Shafer, Ph.D. Program in Speech-Language-Hearing Sciences, The Graduate Center, City University of New York, 365 Fifth Avenue, New York, NY 10016, USA.
E-mail: vshafer@gc.cuny.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.

Impact of Lexical Parameters and Audibility on the Recognition of the Freiburg Monosyllabic Speech Test
Objective: Correct word recognition is generally determined by audibility, but lexical parameters also play a role. The focus of this study was to examine the impact of both audibility and lexical parameters on speech recognition of test words of the clinical German Freiburg monosyllabic speech test, and subsequently on the perceptual imbalance of test lists observed in the literature. Design: For 160 participants with normal hearing who were divided into three groups with different simulated hearing thresholds, monaural speech recognition for the Freiburg monosyllabic speech test was obtained via headphones in quiet at different presentation levels. Software was used to manipulate the original speech material to simulate two different hearing thresholds. All monosyllables were classified according to their frequency of occurrence in contemporary language and the number of lexical neighbors using the Cross-Linguistic Easy-Access Resource for Phonological and Orthographic Neighborhood Density database. Generalized linear mixed-effects regression models were used to evaluate the influences of audibility in terms of the Speech Intelligibility Index and lexical properties of the monosyllables in terms of word frequency (WF) and neighborhood density (ND) on the observed speech recognition per word and per test list, respectively. Results: Audibility and interactions of audibility with WF and ND correctly predicted identification of the individual monosyllables. Test list recognition was predicted by test list choice, audibility, and ND, as well as by interactions of WF and test list, audibility and ND, ND and test list, and audibility per test list. Conclusions: Observed differences in speech recognition of the Freiburg monosyllabic speech test, which are well reported in the literature, depend not only on audibility but also on WF, neighborhood density, and test list choice and their interactions. The authors conclude that future development of speech test materials should take these lexical parameters into account. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: The authors thank Sascha Bilert, Tina Gebauer, Lena Haverkamp, Britta Jensen, and Kristin Sprenger for their support performing the measurements and categorizing the monosyllables per database. The authors also thank Daniel Berg for technical support and Thomas Brand for support on the SII predictions. English language support was provided by www.stels-ol.de. This work was supported by the Ph.D. program Jade2Pro of Jade University of Applied Sciences, Oldenburg, Germany. The authors have no conflicts of interest to disclose. Received October 14, 2017; accepted March 8, 2019. Address for correspondence: Alexandra Winkler, Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Ofener Straße 16/19, D-26121 Oldenburg, Germany. E-mail: alexandra.winkler@jade-hs.de Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
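The analysis described above models word-level recognition (correct/incorrect) as a function of audibility and lexical predictors. A simplified fixed-effects stand-in is sketched below using a logistic regression formula; the study's actual model is a generalized linear mixed-effects model with random effects that are omitted here, and all column names and fabricated data are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated trial-level data: one row per presented monosyllable per listener.
rng = np.random.default_rng(4)
n = 4000
sii = rng.uniform(0.1, 0.9, n)          # assumed Speech Intelligibility Index per trial
word_freq = rng.normal(0, 1, n)         # fabricated standardized log word frequency (WF)
nbhd = rng.normal(0, 1, n)              # fabricated standardized neighborhood density (ND)
test_list = rng.integers(1, 21, n)      # assumed 20 test lists
logit_p = -2 + 6 * sii + 0.3 * word_freq - 0.3 * nbhd
correct = rng.random(n) < 1 / (1 + np.exp(-logit_p))
df = pd.DataFrame({"correct": correct.astype(int), "sii": sii, "word_freq": word_freq,
                   "nbhd_density": nbhd, "test_list": test_list})

# Fixed-effects logistic regression with audibility-by-lexical interactions and test list.
model = smf.logit("correct ~ sii * word_freq + sii * nbhd_density + C(test_list)", data=df)
print(model.fit(disp=0).summary())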

Age-Related Changes in Temporal Resolution Revisited: Electrophysiological and Behavioral Findings From Cochlear Implant Users
Objectives: The mechanisms underlying age-related changes in speech perception are still unclear, most likely multifactorial and often can be difficult to parse out from the effects of hearing loss. Age-related changes in temporal resolution (i.e., the ability to track rapid changes in sounds) have long been associated with speech perception declines exhibited by many older individuals. The goals of this study were as follows: (1) to assess age-related changes in temporal resolution in cochlear implant (CI) users, and (2) to examine the impact of changes in temporal resolution and cognition on the perception of speech in noise. In this population, it is possible to bypass the cochlea and stimulate the auditory nerve directly in a noninvasive way. Additionally, CI technology allows for manipulation of the temporal properties of a signal without changing its spectrum. Design: Twenty postlingually deafened Nucleus CI users took part in this study. They were divided into groups of younger (18 to 40 years) and older (68 to 82 years) participants. A cross-sectional study design was used. The speech processor was bypassed and a mid-array electrode was used for stimulation. We compared peripheral and central physiologic measures of temporal resolution with perceptual measures obtained using similar stimuli. Peripherally, temporal resolution was assessed with measures of the rate of recovery of the electrically evoked compound action potential (ECAP), evoked using a single pulse and a pulse train as maskers. The acoustic change complex (ACC) to gaps in pulse trains was used to assess temporal resolution more centrally. Psychophysical gap detection thresholds were also obtained. Cognitive assessment included two tests of processing speed (Symbol Search and Coding) and one test of working memory (Digit Span Test). Speech perception was tested in the presence of background noise (QuickSIN test). A correlational design was used to explore the relationship between temporal resolution, cognition, and speech perception. Results: The only metric that showed significant age effects in temporal processing was the ECAP recovery function recorded using pulse train maskers. Younger participants were found to have faster rates of neural recovery following presentation of pulse trains than older participants. Age was not found to have a significant effect on speech perception. When results from both groups were combined, digit span was the only measure significantly correlated with speech perception performance. Conclusions: In this sample of CI users, few effects of advancing age on temporal resolution were evident. While this finding would be consistent with a general lack of aging effects on temporal resolution, it is also possible that aging effects are influenced by processing peripheral to the auditory nerve, which is bypassed by the CI. However, it is known that cross-fiber neural synchrony is improved with electrical (as opposed to acoustic) stimulation. This change in neural synchrony may, in turn, make temporal cues more robust/perceptible to all CI users. Future studies involving larger sample sizes should be conducted to confirm these findings. Results of this study also add to the growing body of literature that suggests that working memory is important for the perception of degraded speech. ACKNOWLEDGMENTS: We thank Paul Abbas for helpful suggestions on study design and data analysis, and Jacob Oleson for assistance with statistical analyses. 
We also acknowledge Wenjun Wang for help in developing the perception testing software. This study was funded by a Student Investigator Research Grant from the American Academy of Audiology (B. S. M.) and by an NIH P50 DC000242 grant. The authors have no conflicts of interest to disclose. Received June 21, 2017; accepted February 21, 2019. Address for correspondence: Bruna S. S. Mussoi, AuD, PhD, Kent State University, Speech Pathology and Audiology, A140 Center for Performing Arts, 1325 Theatre Drive, Kent, OH 44242, USA. E-mail: bmussoi@kent.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.
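ECAP recovery functions like those described above are typically summarized by fitting a saturating exponential to response amplitude as a function of masker-probe interval; the fitted time constant then indexes the rate of neural recovery that differed between the age groups. A minimal sketch with fabricated data points; the model form and intervals are common choices, not necessarily those used in the study.

import numpy as np
from scipy.optimize import curve_fit

def recovery(ipi, a_max, tau, t0):
    """Exponential ECAP recovery: amplitude as a function of masker-probe interval (ms)."""
    return a_max * (1.0 - np.exp(-(ipi - t0) / tau))

# Fabricated normalized ECAP amplitudes at a set of masker-probe intervals (ms).
ipi = np.array([0.3, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0])
amp = np.array([0.05, 0.20, 0.55, 0.80, 0.90, 0.97, 1.00])

params, _ = curve_fit(recovery, ipi, amp, p0=[1.0, 1.0, 0.2])
a_max, tau, t0 = params
print(f"recovery time constant tau = {tau:.2f} ms")   # faster neural recovery gives a smaller tau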

Ep. 2019-2.2: BONUS Interview with Samantha Kesteloot at the AAA 2019 Conference
In this bonus episode recorded at the 2019 American Academy of Audiology (AAA) Conference in Columbus, OH, D'Anne Rudden and Samantha Kesteloot take a deep dive into Samantha's early struggles as a child and then a teenager with hearing loss, and how she overcame those struggles in her pursuit of a career in audiology. Read the transcript here.

Ep. 2019-2: Audiology Professional with Hearing Loss with Samantha Kesteloot
An audiology professional with hearing loss? It's inspiring to consider what audiologists and AuD students with first-hand experience of hearing loss can contribute to patient care. Listen to this podcast with D'Anne Rudden and guest Samantha Kesteloot, an outstanding third-year AuD student with hearing loss, as they explore Samantha's trials and triumphs in becoming an audiologist while managing her own hearing loss. Read the transcript here.

Ep. 2019-1: Storytelling with Dawn Heiman, AuD

Storytelling offers a unique way of engaging with patients in their hearing care. Explore effective strategies for getting started with storytelling on social media with experts D'Anne Rudden, AuD, and Dawn Heiman, AuD. Read the podcast transcript here.








