The primary focus of our research is how humans recognize speech sounds and spoken words. Three themes in our research are:
Speech recognition in face-to-face communication
We investigate the underlying behavioral and neural processes both in situations in which we only hear a speaker and in face-to-face communication, in which we hear and see the speaker. Whenever listeners see a speaker, they process the information about speech that can be obtained from the talking face. But what information does this visual speech contain, and how is it used effectively?
Perceptual learning in speech recognition
The same speech sounds, and hence the same words, are produced differently both across and within speakers. Listeners can overcome this variability to recognize speech but are nevertheless sensitive to it. Listeners adjust to a speaker's idiosyncratic way of speaking after only very little exposure. We investigate the mechanisms underlying this perceptual learning.
Perceptual and cognitive factors in speech recognition
Listeners vary in their ability to recognize speech. We investigate whether and how perceptual factors (e.g., hearing acuity) and cognitive factors (e.g., working memory) play a role in speech recognition and can hence explain these individual differences. In this work, we often investigate speech recognition in more naturalistic situations, that is, with different types and levels of background noise. A special focus of our work is on how the decline in perceptual and cognitive abilities that comes with aging may affect speech recognition.
Methods and Facilities
We use standard psycholinguistic measures (e.g., reaction times) to investigate the behavioral mechanisms underlying speech recognition. In particular, we often use eye tracking with the visual world paradigm. We also investigate neural mechanisms by analyzing event-related potentials (ERPs) recorded with EEG. We use state-of-the-art psycholinguistic statistical methods (e.g., linear mixed-effects modeling) and computational modeling.
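To illustrate why mixed-effects-style analyses matter for reaction-time data of this kind, the following is a minimal, self-contained sketch. All numbers and condition labels here are hypothetical, not taken from our studies: it simulates per-trial reaction times with by-subject random intercepts (each listener has their own baseline speed) and then estimates a condition effect from within-subject means, the clustered-data problem that a full mixed model (e.g., lme4 in R, or MixedLM in statsmodels) handles more generally.

```python
import numpy as np

# Hypothetical simulation: reaction times (ms) in a two-condition
# word-recognition task with by-subject random intercepts.
rng = np.random.default_rng(42)
n_subjects, n_trials = 20, 50
true_effect = -50.0  # assumed audiovisual facilitation, in ms

# Each subject has an idiosyncratic baseline RT (random intercept).
subject_baselines = rng.normal(700.0, 40.0, n_subjects)

rows = []
for subj, base in enumerate(subject_baselines):
    for cond in (0, 1):  # 0 = audio-only, 1 = audiovisual (labels assumed)
        trial_rts = base + cond * true_effect + rng.normal(0.0, 30.0, n_trials)
        rows.append((subj, cond, trial_rts.mean()))

# Within-subject condition means cancel out the random intercepts,
# so the average difference recovers the condition effect.
means = np.array(rows)
effect = (means[means[:, 1] == 1, 2] - means[means[:, 1] == 0, 2]).mean()
print(round(effect, 1))
```

Averaging within subjects first is the simplest way to respect the clustering; a mixed-effects model additionally weighs subjects by their reliability and can include by-item random effects in the same fit.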
Our lab facilities consist of four state-of-the-art IAC sound-attenuating testing booths equipped for psychophysical research. Behavioral experiments are run in PsychToolBox under Octave. Another IAC booth hosts our own SR Research 2000 eye tracker. In addition, we have a video/audio recording lab and a workspace with workstations for high-end video/sound editing and for data analysis in R. Hearing tests are conducted with our Grason Stadler audiometer and tympanometer. Our EEG experiments are run in the Department of Psychological and Brain Sciences' own EEG laboratory, which hosts a 64-channel (72 channels total) BioSemi Active 2 data acquisition system. We use EEGLAB and ERPLab for our EEG data analyses.