Neural dynamics of variable-rate speech categorization

Stephen Grossberg, Ian Boardman, Michael Cohen
Journal of Experimental Psychology: Human Perception and Performance, 1997
What is the neural representation of a speech code as it evolves in time? A neural model simulates data concerning the segregation and integration of phonetic percepts. Hearing two phonetically related stops in a VC-CV pair (V = vowel; C = consonant) requires 150 ms more closure time than hearing two phonetically different stops in a VC₁-C₂V pair. Closure time also varies with long-term stimulus rate. The model simulates rate-dependent category boundaries that emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech code is a resonant wave. It emerges after bottom-up signals from the working memory select list chunks, which read out top-down expectations that amplify and focus attention on consistent working memory items. In VC₁-C₂V pairs, the resonance is reset by the mismatch of C₂ with the C₁ expectation. In VC-CV pairs, the resonance prolongs a repeated C.

What is the nature of the process that converts brain events into behavioral percepts? An answer to this question is needed in order to understand how the brain controls behavior and how the brain is, in turn, shaped by environmental feedback that is experienced on the behavioral level. The nature of this connection also needs to be understood in order to develop neurally plausible connectionist models. Without it, a correct linking hypothesis cannot be developed between psychological data and the brain mechanisms from which they are generated.
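The bottom-up selection, top-down expectation, and mismatch-reset cycle described in the abstract can be caricatured in a few lines of code. This is a toy sketch, not the authors' model: the `match_score`, `resonate_or_reset`, and vigilance-threshold names are illustrative assumptions, and real adaptive-resonance dynamics are continuous in time rather than a single comparison.

```python
def match_score(expectation, items):
    """Fraction of the chunk's top-down expectation confirmed,
    position by position, by the items in working memory."""
    confirmed = sum(1 for e, w in zip(expectation, items) if e == w)
    return confirmed / len(expectation)

def resonate_or_reset(working_memory, chunks, vigilance=0.9):
    """Bottom-up signals select the best-matching list chunk; its
    top-down expectation is compared against working memory.

    Returns (chunk_name, 'resonate') when the match exceeds the
    vigilance threshold, otherwise (chunk_name, 'reset').
    """
    best_name, best_exp = max(
        chunks.items(),
        key=lambda kv: match_score(kv[1], working_memory),
    )
    outcome = "resonate" if match_score(best_exp, working_memory) >= vigilance else "reset"
    return best_name, outcome

# Hypothetical chunks for a VC-CV pair with a repeated stop and a
# VC1-C2V pair with different stops (item labels are illustrative).
chunks = {"ib-ba": ["i", "b", "b", "a"], "ib-ga": ["i", "b", "g", "a"]}

# Repeated stop: the selected chunk's expectation is fully confirmed,
# so the resonance is sustained (and can prolong the repeated C).
print(resonate_or_reset(["i", "b", "b", "a"], chunks))
# → ('ib-ba', 'resonate')

# Different stops, with only the first chunk available: C2 = /g/
# mismatches the C1 = /b/ expectation, and the resonance is reset.
print(resonate_or_reset(["i", "b", "g", "a"], {"ib-ba": chunks["ib-ba"]}))
# → ('ib-ba', 'reset')
```

The vigilance parameter plays the role of the match criterion: lowering it would let the partially confirmed /ib-ba/ expectation survive the mismatching C₂, while raising it makes reset more likely.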
doi:10.1037/0096-1523.23.2.481