Mapping visual symbols onto spoken language along the ventral visual stream

JSH Taylor, Matt Davis, Kathleen Rastle
2020
Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 wk of training, participants read the trained words while neural activity was measured with functional MRI.
Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity, and representations became more invariant to position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.

orthography | fMRI | representation | learning | reading

Reading acquisition requires the brain to abstract away from the visual forms of written words to access spoken language information. This abstraction requires encoding distinctive information about each visual symbol (e.g., "d" has a circle to the left, and "b" has a circle to the right), but in a way that permits recognition irrespective of variations in case, font, size (1, 2), or position in a word (e.g., the b in Cab is the same as the B in Bad) (3). For skilled readers, this process culminates in an inextricable link between the perception of a word's visual form and the stored linguistic knowledge it represents (4). The current study delineates how representations along the ventral visual stream support this transformation.

Neuroimaging research suggests that abstraction away from veridical visual form in reading is achieved by left ventral occipitotemporal cortex (vOT). Neural priming effects are observed in this region for cross-case (e.g., rage−RAGE) and location-shifted (e.g., #RAGE−RAGE#) written word pairs (5, 6). Patterns of activity across voxels in left vOT are also more similar for pairs of letters with the same abstract identity (e.g., R and r) than for letter pairs sharing visual, phonological, or motoric features (7). Dehaene et al. (8) proposed that, from posterior to anterior left vOT, neural representations become increasingly invariant to retinal location and encode increasingly complex orthographic information. Supporting this, along this axis, left vOT shows a gradient of selectivity for the word likeness of written forms (9). Representations in middle-to-anterior left vOT also appear to be sensitive to higher-level language information (10–12). For example, this region shows masked neural priming effects for word−picture pairs that have the same spoken form and represent the same concept (e.g., a picture of a lion primed the word LION, and vice versa; ref. 13). However, while existing research implicates left vOT in encoding important information during reading, the nature of the representations that support this process is not well specified.

The current study used representational similarity analysis (RSA) of brain responses measured with functional MRI (fMRI) to delineate how the vOT processing stream encodes information about written words to support computation of higher-level language information. In particular, we sought to uncover how vOT represents letter identity and position, and the extent to which representations along this pathway come to capture word sounds and meanings. To do so, we trained participants for 2 wk to read 2 sets of pseudowords constructed from 2 different artificial orthographies.
Each item had a distinct meaning and comprised 4 symbols: 3 representing the pseudoword's phonemes and a final silent symbol. Phonemes and semantic categories were shared between the 2 orthographies and, for each participant, one orthography had a systematic mapping between the final symbol of each word and the word's semantic category (see SI Appendix, SI Methods for details). This allowed us to manipulate word form, sound, and meaning (Fig. 1) in a manner that would be hard to achieve in natural languages (however, see refs. 12 and 14). Following training, we examined the multivoxel patterns of fMRI responses (for an illustration of this method, see ref. 7) evoked when participants covertly retrieved the meanings of the newly learned written words (see Fig. 2 for the scanning paradigm). Our analyses (see Fig. 3 for predicted models of similarity) sought to determine whether and how representations in vOT capture the separate orthographic, phonological, and semantic similarity across newly learned words; a schematic sketch of this analysis follows the Significance statement below.

Significance

Learning to read is the most important milestone in a child's education. However, controversies remain regarding how readers' brains transform written words into sounds and meanings. We address these by combining artificial language learning with neuroimaging to reveal how the brain represents written words. Participants learned to read new words written in 2 different alphabets. Following 2 wk of training, we found a hierarchy of brain areas that support reading: letter position is represented more flexibly from lower to higher visual regions, and higher visual regions encode information about word sounds and meanings. These findings advance our understanding of how the brain comprehends language from arbitrary visual symbols.
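As a rough, illustrative sketch of the representational similarity analysis described above (not code from the study), the Python snippet below builds two model representational dissimilarity matrices (RDMs) from letter overlap, one position-specific and one position-invariant, and compares each against a neural RDM computed from per-item voxel patterns. The word list, voxel count, random placeholder patterns, correlation-distance metric, and Spearman comparison are all assumptions chosen for illustration; the study's actual stimuli, data, and analysis details are in SI Appendix.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical items: 4-symbol letter strings standing in for the trained
# pseudowords (the actual stimuli used artificial symbols).
words = ["abcd", "abed", "fbcd", "fghd", "aghi", "fgei", "abci", "fbed"]
n = len(words)

def letter_dissimilarity(w1, w2, position_specific=True):
    """Dissimilarity based on letter overlap between two equal-length words.

    position_specific=True counts letters matching in the same slot;
    position_specific=False counts shared letters regardless of position,
    mirroring the position-invariant coding probed in the study.
    """
    if position_specific:
        shared = sum(a == b for a, b in zip(w1, w2))
    else:
        shared = len(set(w1) & set(w2))
    return 1.0 - shared / max(len(w1), len(w2))

def model_rdm(position_specific):
    # Condensed (upper-triangle) RDM, matching scipy's pdist layout.
    return np.array([
        letter_dissimilarity(words[i], words[j], position_specific)
        for i in range(n) for j in range(i + 1, n)
    ])

# Hypothetical neural data: one multivoxel pattern per item (n items x voxels),
# standing in for fMRI response estimates from a vOT region of interest.
patterns = rng.standard_normal((n, 500))
neural_rdm = pdist(patterns, metric="correlation")  # 1 - Pearson r per pair

# Compare the neural RDM with each model RDM via Spearman rank correlation,
# a standard RSA statistic over the condensed pairwise entries.
for label, spec in [("position-specific", True), ("position-invariant", False)]:
    rho, p = spearmanr(neural_rdm, model_rdm(spec))
    print(f"{label:>18}: rho = {rho:+.3f}, p = {p:.3f}")
```

Under the study's logic, a reliably stronger fit for the position-invariant model in anterior vOT, alongside a position-specific fit in posterior vOT, would indicate increasing position invariance along the posterior-to-anterior hierarchy; analogous model RDMs built from shared phonemes or shared semantic categories would test whether anterior representations capture sounds and meanings.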
doi:10.17863/cam.51804