Inter- and intra-hemispheric interactions in reading ambiguous words

While we know that regions in the two cerebral hemispheres play different roles in language processing, and specifically in processing written words, it is not yet clear what the inter- and intra-hemispheric interactions during this process are. When reading written words, orthographic, phonological and semantic representations are all activated (Dehaene, 2009, pp. 176–193; Dehaene et al., 2002; Harm & Seidenberg, 2004; Seidenberg, 2005). While phonological processes are thought to be mostly left lateralized, access to semantic representations is suggested to be more bilaterally distributed (Hickok & Poeppel, 2007). Furthermore, several lines of research suggest that processing of semantically ambiguous words relies to a greater extent on right hemisphere regions compared to unambiguous words (Bitan et al., 2017; Mason & Just, 2007; Peleg & Eviatar, 2008, 2009). Nevertheless, this pattern was shown to also depend on other properties of the word, such as its phonological ambiguity (Peleg & Eviatar, 2008, 2009).

The aim of the present study was to investigate the directional interactions between regions in the two cerebral hemispheres during the reading of ambiguous and unambiguous words that differ in the mapping between orthography, semantics, and phonology. By utilizing effective connectivity analysis of fMRI data (collected for our previous study, Bitan et al., 2017), this study can provide deeper understanding of the relationships among these language components, and the neural mechanisms underlying reading.

Neurocognitive studies and models of reading agree on the set of left hemisphere (LH) regions taking part in single word recognition (e.g., Carreiras et al., 2014; Dehaene, 2009, pp. 176–193; Jobard et al., 2003). Nevertheless, the specific role played by different regions, which depends on their interactions with other regions in the network, is still unclear (Carreiras et al., 2014). For example, while it is agreed that the region in the left occipito-temporal cortex, labeled the visual word form area (VWFA) (Dehaene, 2009, pp. 176–193; Dehaene et al., 2002), is critical for orthographic processing in early stages of word recognition, there is a debate on the extent to which this region is affected by top-down phonological and semantic information during reading (Carreiras et al., 2014; Dehaene & Cohen, 2011; Price & Devlin, 2011). Furthermore, while there is evidence for differential involvement of dorsal and ventral aspects of the left inferior frontal gyrus (LIFG) in phonological and semantic processes, respectively (Bokde et al., 2001; Clos et al., 2013; Dehaene, 2009, pp. 176–193; Heim, Eickhoff, & Amunts, 2009; Price, 2012), the main direction of information flow between these regions during word reading remains unclear. On the one hand, some theoretical models propose that reading individual words prioritizes either the phonological or the semantic pathway (Frost, 2012; Rayner et al., 2001). On the other hand, other models suggest that both phonological and semantic representations are activated simultaneously and reciprocally interact with each other during reading (Dehaene, 2009, pp. 176–193; Seidenberg & McClelland, 1989).

Despite the strong left lateralization of the reading network, there is also evidence for the involvement of the right hemisphere (RH) in reading (Duncan et al., 2014; Seghier et al., 2011; Van der Haegen et al., 2012). Orthographic, phonological and semantic processes are thought to be differentially distributed across hemispheres during processing of written words (Lindell, 2006; Lindell & Lum, 2008; Vigneau et al., 2011). This is consistent with models of speech perception (Hickok & Poeppel, 2000, 2004, 2007), suggesting bilateral distribution of semantic processing in ventral areas, and left lateralized distribution of phonological processing in dorsal areas. Thus, given the differential involvement of right and left hemisphere regions in different aspects of word reading, we ask what the interactions between regions in the two hemispheres during reading are. Models of auditory processing suggest that LH regions suppress homotopic areas in the RH via transcallosal pathways (Galaburda et al., 1990; Nowicka et al., 1996; Nowicka & Tacikowski, 2011; Westerhausen et al., 2006). The concept of transcallosal inhibition (Netz et al., 1995; Selnes, 2000) was very influential in the research of language recovery following LH brain damage (Heiss et al., 2003; Naeser et al., 2005). Nevertheless, effective connectivity studies of oral language processing have not found support for this suggestion, and instead show excitatory inter-hemispheric connectivity between homotopic language related regions (Bitan et al., 2010; Chu et al., 2018). In the current study we examine inter- and intra-hemispheric connectivity during reading of single words.

It has been suggested that reading semantically ambiguous words relies on bilateral regions to a greater extent than processing of unambiguous words (Bitan et al., 2017; Mason & Just, 2007; Peleg & Eviatar, 2008, 2009). For English ambiguous words, multiple meanings are typically associated with a single phonological form (Gottlob et al., 1999; Perfetti & Hart, 2001), hence labeled homophonic homographs (e.g., ‘bank’ can refer to either a financial institution or a riverside). While there are more than 1500 homophonic homographs in English (Leinenger & Rayner, 2013), there are only between 20 and 100 heterophonic homographs (Carley, 2013; Gottlob et al., 1999), in which the multiple meanings also have different phonological form (such as in ‘tear’).

The Hebrew writing system is different in several aspects. It is written from right to left, and it is an abjad, in which written words mainly consist of consonants, and many of the vowels are omitted in writing. As a result, the Hebrew writing system is more opaque than many alphabetical writing systems, which may have some effect on the neurocognitive pathways involved in reading (Rueckl et al., 2015; Yael et al., 2015). Importantly, the omission of vowels from the written word results in around 23–30% heterophonic homographs (Bar-On et al., 2017, 2021; Shimron & Sivan, 1994). For example, the word "ספר" (spelled: SFR) can be read as either /SEFER/ or /SAPAR/ (among other pronunciations), which mean ‘book’ and ‘barber’, respectively. Our previous behavioral and neuroimaging studies suggest that while the meanings of homophonic homographs are resolved only at the semantic level, heterophonic homographs are resolved at the phonological rather than at the semantic level (Bitan et al., 2017; Peleg & Eviatar, 2008, 2009, 2012). The goal of the present study is to utilize the unique characteristics of ambiguous Hebrew words to examine the connections between regions in the two hemispheres which are involved in phonological and semantic processing.

Behavioral studies on reading English homographs that used the Divided Visual Field technique have suggested that the multiple meanings of an ambiguous word are only briefly activated in the LH, quickly replaced by the dominant meaning alone, whereas, in the RH, multiple meanings are activated and maintained irrespective of their frequency (Faust & Chiarello, 1998; Jung-Beeman, 2005). These findings are in line with the coarse semantic coding hypothesis (Beeman, 1998; Jung-Beeman, 2005), which assumes a more diffuse semantic activation in the RH. Interestingly, while Hebrew homophonic homographs showed the same hemispheric differences described above, for Hebrew heterophonic homographs the dominant meaning alone was activated when the word was presented to the LH (Peleg & Eviatar, 2009). These results suggest that homophonic homographs and heterophonic homographs are processed differently across hemispheres, and that pre-semantic phonological disambiguation of heterophonic homographs inhibits activation of their subordinate meanings (Peleg & Eviatar, 2009). This is suggested to be a result of hemispheric differences in the connections between orthographic, phonological, and semantic representations (Peleg & Eviatar, 2008, 2009, 2012). Thus, in the current study we compared the inter- and intra-hemispheric connectivity associated with the processing of two types of ambiguous words: homophonic and heterophonic homographs.

Previous neuroimaging studies showed enhanced activation in left, right or bilateral IFG in the processing of written ambiguous words and sentences as compared to unambiguous ones (Bitan et al., 2017; Mason & Just, 2007; Rodd, 2017; Rodd et al., 2005). Our previous study (Bitan et al., 2017) compared reading of Hebrew homophonic homographs, heterophonic homographs, and unambiguous words in a semantic relatedness judgment task. We dissociated a first phase, in which participants read the homograph in isolation (e.g., the homophonic homograph “bank”), from a second phase, in which the target word was presented, participants judged whether it was related to the homograph, and the ambiguity was resolved. The target word could be related to the frequent (dominant) meaning of the homograph (e.g., “money”), to its less frequent (subordinate) meaning (e.g., “river”), or not related at all (e.g., “cat”). When homographs were presented in isolation, there was greater activation in bilateral IFG pars orbitalis (ORB) for homophonic homographs than for heterophonic homographs (Bitan et al., 2017). Consistent with the role of ORB in semantic processing (Bokde et al., 2001; Dehaene, 2009, pp. 176–193; Heim, Eickhoff, & Amunts, 2009; Price, 2012), and with the bilateral distribution of semantic processes (Hickok & Poeppel, 2007), this finding suggests that reading homophonic homographs induces a conflict between semantic representations. In contrast, reading of heterophonic homographs resulted in greater activation in left IFG pars opercularis (L.OPER) compared to homophonic homographs. Consistent with the role of L.OPER in phonological processing (Bokde et al., 2001; Clos et al., 2013; Dehaene, 2009, pp. 176–193; Heim, Eickhoff, Ischebeck, et al., 2009; Hickok & Poeppel, 2000, 2004, 2007; Price, 2012), this finding suggests that reading heterophonic homographs induces a phonological rather than a semantic conflict.
Very few studies have investigated inter-hemispheric connectivity between sub-regions in the left and right IFG (Seghier et al., 2011), and no study has investigated connectivity during the processing of ambiguous words.

The aim of the present study was to test the connectivity between brain regions in the two hemispheres associated with orthographic, phonological, and semantic processing. By using homophonic and heterophonic homographs, as well as unambiguous words, which differ in the mapping of orthography, phonology, and semantics, we aim to shed light on the relationships between these language components during reading. Furthermore, identifying the connections between regions involved in language processing in the two hemispheres would increase our understanding of the differential role of the two hemispheres during reading.

In the present study we used Dynamic Causal Modeling (DCM) to examine the effective inter- and intra-hemispheric connectivity among six bilateral regions: LIFG pars opercularis (L.OPER) and pars orbitalis (L.ORB), and their right hemisphere counterparts (R.OPER and R.ORB), which were found to be differentially affected by the presentation of homophonic homographs and heterophonic homographs (Bitan et al., 2017); as well as the VWFA in the LH and its RH homologue (R.VWFA), which were found to be activated across all ambiguous and unambiguous words (Bitan et al., 2017). Connectivity was tested only during phase #1 of the original study, during which words were presented in isolation, without the disambiguating context.
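To make the structure of this six-region model space concrete, the sketch below enumerates the directed connections among the regions named above and classifies each as intra-hemispheric, homotopic inter-hemispheric (e.g., L.OPER to R.OPER), or heterotopic inter-hemispheric. This is purely an illustrative sketch of the model space, not the actual DCM specification used in the analysis; the region labels follow the text, but the code itself is our own.

```python
# Illustrative sketch (not the authors' actual DCM specification):
# enumerate candidate directed connections among the six bilateral
# regions and classify each by connection type.

from itertools import permutations

REGIONS = ["L.VWFA", "L.OPER", "L.ORB", "R.VWFA", "R.OPER", "R.ORB"]

def hemisphere(region: str) -> str:
    return region.split(".")[0]          # "L" or "R"

def area(region: str) -> str:
    return region.split(".")[1]          # "VWFA", "OPER", or "ORB"

def classify(src: str, dst: str) -> str:
    if hemisphere(src) == hemisphere(dst):
        return "intra-hemispheric"
    if area(src) == area(dst):
        return "homotopic inter-hemispheric"
    return "heterotopic inter-hemispheric"

# All 30 ordered (source, destination) pairs among the six regions.
connections = {(s, d): classify(s, d) for s, d in permutations(REGIONS, 2)}

counts: dict[str, int] = {}
for kind in connections.values():
    counts[kind] = counts.get(kind, 0) + 1

print(counts)
# 12 intra-hemispheric, 6 homotopic inter-hemispheric,
# and 12 heterotopic inter-hemispheric directed connections.
```

The homotopic pairs in this enumeration (VWFA, OPER, and ORB across hemispheres) are the connections on which the transcallosal inhibition question bears, while the intra-hemispheric pairs correspond to the bottom-up and top-down pathways discussed below.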

For unambiguous words, we asked whether bottom-up projections from the VWFA would be predominantly directed to areas involved in phonological (L.OPER) or semantic (L.ORB, R.ORB) processing, and whether the VWFA receives top-down feedback from these frontal regions. We also asked whether the inter-hemispheric connections between homotopic areas are inhibitory. Although the transcallosal suppression model, which posits inhibition between LH language areas and their RH homologues (Galaburda et al., 1990; Nowicka et al., 1996; Nowicka & Tacikowski, 2011; Westerhausen et al., 2006), was very influential in interpreting indirect findings of language recovery (Heiss et al., 2003; Naeser et al., 2005), there is no direct evidence for this model from neuroimaging studies (Bitan et al., 2010; Chu et al., 2018).

Based on the finding that heterophonic homograph processing relied mainly on the L.OPER, associated with phonological processing (Bitan et al., 2017), we expected that the main direction of information flow during reading of heterophonic homographs would be from L.VWFA to L.OPER and from L.OPER to L.ORB, indicating that the resolution of the phonological conflict precedes activation of semantic representations. These connections were expected to be stronger in the LH than in the RH. In contrast, for homophonic homographs, which showed unique activation in bilateral ORB, associated with semantic processing (Bitan et al., 2017), we expected to find more direct connections from VWFA to ORB in both hemispheres, which would be stronger than the connectivity between VWFA and OPER. This would indicate direct access to semantic representations for the resolution of the semantic conflict. We also expected to find inter-hemispheric connections between left and right ORB for homophonic homographs, consistent with the notion of bilateral distribution of semantic representations.

Finally, we tested the association between specific connectivity patterns during phase #1 and participants' reaction times to target words in phase #2. Previous behavioral findings (Burgess & Simpson, 1988; Faust & Chiarello, 1998; Peleg & Eviatar, 2009), as well as the coarse semantic coding hypothesis, suggest that for homophonic homographs, the subordinate meaning is more easily accessible in the RH compared to the LH. We therefore expected that connectivity from right to left hemisphere regions would predict faster responses to the subordinate, but not to the dominant, meaning of homophonic homographs. We did not expect such a dissociation between the subordinate and dominant meanings of heterophonic homographs, which are not expected to involve inter-hemispheric connectivity.
