Phonology in the bilingual and bidialectal lexicon
A conversation between two people can only take place if the words intended by each speaker are successfully recognized. Spoken word recognition is therefore at the heart of language comprehension. Although automatic and seemingly effortless, this process remains a challenge for models of spoken word recognition: both the process of mapping the speech signal onto stored word representations and the format of those representations themselves are subject to debate. So far, research on the nature of spoken word representations has focused mainly on native speakers. The picture becomes even more complex when looking at spoken word recognition in a second language. Given that most of the world’s speakers know and use more than one language, it is crucial to reach a more precise understanding of how bilingual and multilingual individuals encode spoken words in the mental lexicon, and why spoken word recognition is more difficult in a second language than in the native language.

Current models of native spoken word recognition operate under two assumptions: (i) that listeners’ perception of the incoming speech signal is optimal; and (ii) that listeners’ lexical representations are accurate. As a result, lexical representations are easily activated, and intended words are successfully recognized. These assumptions are compromised, however, when applied to a later-learned second language. For a variety of reasons (e.g., phonetic/phonological, orthographic), second language users may not perceive the speech signal optimally, and they may still be refining the motor routines needed for articulation. Accordingly, their lexical representations may differ from those of native speakers, which may in turn hinder their selection of the intended word forms. Second language users also face a larger selection challenge, since they have words in more than one language to choose from. Thus, for second language users, the links between perception, lexical representations, orthography, and production are far from clear. Even for simultaneous bilinguals, important questions remain about the specificity and interdependence of their lexical representations and the factors influencing cross-language word activation.

This Frontiers Research Topic seeks to further our understanding of the factors that determine how multilinguals recognize and encode spoken words in the mental lexicon, with a focus on the mapping between the input and lexical representations, and on the quality of those representations.