A new study finds that when hearing spoken Spanish, listeners who are fluent in Spanish Sign Language activate the equivalent translations in their signed language, showing that linguistic comprehension crosses not only different languages but also different modalities.

When a bilingual person speaks one language, are they simultaneously accessing the other? Can the knowledge of one language interfere with the knowledge of another? Does a phonologically similar word in one language prime access to the other? These questions have long been investigated in linguistic experiments, but none quite like the 2016 study by Villameriel, Dias, Costello, and Carreiras. In a study that assessed cross-language activation in conjunction with cross-modality activation, Villameriel et al. examined whether subjects could make semantic judgments about spoken Spanish word pairs faster when the word pairs were also related in their second language, Spanish Sign Language (Lengua de Signos Española, LSE). In other words, does performing a task in spoken Spanish activate Spanish Sign Language not only across two different languages, but also across the two different modalities of auditory and visual production?

In particular, Villameriel et al. hoped to account for the age of Spanish Sign Language acquisition in their results, something similar studies had not previously done. Because the age of language learning shapes how the mental lexicon is organized and connected to other languages, hearing individuals who learned Spanish Sign Language natively from their deaf parents may process language differently from those who learned LSE later as a second language. To account for these potential differences, Villameriel et al. divided their study into two experiments with separate subjects: the first with hearing, native bilinguals of spoken Spanish and LSE who had deaf parents, and the second with hearing speakers of Spanish who learned LSE late. Both groups were made up of professional sign language interpreters and were therefore highly proficient in LSE. A control group of monolingual spoken Spanish speakers provided baseline results.

Both experiments used the semantic relatedness paradigm, in which subjects judge whether two words in a pair have any level of semantic relation, including antonymy, synonymy, hypernymy, hyponymy, and associative relations. Sixty-four word pairs were presented to the subjects as spoken Spanish audio recordings: 32 pairs were semantically related and 32 were not. Within each of these two sets, half of the pairs had Spanish Sign Language translations that were phonologically related and half had translations with no such relation. Signs counted as phonologically related if they shared at least two of the four parameters of handshape, orientation, movement, and location. After hearing each pair, subjects responded via keyboard to indicate whether the word meanings were related in any way.

Figure 1: Examples of experimental stimuli. Spoken Spanish word pairs were semantically related or unrelated; within each condition, the word pairs mapped to Spanish Sign Language translations that were phonologically related or unrelated.
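To make the 2×2 design concrete, here is a minimal Python sketch of the stimulus structure and the phonological-relatedness criterion described above. The `Sign` class, the parameter values, and the two sample signs are hypothetical illustrations, not stimuli from the actual study.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """A sign described by the four phonological parameters used in the study."""
    handshape: str
    orientation: str
    movement: str
    location: str

def phonologically_related(a: Sign, b: Sign) -> bool:
    """Two signs count as phonologically related if they share
    at least two of the four parameters."""
    shared = sum([
        a.handshape == b.handshape,
        a.orientation == b.orientation,
        a.movement == b.movement,
        a.location == b.location,
    ])
    return shared >= 2

# The 2x2 design: 64 spoken word pairs, 16 per cell.
conditions = [(sem, phon)
              for sem in ("semantically related", "semantically unrelated")
              for phon in ("signs phonologically related", "signs phonologically unrelated")]

# Hypothetical example: two signs sharing handshape and location (2 of 4 parameters).
sign_a = Sign(handshape="flat", orientation="palm down", movement="arc", location="chin")
sign_b = Sign(handshape="flat", orientation="palm up", movement="straight", location="chin")
print(phonologically_related(sign_a, sign_b))  # True
```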

Results demonstrated that, despite different ages of acquisition, both native and late learners of LSE showed a significant interaction between semantic relatedness in spoken Spanish and phonological relatedness in LSE. Both groups of LSE signers were quicker to respond to semantically related Spanish word pairs when their LSE translations were also phonologically related, demonstrating a facilitatory effect of accessing both languages at the same time. Signers were also slower to respond to semantically unrelated words when their LSE translations were phonologically related, showing an inhibitory effect when the two languages' phonological and semantic relationships did not align. As expected, monolingual controls showed no such effect, since they had no knowledge of Spanish Sign Language to implicitly access. These results suggest that bimodal bilinguals do activate LSE signs while processing spoken Spanish words: related spoken pairs with related translated signs were judged faster than those without, and vice versa for unrelated spoken pairs. Interestingly, the effect of the signed translations did not differ between native and late learners of LSE, despite neural evidence that age of acquisition affects language processing. The experiment was conducted in the subjects' dominant language, spoken Spanish, and the non-dominant signed language consequently inhibited or facilitated the dominant-language judgments. While a dominant language might be expected to interfere with the processing of a learned, non-native language, in this experiment the second language impacted the processing of the first.

Figure 2: Results from Experiment 1 with native bimodal bilinguals and hearing monolingual controls. Bilinguals' mean reaction time was significantly faster when word pairs were both semantically related in spoken Spanish and phonologically related in LSE, and significantly slower when the pairs were semantically unrelated in spoken Spanish but phonologically related in LSE. As expected, reaction times for the monolinguals, who did not know Spanish Sign Language, did not vary with the signs' phonological relation.
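The facilitation and inhibition effects amount to a crossover interaction in reaction times. The short sketch below illustrates that logic with made-up millisecond values; the numbers are placeholders chosen only to mirror the qualitative pattern reported in the paper, not data from the study.

```python
# Hypothetical mean reaction times (ms) for the four cells of the design;
# only the qualitative pattern, not the values, reflects the reported results.
mean_rt = {
    ("sem_related", "phon_related"): 950,      # fastest cell: facilitation
    ("sem_related", "phon_unrelated"): 1000,
    ("sem_unrelated", "phon_related"): 1100,   # slowest cell: inhibition
    ("sem_unrelated", "phon_unrelated"): 1050,
}

# Facilitation: phonologically related signs speed up "related" judgments.
facilitation = (mean_rt[("sem_related", "phon_unrelated")]
                - mean_rt[("sem_related", "phon_related")])

# Inhibition: phonologically related signs slow down "unrelated" judgments.
inhibition = (mean_rt[("sem_unrelated", "phon_related")]
              - mean_rt[("sem_unrelated", "phon_unrelated")])

print(f"facilitation: {facilitation} ms, inhibition: {inhibition} ms")
# Both differences are positive: the effect of sign phonology reverses
# with semantic relatedness, the signature of a crossover interaction.
```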

This study is the first of its kind to make such a strong case for parallel activation in hearing bimodal bilinguals. The robust interaction between spoken Spanish and LSE may be explained by the breadth of parameters used to define phonological relation in sign pairs. A past study of spoken and signed German used only sign handshape or location to form phonologically related pairs, both of which have been suggested to be less perceptually salient than similarities in movement. With handshape or location as the sole parameters, the German study found only the inhibitory, not the facilitatory, effect of second-language interaction. Other studies have investigated similar cross-language, cross-modal activation, but presented English word pairs in written rather than spoken form, a less valid test for hearing bilinguals. Because deaf bimodal bilinguals link printed words to signs when learning how to read, they showed stronger activation in such written-word tasks than hearing bilinguals, who need to hear spoken English to experience a similar activation. In Villameriel et al.'s study, it was therefore important to present stimuli in the primary modality of the spoken language (speech) and to match subjects in language experience and ability, making the study noticeably different from those that came before.

With spoken Spanish and LSE each relying on a different primary modality, this study demonstrates that parallel activation can occur not only across languages, but across modalities. Some theories suggest that signs remain readily available while using spoken language, whereas spoken language is suppressed while signing. This is based on the phenomenon of code-blending, in which hearing signers speaking with someone, even a non-signer, simultaneously produce semantically equivalent signs within the syntactic structure of their spoken language, but do not vocalize their spoken language while signing. This asymmetry may help explain the strong effects of sign activation during spoken-language tasks found in this study, and it invites further investigation into the activation of spoken language during signing tasks.

Nowhere in the procedure were the subjects instructed to connect their judgments to signed translations, nor was their ability to sign even mentioned until after the experiment was completed. The activation of LSE was implicit and naturally occurring, suggesting that separate languages regularly interact with each other without provocation. Because all bilingual subjects were professional sign language interpreters, however, it remains to be investigated whether this claim holds for signers in the general population.

At its core, this study is the first of its kind to so strongly demonstrate not only cross-language but cross-modal activation in bimodal bilinguals. Because signing and speaking cannot overlap in phonological features, this evidence suggests that cross-language activation occurs at the lexical or semantic level of language. Not only do bilinguals activate the lexicons and semantics of different languages simultaneously, they traverse different modalities of language production to do so, demonstrating the remarkable processing power and adaptability of the brain in linguistic production and comprehension.

References:

Villameriel, S., Dias, P., Costello, B., & Carreiras, M. (2016). Cross-language and cross-modal activation in hearing bimodal bilinguals. Journal of Memory and Language, 87, 59–70.